[ { "msg_contents": "This fixes bug #18456 [1]. Since we're in back-branch release freeze,\nI'll just park it for the moment. But I think we should shove it in\nonce the freeze lifts so it's in 17beta1.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/18456-82d3d70134aefd28%40postgresql.org", "msg_date": "Sat, 04 May 2024 16:16:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Fix for recursive plpython triggers" }, { "msg_contents": "On 5/4/24 10:16 PM, Tom Lane wrote:\n> This fixes bug #18456 [1]. Since we're in back-branch release freeze,\n> I'll just park it for the moment. But I think we should shove it in\n> once the freeze lifts so it's in 17beta1.\nThere is a similar issue with the return type (at least if it is a \ngeneric record) in the code but it is hard to trigger with sane code so \nI don't know if it is worth fixing but this and the bug Jacques found \nshows the downsides of the hacky fix for recursion that we have in plpython.\n\nI found this issue while reading the code, so am very unclear if there \nis any sane code which could trigger it.\n\nIn the example below the recursive call to f('int') changes the return \ntype of the f('text') call causing it to fail.\n\n# CREATE OR REPLACE FUNCTION f(t text) RETURNS record LANGUAGE \nplpython3u AS $$\nif t == \"text\":\n plpy.execute(\"SELECT * FROM f('int') AS (a int)\");\n return { \"a\": \"x\" }\nelif t == \"int\":\n return { \"a\": 1 }\n$$;\nCREATE FUNCTION\n\n# SELECT * FROM f('text') AS (a text);\nERROR: invalid input syntax for type integer: \"x\"\nCONTEXT: while creating return value\nPL/Python function \"f\"\n\nAndreas\n\n\n", "msg_date": "Wed, 8 May 2024 09:03:01 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix for recursive plpython triggers" }, { "msg_contents": "Andreas Karlsson <[email protected]> writes:\n> I found this issue while reading the code, so am very unclear if there \n> is any sane code which could trigger it.\n\n> In the example below the recursive call to f('int') changes the return \n> type of the f('text') call causing it to fail.\n\n> # CREATE OR REPLACE FUNCTION f(t text) RETURNS record LANGUAGE \n> plpython3u AS $$\n> if t == \"text\":\n> plpy.execute(\"SELECT * FROM f('int') AS (a int)\");\n> return { \"a\": \"x\" }\n> elif t == \"int\":\n> return { \"a\": 1 }\n> $$;\n> CREATE FUNCTION\n\n> # SELECT * FROM f('text') AS (a text);\n> ERROR: invalid input syntax for type integer: \"x\"\n> CONTEXT: while creating return value\n> PL/Python function \"f\"\n\nOh, nice one. I think we can fix this trivially though: the problem\nis that RECORD return-type setup was stuck into PLy_function_build_args,\nwhere it has no particular business being in the first place, rather\nthan being done at the point of use. We can just move the code.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 08 May 2024 11:51:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix for recursive plpython triggers" } ]
[ { "msg_contents": "Dear PostgreSQL Hackers,\n\nI hope this email finds you well. As I delve into the codebase, I've\nencountered some challenges understanding how routes are implemented within\nthe application.\n\nAs I navigate through the codebase, I've encountered some challenges\nunderstanding how routes are implemented within pgAdmin4. While I've made\nefforts to study the documentation and examine the source code, I'm still\nstruggling to grasp certain aspects, particularly in files like roles\n__init__.py.\n\nComing from a different stack, I'm accustomed to certain patterns and\nconventions which seem to differ in Flask. Specifically, I cannot locate\nthe familiar @blueprint.route decorator in these files, which has left me\nsomewhat perplexed.\n\nThank you very much for your time and assistance. I look forward to hearing\nfrom you at your earliest convenience.\n\nDear PostgreSQL Hackers,I hope this email finds you well. As I delve into the codebase, I've encountered some challenges understanding how routes are implemented within the application.As I navigate through the codebase, I've encountered some challenges understanding how routes are implemented within pgAdmin4. While I've made efforts to study the documentation and examine the source code, I'm still struggling to grasp certain aspects, particularly in files like roles __init__.py.Coming from a different stack, I'm accustomed to certain patterns and conventions which seem to differ in Flask. Specifically, I cannot locate the familiar @blueprint.route decorator in these files, which has left me somewhat perplexed.Thank you very much for your time and assistance. I look forward to hearing from you at your earliest convenience.", "msg_date": "Sun, 5 May 2024 10:09:29 +0500", "msg_from": "Ahmad Mehmood <[email protected]>", "msg_from_op": true, "msg_subject": "Help regarding figuring out routes in pgAdmin4" } ]
[ { "msg_contents": "Hi hackers,\n\nMore or less by chance, I stumbled on a Security Technical Implementation\nGuide (STIG, promulgated by the US Dept. of Defense, Defense Information\nSystems Agency) for PostgreSQL (specific to PG 9.x, so a bit dated).\n\nThere is a rule in the STIG that pertains to PLs, and seems to get\nbackwards the meaning of 'trusted'/'untrusted' for those:\n\n However, the use of procedural languages within PostgreSQL, such as pl/R\n and pl/Python, introduce security risk. Any user on the PostgreSQL who\n is granted access to pl/R or pl/Python is able to run UDFs to escalate\n privileges and perform unintended functions. Procedural languages such\n as pl/Perl and pl/Java have \"untrusted\" mode of operation, which do not\n allow a non-privileged PostgreSQL user to escalate privileges or perform\n actions as a database administrator.\n\nNaturally, that should refer to the \"trusted\" mode of operation as the\none with measures to prevent escalation.\n\nThere doesn't seem to be much substantively wrong with the rule,\nas long as one reads 'trusted' for 'untrusted'.\n\nNot being sure that the doc I had stumbled on was a latest edition,\nI found four PostgreSQL-related current STIGs published at [0]:\n\n [1] Crunchy Data PostgreSQL STIG - Ver 2, Rel 2\n [2] EDB Postgres Advanced Server STIG\n [3] EDB Postgres Advanced Server v11 for Windows STIG - Ver 2, Rel 3\n [4] PostgreSQL 9.x STIG - Ver 2, Rel 4\n\nThe problem usage of 'untrusted' for 'trusted' is present in both [1]\nand [4]. There is no corresponding rule in [2] at all, so the issue\ndoes not arise there.\n\nIn [3], interestingly, the corresponding rule has a much more extended\ndiscussion that uses 'trusted' / 'untrusted' correctly, includes snippets\nof SQL to query for routines of interest, and so on. It does seem to have\na minor problem of its own, though: it advises querying for roles that\nmight be granted USAGE on an untrusted PL. AFAICS, one isn't even\nallowed to GRANT USAGE on an untrusted PL, and that's been so all the\nway back to 7.3.\n\nThe four STIGs suggest the same email address [5] for comments or\nproposed revisions. I could send these comments there myself, but\nI thought it likely that others in the community have already been\ninvolved in the development of those documents and might have better\nconnections.\n\nRegards,\n-Chap\n\n\n[0] https://public.cyber.mil/stigs/downloads/\n[1]\nhttps://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_CD_PGSQL_V2R2_STIG.zip\n[2] https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_EPAS_V1R1_STIG.zip\n[3]\nhttps://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_EDB_PGS_Advanced_Server_v11_Windows_V2R3_STIG.zip\n[4]\nhttps://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_PGS_SQL_9-x_V2R4_STIG.zip\n[5] mailto:[email protected]\n\n\n", "msg_date": "Sun, 5 May 2024 13:53:57 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": true, "msg_subject": "'trusted'/'untrusted' PL in DoD/DISA PostgreSQL STIGs" }, { "msg_contents": "On 5/5/24 13:53, Chapman Flack wrote:\n> The four STIGs suggest the same email address [5] for comments or\n> proposed revisions. I could send these comments there myself, but\n> I thought it likely that others in the community have already been\n> involved in the development of those documents and might have better\n> connections.\n\nThose docs were developed by the respective companies (Crunchy and EDB) \nin cooperation with DISA. The community has nothing to do with them. 
I \nsuggest you contact the two companies with corrections and suggestions.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sun, 5 May 2024 19:22:46 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'trusted'/'untrusted' PL in DoD/DISA PostgreSQL STIGs" } ]
[ { "msg_contents": "Hello hackers,\n\nWhile trying to catch a sporadic regression test failure, I've discovered\nthat tests tsearch and advisory_lock in the parallel_schedule's group:\ntest: select_views ...  tsearch ... advisory_lock indirect_toast equivclass\n\nmight fail, depending on timing, because the test equivclass creates and\ndrops int alias operators.\n\nWith the attached patch applied to make tests in question repeatable, the\nfollowing test run fails for me:\n(printf 'test: test_setup\\n'; \\\n  printf 'test: tsearch equivclass %.0s\\n' `seq 300`}) >/tmp/test_schedule;\nmake -s check-tests TESTS=\"--schedule=/tmp/test_schedule\"\n...\n# 17 of 601 tests failed.\n\nregression.diffs contains:\ndiff -U3 .../src/test/regress/expected/tsearch.out .../src/test/regress/results/tsearch.out\n--- .../src/test/regress/expected/tsearch.out       2024-05-06 08:11:54.892649407 +0000\n+++ .../src/test/regress/results/tsearch.out        2024-05-06 08:13:35.514113420 +0000\n@@ -16,10 +16,7 @@\n  WHERE prsnamespace = 0 OR prsstart = 0 OR prstoken = 0 OR prsend = 0 OR\n        -- prsheadline is optional\n        prslextype = 0;\n- oid | prsname\n------+---------\n-(0 rows)\n-\n+ERROR:  cache lookup failed for type 18517\n  SELECT oid, dictname\n...\n\nOr with advisory_lock:\n(printf 'test: test_setup\\n'; \\\n  printf 'test: advisory_lock equivclass %.0s\\n' `seq 300`}) >/tmp/test_schedule;\nmake -s check-tests TESTS=\"--schedule=/tmp/test_schedule\"\n...\n# 1 of 601 tests failed.\n\nregression.diffs contains:\ndiff -U3 .../src/test/regress/expected/advisory_lock.out .../src/test/regress/results/advisory_lock.out\n--- .../src/test/regress/expected/advisory_lock.out 2024-04-10 14:36:57.709586678 +0000\n+++ .../src/test/regress/results/advisory_lock.out  2024-05-06 08:15:09.235456794 +0000\n@@ -14,40 +14,17 @@\n  SELECT locktype, classid, objid, objsubid, mode, granted\n         FROM pg_locks WHERE locktype = 'advisory' AND database = :datoid\n         ORDER BY classid, objid, objsubid;\n- locktype | classid | objid | objsubid |     mode      | granted\n-----------+---------+-------+----------+---------------+---------\n- advisory |       0 |     1 |        1 | ExclusiveLock | t\n- advisory |       0 |     2 |        1 | ShareLock     | t\n- advisory |       1 |     1 |        2 | ExclusiveLock | t\n- advisory |       2 |     2 |        2 | ShareLock     | t\n-(4 rows)\n-\n+ERROR:  cache lookup failed for type 17976\n  -- pg_advisory_unlock_all() shouldn't release xact locks\n...\n\nWith backtrace_functions = 'getBaseTypeAndTypmod' specified,\nI see the following stack trace of the error:\n2024-05-06 08:30:51.344 UTC client backend[869479] pg_regress/tsearch ERROR:  cache lookup failed for type 18041\n2024-05-06 08:30:51.344 UTC client backend[869479] pg_regress/tsearch BACKTRACE:\ngetBaseTypeAndTypmod at lsyscache.c:2550:4\ngetBaseType at lsyscache.c:2526:1\nfind_coercion_pathway at parse_coerce.c:3131:18\ncan_coerce_type at parse_coerce.c:598:6\nfunc_match_argtypes at parse_func.c:939:6\noper_select_candidate at parse_oper.c:327:5\noper at parse_oper.c:428:6\nmake_op at parse_oper.c:696:30\ntransformBoolExpr at parse_expr.c:1431:9\n  (inlined by) transformExprRecurse at parse_expr.c:226:13\ntransformExpr at parse_expr.c:133:22\ntransformWhereClause at parse_clause.c:1867:1\ntransformSelectStmt at analyze.c:1382:20\n  (inlined by) transformStmt at analyze.c:368:15\nparse_analyze_fixedparams at analyze.c:110:15\npg_analyze_and_rewrite_fixedparams at postgres.c:691:5\n  (inlined by) 
exec_simple_query at postgres.c:1190:20\nPostgresMain at postgres.c:4680:7\nBackendMain at backend_startup.c:61:2\n\nBest regards,\nAlexander", "msg_date": "Mon, 6 May 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Test equivclass interferes with tests tsearch and advisory_lock" } ]
[ { "msg_contents": "On 2024-04-03 Wn 04:21, Andrew Dunstan\n> I don't think a cast that doesn't cater for all the forms json can take\nis going to work very well. At the very least you would need to error out\nin cases you didn't want to cover, and have tests for all of > > those\nerrors. But the above is only a tiny fraction of those. If the error cases\nare going to be so much more than the cases that work it seems a bit\npointless.\nHi everyone\nI changed my mail account to be officially displayed in the correspondence.\nI also made an error conclusion if we are given an incorrect value. I\nbelieve that such a cast is needed by PostgreSQL since we already have\nseveral incomplete casts, but they perform their duties well and help in\nthe right situations.\n\ncheers\n\nAntoine", "msg_date": "Mon, 6 May 2024 17:39:13 +0700", "msg_from": "=?UTF-8?B?0JDQvdGC0YPQsNC9INCS0LjQvtC70LjQvQ==?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension for PostgreSQL cast jsonb to hstore WIP" } ]
[ { "msg_contents": "Greetings,\n\nI've been asked by the debezium developers if it is possible to include\nxid8 in the logical replication protocol. Are there any previous threads on\nthis topic?\n\nAny reason why we wouldn't include the epoch ?\n\nDave Cramer\n\nGreetings,I've been asked by the debezium developers if it is possible to include xid8 in the logical replication protocol. Are there any previous threads on this topic?Any reason why we wouldn't include the epoch ?Dave Cramer", "msg_date": "Mon, 6 May 2024 08:34:22 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Possible to include xid8 in logical replication" } ]
[ { "msg_contents": "One-line Summary: There are use cases for long table names so people might\nuse Oracle and MS SQL Server because Postgres does not support table names\nlonger than 63 characters, so the max character limit should be increased\nto 128 or higher.\n\nBusiness Use-case: I want to create a table\nnamed things_that_take_up_a_lot_of_storage_and_space_on_a_computer_and_hard_drive\nof 75 characters. I also want to create a column\nnamed thing_that_takes_up_a_lot_of_storage_and_space_on_a_computer_and_hard_drive_id\nof 78 characters. People have asked for this feature before. For more\ndetails, please see and visit\nhttps://www.reddit.com/r/PostgreSQL/comments/6kyyev/i_have_hit_the_table_name_length_limit_a_number/.\nIn particular, see\nhttps://www.reddit.com/r/PostgreSQL/comments/6kyyev/i_have_hit_the_table_name_length_limit_a_number/#:~:text=been%20wondering%20the%20same\nwhich states I've been wondering the same. It comes down to changing\nNAMEDATALEN documented here:\nhttps://www.postgresql.org/docs/9.3/static/runtime-config-preset.html and\nsomeone made a patch that changes it to 256 characters:\nhttps://gist.github.com/langner/5c7bc1d74a8b957cab26\n\nThe Postgres people seem to be resistant to changing this. I accept that\nwhere possible one should keep identifiers short but descriptive but I\nhaven't seen a good reason WHY it shouldn't be changed since there are many\ninstances when remaining within 63 characters is quite difficult.\n\nSincerely,\n\nPeter Burbery\n\nOne-line Summary: There are use cases for long table names so people might use Oracle and MS SQL Server because Postgres does not support table names longer than 63 characters, so the max character limit should be increased to 128 or higher.Business Use-case: I want to create a table named things_that_take_up_a_lot_of_storage_and_space_on_a_computer_and_hard_drive of 75 characters. I also want to create a column named thing_that_takes_up_a_lot_of_storage_and_space_on_a_computer_and_hard_drive_id of 78 characters. People have asked for this feature before. For more details, please see and visit https://www.reddit.com/r/PostgreSQL/comments/6kyyev/i_have_hit_the_table_name_length_limit_a_number/. In particular, see https://www.reddit.com/r/PostgreSQL/comments/6kyyev/i_have_hit_the_table_name_length_limit_a_number/#:~:text=been%20wondering%20the%20same which states I've been wondering the same. It comes down to changing NAMEDATALEN documented here: https://www.postgresql.org/docs/9.3/static/runtime-config-preset.html and someone made a patch that changes it to 256 characters: https://gist.github.com/langner/5c7bc1d74a8b957cab26The Postgres people seem to be resistant to changing this. I accept that where possible one should keep identifiers short but descriptive but I haven't seen a good reason WHY it shouldn't be changed since there are many instances when remaining within 63 characters is quite difficult.Sincerely,Peter Burbery", "msg_date": "Mon, 6 May 2024 08:49:51 -0400", "msg_from": "Peter Burbery <[email protected]>", "msg_from_op": true, "msg_subject": "Increase the length of identifers from 63 characters to 128\n characters or more" }, { "msg_contents": "On Monday, May 6, 2024, Peter Burbery <[email protected]> wrote:\n\n>\n> Business Use-case: I want to create a table named things_that_take_up_a_\n> lot_of_storage_and_space_on_a_computer_and_hard_drive of 75 characters. 
I\n> also want to create a column named thing_that_takes_up_a_\n> lot_of_storage_and_space_on_a_computer_and_hard_drive_id of 78\n> characters. People have asked for this feature before.\n>\n>\nWe have a mailing list archive. You should do the same research of past\nrequests and discussions on it since that is where you will find the\ndevelopers involved in the discussion and can find out the past reasoning\nas to why this hasn’t been changed.\n\n David J.\n\nOn Monday, May 6, 2024, Peter Burbery <[email protected]> wrote:Business Use-case: I want to create a table named things_that_take_up_a_lot_of_storage_and_space_on_a_computer_and_hard_drive of 75 characters. I also want to create a column named thing_that_takes_up_a_lot_of_storage_and_space_on_a_computer_and_hard_drive_id of 78 characters. People have asked for this feature before.We have a mailing list archive.  You should do the same research of past requests and discussions on it since that is where you will find the developers involved in the discussion and can find out the past reasoning as to why this hasn’t been changed. David J.", "msg_date": "Mon, 6 May 2024 05:55:45 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increase the length of identifers from 63 characters to 128\n characters or more" }, { "msg_contents": "On 2024-May-06, David G. Johnston wrote:\n\n> On Monday, May 6, 2024, Peter Burbery <[email protected]> wrote:\n> \n> >\n> > Business Use-case: I want to create a table named things_that_take_up_a_\n> > lot_of_storage_and_space_on_a_computer_and_hard_drive of 75 characters. I\n> > also want to create a column named thing_that_takes_up_a_\n> > lot_of_storage_and_space_on_a_computer_and_hard_drive_id of 78\n> > characters. People have asked for this feature before.\n>\n> We have a mailing list archive. You should do the same research of past\n> requests and discussions on it since that is where you will find the\n> developers involved in the discussion and can find out the past reasoning\n> as to why this hasn’t been changed.\n\nThere's been a lot of discussion on this topic -- the most recent seems\nto be [1] which will lead you to the older patches by John Naylor at [2].\n\nIn short, it's not that we're completely opposed to it, just that\nsomebody needs to do some more work in order to have a good\nimplementation for it.\n\n[1] https://www.postgresql.org/message-id/324703.1696948627%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/flat/CALSd-crdmj9PGdvdioU%3Da5W7P%3DTgNmEB2QP9wiF6DTUbBuMXrQ%40mail.gmail.com\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n", "msg_date": "Mon, 6 May 2024 15:23:22 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increase the length of identifers from 63 characters to 128\n characters or more" } ]
[ { "msg_contents": "In March, I noticed that a backend got stuck overnight doing:\n\nbackend_start | 2024-03-27 22:34:12.774195-07\nxact_start | 2024-03-27 22:34:39.741933-07\nquery_start | 2024-03-27 22:34:41.997276-07\nstate_change | 2024-03-27 22:34:41.997307-07\nwait_event_type | IO\nwait_event | DataFileExtend\nstate | active\nbackend_xid | 330896991\nbackend_xmin | 330896991\nquery_id | -3255287420265634540\nquery | PREPARE mml_1 AS INSERT INTO child.un...\n\nThe backend was spinning at 100% CPU:\n\n[pryzbyj@telsa2021 ~]$ ps -O wchan,pcpu 7881\n PID WCHAN %CPU S TTY TIME COMMAND\n 7881 ? 99.4 R ? 08:14:55 postgres: telsasoft ts [local] INSERT\n\nThis was postgres 16 STABLE compiled at 14e991db8.\n\nIt's possible that this is a rare issue that we haven't hit before.\nIt's also possible this this is a recent regression. We previously\ncompiled at b2c9936a7 without hitting any issue (possibly by chance).\n\nI could neither strace the process nor attach a debugger. They got\nstuck attaching. Maybe it's possible there's a kernel issue. This is a\nVM running centos7 / 3.10.0-1160.49.1.el7.x86_64.\n\n$ awk '{print $14, $15}' /proc/7881/stat # usr / sys\n229 3088448\n\nWhen I tried to shut down postgres (hoping to finally be able to attach\na debugger), instead it got stuck:\n\n$ ps -fu postgres\nUID PID PPID C STIME TTY TIME CMD\npostgres 7881 119674 99 mar27 ? 08:38:06 postgres: telsasoft ts [local] INSERT\npostgres 119674 1 0 mar25 ? 00:07:13 /usr/pgsql-16/bin/postgres -D /var/lib/pgsql/16/data/\npostgres 119676 119674 0 mar25 ? 00:00:11 postgres: logger \npostgres 119679 119674 0 mar25 ? 00:11:56 postgres: checkpointer \n\nThis first occurred on Mar 27, but I see today that it's now recurring\nat a different customer:\n\nbackend_start | 2024-05-05 22:19:17.009477-06\nxact_start | 2024-05-05 22:20:18.129305-06\nquery_start | 2024-05-05 22:20:19.409555-06\nstate_change | 2024-05-05 22:20:19.409556-06\npid | 18468\nwait_event_type | IO\nwait_event | DataFileExtend\nstate | active\nbackend_xid | 4236249136\nbackend_xmin | 4236221661\nquery_id | 2601062835886299431\nleft | PREPARE mml_1 AS INSERT INTO chil\n\nThis time it's running v16.2 (REL_16_STABLE compiled at b78fa8547),\nunder centos7 / 3.10.0-1160.66.1.el7.x86_64.\n\nThe symptoms are the same: backend stuck using 100% CPU in user mode:\n[pryzbyj@telsasoft-centos7 ~]$ awk '{print $14, $15}' /proc/18468/stat # usr / sys\n364 2541168\n\nThis seems to have affected other backends, which are now waiting on\nRegisterSyncRequest, frozenid, and CheckpointerComm.\n\nIn both cases, the backend got stuck after 10pm, which is when a backup\njob kicks off, followed by other DB maintenance. Our backup job uses\npg_export_snapshot() + pg_dump --snapshot. In today's case, the pg_dump\nwould've finished and snapshot closed at 2023-05-05 22:15. The backup\njob did some more pg_dumps involving historic tables without snapshots\nand finished at 01:11:40, at which point a reindex job started, but it\nlooks like the DB was already stuck for the purpose of reindex, and so\nthe script ended after a handful of commands were \"[canceled] due to\nstatement timeout\".\n\nFull disclosure: the VM that hit this issue today has had storage-level\nerrors (reported here at ZZqr_GTaHyuW7fLp@pryzbyj2023), as recently as 3\ndays ago.\n\nMaybe more importantly, this VM also seems to suffer from some memory\nleak, and the leaky process was Killed shortly after the stuck query\nstarted. 
This makes me suspect a race condition which was triggered\nwhile swapping:\n\nMay 5 22:24:05 localhost kernel: Out of memory: Kill process 17157 (python3.8) score 134 or sacrifice child\n\nWe don't have as good logs from March, but I'm not aware of killed\nprocesses on the VM where we hit this in March, but it's true that the\nI/O there is not fast.\n\nAlso, I fibbed when I said these were compiled at 16 STABLE - I'd\nbackpatched a small number of patches from master:\n\na97bbe1f1df Reduce branches in heapgetpage()'s per-tuple loop\n98f320eb2ef Increase default vacuum_buffer_usage_limit to 2MB.\n44086b09753 Preliminary refactor of heap scanning functions\n959b38d770b Invent --transaction-size option for pg_restore.\na45c78e3284 Rearrange pg_dump's handling of large objects for better efficiency.\n9d1a5354f58 Fix costing bug in MergeAppend\na5cf808be55 Read include/exclude commands for dump/restore from file\n8c16ad3b432 Allow using syncfs() in frontend utilities.\ncccc6cdeb32 Add support for syncfs() in frontend support functions.\n3ed19567198 Make enum for sync methods available to frontend code.\nf39b265808b Move PG_TEMP_FILE* macros to file_utils.h.\na14354cac0e Add GUC parameter \"huge_pages_status\"\n\nI will need to restart services this morning, but if someone wants to\nsuggest diagnostic measures, I will see whether the command gets stuck\nor not.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 May 2024 09:05:38 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "backend stuck in DataFileExtend" }, { "msg_contents": "Hi,\n\nOn 2024-05-06 09:05:38 -0500, Justin Pryzby wrote:\n> In March, I noticed that a backend got stuck overnight doing:\n> \n> backend_start | 2024-03-27 22:34:12.774195-07\n> xact_start | 2024-03-27 22:34:39.741933-07\n> query_start | 2024-03-27 22:34:41.997276-07\n> state_change | 2024-03-27 22:34:41.997307-07\n> wait_event_type | IO\n> wait_event | DataFileExtend\n> state | active\n> backend_xid | 330896991\n> backend_xmin | 330896991\n> query_id | -3255287420265634540\n> query | PREPARE mml_1 AS INSERT INTO child.un...\n> \n> The backend was spinning at 100% CPU:\n> \n> [pryzbyj@telsa2021 ~]$ ps -O wchan,pcpu 7881\n> PID WCHAN %CPU S TTY TIME COMMAND\n> 7881 ? 99.4 R ? 08:14:55 postgres: telsasoft ts [local] INSERT\n> \n> This was postgres 16 STABLE compiled at 14e991db8.\n> \n> It's possible that this is a rare issue that we haven't hit before.\n> It's also possible this this is a recent regression. We previously\n> compiled at b2c9936a7 without hitting any issue (possibly by chance).\n> \n> I could neither strace the process nor attach a debugger. They got\n> stuck attaching. Maybe it's possible there's a kernel issue. This is a\n> VM running centos7 / 3.10.0-1160.49.1.el7.x86_64.\n\n> $ awk '{print $14, $15}' /proc/7881/stat # usr / sys\n> 229 3088448\n> \n> When I tried to shut down postgres (hoping to finally be able to attach\n> a debugger), instead it got stuck:\n\nThat strongly indicates either a kernel bug or storage having an issue. It\ncan't be postgres' fault if an IO never completes.\n\nWhat do /proc/$pid/stack say?\n\n\n> In both cases, the backend got stuck after 10pm, which is when a backup\n> job kicks off, followed by other DB maintenance. Our backup job uses\n> pg_export_snapshot() + pg_dump --snapshot. In today's case, the pg_dump\n> would've finished and snapshot closed at 2023-05-05 22:15. 
The backup\n> job did some more pg_dumps involving historic tables without snapshots\n> and finished at 01:11:40, at which point a reindex job started, but it\n> looks like the DB was already stuck for the purpose of reindex, and so\n> the script ended after a handful of commands were \"[canceled] due to\n> statement timeout\".\n\nIs it possible that you're \"just\" waiting for very slow IO? Is there a lot of\ndirty memory? Particularly on these old kernels that can lead to very extreme\ndelays.\n\ngrep -Ei 'dirty|writeback' /proc/meminfo\n\n\n> [...]\n> Full disclosure: the VM that hit this issue today has had storage-level\n> errors (reported here at ZZqr_GTaHyuW7fLp@pryzbyj2023), as recently as 3\n> days ago.\n\nSo indeed, my suspicion from above is confirmed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 May 2024 10:04:13 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend stuck in DataFileExtend" }, { "msg_contents": "On Mon, May 06, 2024 at 10:04:13AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-05-06 09:05:38 -0500, Justin Pryzby wrote:\n> > In March, I noticed that a backend got stuck overnight doing:\n> > \n> > backend_start | 2024-03-27 22:34:12.774195-07\n> > xact_start | 2024-03-27 22:34:39.741933-07\n> > query_start | 2024-03-27 22:34:41.997276-07\n> > state_change | 2024-03-27 22:34:41.997307-07\n> > wait_event_type | IO\n> > wait_event | DataFileExtend\n> > state | active\n> > backend_xid | 330896991\n> > backend_xmin | 330896991\n> > query_id | -3255287420265634540\n> > query | PREPARE mml_1 AS INSERT INTO child.un...\n> > \n> > The backend was spinning at 100% CPU:\n> > \n> > [pryzbyj@telsa2021 ~]$ ps -O wchan,pcpu 7881\n> > PID WCHAN %CPU S TTY TIME COMMAND\n> > 7881 ? 99.4 R ? 08:14:55 postgres: telsasoft ts [local] INSERT\n> > \n> > This was postgres 16 STABLE compiled at 14e991db8.\n> > \n> > It's possible that this is a rare issue that we haven't hit before.\n> > It's also possible this this is a recent regression. We previously\n> > compiled at b2c9936a7 without hitting any issue (possibly by chance).\n> > \n> > I could neither strace the process nor attach a debugger. They got\n> > stuck attaching. Maybe it's possible there's a kernel issue. This is a\n> > VM running centos7 / 3.10.0-1160.49.1.el7.x86_64.\n> \n> > $ awk '{print $14, $15}' /proc/7881/stat # usr / sys\n> > 229 3088448\n> > \n> > When I tried to shut down postgres (hoping to finally be able to attach\n> > a debugger), instead it got stuck:\n> \n> That strongly indicates either a kernel bug or storage having an issue. It\n> can't be postgres' fault if an IO never completes.\n\nIs that for sure even though wchan=? (which I take to mean \"not in a system\ncall\"), and the process is stuck in user mode ?\n\n> What do /proc/$pid/stack say?\n\n[pryzbyj@telsasoft-centos7 ~]$ sudo cat /proc/18468/stat\n18468 (postgres) R 2274 18468 18468 0 -1 4857920 91836 0 3985 0 364 3794271 0 0 20 0 1 0 6092292660 941846528 10 18446744073709551615 4194304 12848820 140732995870240 140732995857304 139726958536394 0 4194304 19929088 536896135 0 0 0 17 3 0 0 1682 0 0 14949632 15052146 34668544 140732995874457 140732995874511 140732995874511 140732995874781 0\n\n> > In both cases, the backend got stuck after 10pm, which is when a backup\n> > job kicks off, followed by other DB maintenance. Our backup job uses\n> > pg_export_snapshot() + pg_dump --snapshot. In today's case, the pg_dump\n> > would've finished and snapshot closed at 2023-05-05 22:15. 
The backup\n> > job did some more pg_dumps involving historic tables without snapshots\n> > and finished at 01:11:40, at which point a reindex job started, but it\n> > looks like the DB was already stuck for the purpose of reindex, and so\n> > the script ended after a handful of commands were \"[canceled] due to\n> > statement timeout\".\n> \n> Is it possible that you're \"just\" waiting for very slow IO? Is there a lot of\n> dirty memory? Particularly on these old kernels that can lead to very extreme\n> delays.\n> \n> grep -Ei 'dirty|writeback' /proc/meminfo\n\n[pryzbyj@telsasoft-centos7 ~]$ grep -Ei 'dirty|writeback' /proc/meminfo\nDirty: 28 kB\nWriteback: 0 kB\nWritebackTmp: 0 kB\n\n> > Full disclosure: the VM that hit this issue today has had storage-level\n> > errors (reported here at ZZqr_GTaHyuW7fLp@pryzbyj2023), as recently as 3\n> > days ago.\n> \n> So indeed, my suspicion from above is confirmed.\n\nI'd be fine with that conclusion (as in the earlier thread), except this\nhas now happened on 2 different VMs, and the first one has no I/O\nissues. If this were another symptom of a storage failure, and hadn't\npreviously happened on another VM, I wouldn't be re-reporting it.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 May 2024 12:37:26 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backend stuck in DataFileExtend" }, { "msg_contents": "Hi,\n\nOn 2024-05-06 12:37:26 -0500, Justin Pryzby wrote:\n> On Mon, May 06, 2024 at 10:04:13AM -0700, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2024-05-06 09:05:38 -0500, Justin Pryzby wrote:\n> > > In March, I noticed that a backend got stuck overnight doing:\n> > >\n> > > backend_start | 2024-03-27 22:34:12.774195-07\n> > > xact_start | 2024-03-27 22:34:39.741933-07\n> > > query_start | 2024-03-27 22:34:41.997276-07\n> > > state_change | 2024-03-27 22:34:41.997307-07\n> > > wait_event_type | IO\n> > > wait_event | DataFileExtend\n> > > state | active\n> > > backend_xid | 330896991\n> > > backend_xmin | 330896991\n> > > query_id | -3255287420265634540\n> > > query | PREPARE mml_1 AS INSERT INTO child.un...\n> > >\n> > > The backend was spinning at 100% CPU:\n> > >\n> > > [pryzbyj@telsa2021 ~]$ ps -O wchan,pcpu 7881\n> > > PID WCHAN %CPU S TTY TIME COMMAND\n> > > 7881 ? 99.4 R ? 08:14:55 postgres: telsasoft ts [local] INSERT\n> > >\n> > > This was postgres 16 STABLE compiled at 14e991db8.\n> > >\n> > > It's possible that this is a rare issue that we haven't hit before.\n> > > It's also possible this this is a recent regression. We previously\n> > > compiled at b2c9936a7 without hitting any issue (possibly by chance).\n> > >\n> > > I could neither strace the process nor attach a debugger. They got\n> > > stuck attaching. Maybe it's possible there's a kernel issue. This is a\n> > > VM running centos7 / 3.10.0-1160.49.1.el7.x86_64.\n> >\n> > > $ awk '{print $14, $15}' /proc/7881/stat # usr / sys\n> > > 229 3088448\n> > >\n> > > When I tried to shut down postgres (hoping to finally be able to attach\n> > > a debugger), instead it got stuck:\n> >\n> > That strongly indicates either a kernel bug or storage having an issue. It\n> > can't be postgres' fault if an IO never completes.\n>\n> Is that for sure even though wchan=? 
(which I take to mean \"not in a system\n> call\"), and the process is stuck in user mode ?\n\nPostgres doesn't do anything to prevent a debugger from working, so this is\njust indicative that the kernel is stuck somewhere that it didn't set up\ninformation about being blocked - because it's busy doing something.\n\n\n> > What do /proc/$pid/stack say?\n>\n> [pryzbyj@telsasoft-centos7 ~]$ sudo cat /proc/18468/stat\n> 18468 (postgres) R 2274 18468 18468 0 -1 4857920 91836 0 3985 0 364 3794271 0 0 20 0 1 0 6092292660 941846528 10 18446744073709551615 4194304 12848820 140732995870240 140732995857304 139726958536394 0 4194304 19929088 536896135 0 0 0 17 3 0 0 1682 0 0 14949632 15052146 34668544 140732995874457 140732995874511 140732995874511 140732995874781 0\n\nstack, not stat...\n\n\n> > > Full disclosure: the VM that hit this issue today has had storage-level\n> > > errors (reported here at ZZqr_GTaHyuW7fLp@pryzbyj2023), as recently as 3\n> > > days ago.\n> >\n> > So indeed, my suspicion from above is confirmed.\n>\n> I'd be fine with that conclusion (as in the earlier thread), except this\n> has now happened on 2 different VMs, and the first one has no I/O\n> issues. If this were another symptom of a storage failure, and hadn't\n> previously happened on another VM, I wouldn't be re-reporting it.\n\nIs it the same VM hosting environment? And the same old distro?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 May 2024 10:51:08 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend stuck in DataFileExtend" }, { "msg_contents": "On Mon, May 06, 2024 at 10:51:08AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-05-06 12:37:26 -0500, Justin Pryzby wrote:\n> > On Mon, May 06, 2024 at 10:04:13AM -0700, Andres Freund wrote:\n> > > Hi,\n> > >\n> > > On 2024-05-06 09:05:38 -0500, Justin Pryzby wrote:\n> > > > In March, I noticed that a backend got stuck overnight doing:\n> > > >\n> > > > backend_start | 2024-03-27 22:34:12.774195-07\n> > > > xact_start | 2024-03-27 22:34:39.741933-07\n> > > > query_start | 2024-03-27 22:34:41.997276-07\n> > > > state_change | 2024-03-27 22:34:41.997307-07\n> > > > wait_event_type | IO\n> > > > wait_event | DataFileExtend\n> > > > state | active\n> > > > backend_xid | 330896991\n> > > > backend_xmin | 330896991\n> > > > query_id | -3255287420265634540\n> > > > query | PREPARE mml_1 AS INSERT INTO child.un...\n> > > >\n> > > > The backend was spinning at 100% CPU:\n> > > >\n> > > > [pryzbyj@telsa2021 ~]$ ps -O wchan,pcpu 7881\n> > > > PID WCHAN %CPU S TTY TIME COMMAND\n> > > > 7881 ? 99.4 R ? 08:14:55 postgres: telsasoft ts [local] INSERT\n> > > >\n> > > > This was postgres 16 STABLE compiled at 14e991db8.\n> > > >\n> > > > It's possible that this is a rare issue that we haven't hit before.\n> > > > It's also possible this this is a recent regression. We previously\n> > > > compiled at b2c9936a7 without hitting any issue (possibly by chance).\n> > > >\n> > > > I could neither strace the process nor attach a debugger. They got\n> > > > stuck attaching. Maybe it's possible there's a kernel issue. This is a\n> > > > VM running centos7 / 3.10.0-1160.49.1.el7.x86_64.\n> > >\n> > > > $ awk '{print $14, $15}' /proc/7881/stat # usr / sys\n> > > > 229 3088448\n> > > >\n> > > > When I tried to shut down postgres (hoping to finally be able to attach\n> > > > a debugger), instead it got stuck:\n> > >\n> > > That strongly indicates either a kernel bug or storage having an issue. 
It\n> > > can't be postgres' fault if an IO never completes.\n> >\n> > Is that for sure even though wchan=? (which I take to mean \"not in a system\n> > call\"), and the process is stuck in user mode ?\n> \n> Postgres doesn't do anything to prevent a debugger from working, so this is\n> just indicative that the kernel is stuck somewhere that it didn't set up\n> information about being blocked - because it's busy doing something.\n> \n> \n> > > What do /proc/$pid/stack say?\n> >\n> > [pryzbyj@telsasoft-centos7 ~]$ sudo cat /proc/18468/stat\n> > 18468 (postgres) R 2274 18468 18468 0 -1 4857920 91836 0 3985 0 364 3794271 0 0 20 0 1 0 6092292660 941846528 10 18446744073709551615 4194304 12848820 140732995870240 140732995857304 139726958536394 0 4194304 19929088 536896135 0 0 0 17 3 0 0 1682 0 0 14949632 15052146 34668544 140732995874457 140732995874511 140732995874511 140732995874781 0\n> \n> stack, not stat...\n\nAh, that is illuminating - thanks.\n\n[pryzbyj@telsasoft-centos7 ~]$ sudo cat /proc/18468/stack \n[<ffffffffaa2d7856>] __cond_resched+0x26/0x30\n[<ffffffffc10af35e>] dbuf_rele+0x1e/0x40 [zfs]\n[<ffffffffc10bb730>] dmu_buf_rele_array.part.6+0x40/0x70 [zfs]\n[<ffffffffc10bd96a>] dmu_write_uio_dnode+0x11a/0x180 [zfs]\n[<ffffffffc10bda24>] dmu_write_uio_dbuf+0x54/0x70 [zfs]\n[<ffffffffc11abd1b>] zfs_write+0xb9b/0xfb0 [zfs]\n[<ffffffffc11ed202>] zpl_aio_write+0x152/0x1a0 [zfs]\n[<ffffffffaa44dadb>] do_sync_readv_writev+0x7b/0xd0\n[<ffffffffaa44f62e>] do_readv_writev+0xce/0x260\n[<ffffffffaa44f855>] vfs_writev+0x35/0x60\n[<ffffffffaa44fc12>] SyS_pwritev+0xc2/0xf0\n[<ffffffffaa999f92>] system_call_fastpath+0x25/0x2a\n[<ffffffffffffffff>] 0xffffffffffffffff\n\nFWIW: both are running zfs-2.2.3 RPMs from zfsonlinux.org.\n\nIt's surely possible that there's an issue that affects older kernels\nbut not more recent ones.\n\n> > > > Full disclosure: the VM that hit this issue today has had storage-level\n> > > > errors (reported here at ZZqr_GTaHyuW7fLp@pryzbyj2023), as recently as 3\n> > > > days ago.\n> > >\n> > > So indeed, my suspicion from above is confirmed.\n> >\n> > I'd be fine with that conclusion (as in the earlier thread), except this\n> > has now happened on 2 different VMs, and the first one has no I/O\n> > issues. If this were another symptom of a storage failure, and hadn't\n> > previously happened on another VM, I wouldn't be re-reporting it.\n> \n> Is it the same VM hosting environment? And the same old distro?\n\nYes, they're running centos7 with the indicated kernels.\n\ndmidecode shows they're both running:\n\n Product Name: VMware Virtual Platform\n\nBut they're different customers, so I'd be somewhat surprised if they're\nrunning same versions of the hypervisor.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 May 2024 13:21:06 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backend stuck in DataFileExtend" }, { "msg_contents": "On Tue, May 7, 2024 at 6:21 AM Justin Pryzby <[email protected]> wrote:\n> FWIW: both are running zfs-2.2.3 RPMs from zfsonlinux.org.\n...\n> Yes, they're running centos7 with the indicated kernels.\n\nSo far we've got:\n\n* spurious EIO when opening a file (your previous report)\n* hanging with CPU spinning (?) 
inside pwritev()\n* old kernel, bleeding edge ZFS\n\n From an (uninformed) peek at the ZFS code, if it really is spinning\nthere is seems like a pretty low level problem: it's finish the write,\nand now is just trying to release (something like our unpin) and\nunlock the buffers, which involves various code paths that might touch\nvarious mutexes and spinlocks, and to get stuck like that I guess it's\neither corrupted itself or it is deadlocking against something else,\nbut what? Do you see any other processes (including kernel threads)\nwith any stuck stacks that might be a deadlock partner?\n\nWhile looking around for reported issues I found your abandoned report\nagainst an older ZFS version from a few years ago, same old Linux\nversion:\n\nhttps://github.com/openzfs/zfs/issues/11641\n\nI don't know enough to say anything useful about that but it certainly\nsmells similar...\n\nI see you've been busy reporting lots of issues, which seems to\ninvolve big data, big \"recordsize\" (= ZFS block sizes), compression\nand PostgreSQL:\n\nhttps://github.com/openzfs/zfs/issues?q=is%3Aissue+author%3Ajustinpryzby\n\n\n", "msg_date": "Tue, 7 May 2024 10:55:28 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend stuck in DataFileExtend" }, { "msg_contents": "On Tue, May 07, 2024 at 10:55:28AM +1200, Thomas Munro wrote:\n> On Tue, May 7, 2024 at 6:21 AM Justin Pryzby <[email protected]> wrote:\n> > FWIW: both are running zfs-2.2.3 RPMs from zfsonlinux.org.\n> ...\n> > Yes, they're running centos7 with the indicated kernels.\n> \n> So far we've got:\n> \n> * spurious EIO when opening a file (your previous report)\n> * hanging with CPU spinning (?) inside pwritev()\n> * old kernel, bleeding edge ZFS\n> \n> From an (uninformed) peek at the ZFS code, if it really is spinning\n> there is seems like a pretty low level problem: it's finish the write,\n> and now is just trying to release (something like our unpin) and\n> unlock the buffers, which involves various code paths that might touch\n> various mutexes and spinlocks, and to get stuck like that I guess it's\n> either corrupted itself or it is deadlocking against something else,\n> but what? Do you see any other processes (including kernel threads)\n> with any stuck stacks that might be a deadlock partner?\n\nSorry, but even after forgetting several times, I finally remembered to\ngo back to issue, and already rebooted the VM as needed to kill the\nstuck process.\n\nBut .. it seems to have recurred again this AM. 
Note that yesterday,\nI'd taken the opportunity to upgrade to zfs-2.2.4.\n\nThese two procs are the oldest active postgres procs, and also (now)\nadjacent in ps -ef --sort start_time.\n\n-[ RECORD 1 ]----+----------------------------------------------------------------\nbackend_start | 2024-05-07 09:45:06.228528-06\napplication_name | \nxact_start | 2024-05-07 09:55:38.409549-06\nquery_start | 2024-05-07 09:55:38.409549-06\nstate_change | 2024-05-07 09:55:38.409549-06\npid | 27449\nbackend_type | autovacuum worker\nwait_event_type | BufferPin\nwait_event | BufferPin\nstate | active\nbackend_xid | \nbackend_xmin | 4293757489\nquery_id | \nleft | autovacuum: VACUUM ANALYZE child.\n-[ RECORD 2 ]----+----------------------------------------------------------------\nbackend_start | 2024-05-07 09:49:24.686314-06\napplication_name | MasterMetricsLoader -n -m Xml\nxact_start | 2024-05-07 09:50:30.387156-06\nquery_start | 2024-05-07 09:50:32.744435-06\nstate_change | 2024-05-07 09:50:32.744436-06\npid | 5051\nbackend_type | client backend\nwait_event_type | IO\nwait_event | DataFileExtend\nstate | active\nbackend_xid | 4293757489\nbackend_xmin | 4293757429\nquery_id | 2230046159014513529\nleft | PREPARE mml_0 AS INSERT INTO chil\n\nThe earlier proc is doing:\nstrace: Process 27449 attached\nepoll_wait(11, ^Cstrace: Process 27449 detached\n <detached ...>\n\nThe later process is stuck, with:\n[pryzbyj@telsasoft-centos7 ~]$ sudo cat /proc/5051/stack \n[<ffffffffffffffff>] 0xffffffffffffffff\n\nFor good measure:\n[pryzbyj@telsasoft-centos7 ~]$ sudo cat /proc/27433/stack \n[<ffffffffc0600c2e>] taskq_thread+0x48e/0x4e0 [spl]\n[<ffffffff9eec5f91>] kthread+0xd1/0xe0\n[<ffffffff9f599df7>] ret_from_fork_nospec_end+0x0/0x39\n[<ffffffffffffffff>] 0xffffffffffffffff\n[pryzbyj@telsasoft-centos7 ~]$ sudo cat /proc/27434/stack \n[<ffffffffc0600c2e>] taskq_thread+0x48e/0x4e0 [spl]\n[<ffffffff9eec5f91>] kthread+0xd1/0xe0\n[<ffffffff9f599df7>] ret_from_fork_nospec_end+0x0/0x39\n[<ffffffffffffffff>] 0xffffffffffffffff\n\n[pryzbyj@telsasoft-centos7 ~]$ ps -O wchan================ 5051 27449\n PID =============== S TTY TIME COMMAND\n 5051 ? R ? 02:14:27 postgres: telsasoft ts ::1(53708) INSERT\n27449 ep_poll S ? 00:05:16 postgres: autovacuum worker ts\n\nThe interesting procs might be:\n\nps -eO wchan===============,lstart --sort start_time\n...\n15632 worker_thread Mon May 6 23:51:34 2024 S ? 00:00:00 [kworker/2:2H]\n27433 taskq_thread Tue May 7 09:35:59 2024 S ? 00:00:56 [z_wr_iss]\n27434 taskq_thread Tue May 7 09:35:59 2024 S ? 00:00:57 [z_wr_iss]\n27449 ep_poll Tue May 7 09:45:05 2024 S ? 00:05:16 postgres: autovacuum worker ts\n 5051 ? Tue May 7 09:49:23 2024 R ? 02:23:04 postgres: telsasoft ts ::1(53708) INSERT\n 7861 ep_poll Tue May 7 09:51:25 2024 S ? 00:03:04 /usr/local/bin/python3.8 -u /home/telsasoft/server/alarms/core/pr...\n 7912 ep_poll Tue May 7 09:51:27 2024 S ? 00:00:00 postgres: telsasoft ts ::1(53794) idle\n24518 futex_wait_que Tue May 7 10:42:56 2024 S ? 
00:00:55 java -jar /home/telsasoft/server/alarms/alcatel_lucent/jms/jms2rm...\n...\n\n> While looking around for reported issues I found your abandoned report\n> against an older ZFS version from a few years ago, same old Linux\n> version:\n> \n> https://github.com/openzfs/zfs/issues/11641\n> \n> I don't know enough to say anything useful about that but it certainly\n> smells similar...\n\nWow - I'd completely forgotten about that problem report.\nThe symptoms are the same, even with a zfs version 3+ years newer.\nI wish the ZFS people would do more with their problem reports.\n\nBTW, we'll be upgrading this VM to a newer kernel, if not a newer OS\n(for some reason, these projects take a very long time). With any luck,\nit'll either recur, or not.\n\nI'm not sure if any of that is useful, or interesting.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 May 2024 13:54:16 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backend stuck in DataFileExtend" }, { "msg_contents": "On Wed, May 8, 2024 at 6:54 AM Justin Pryzby <[email protected]> wrote:\n> On Tue, May 07, 2024 at 10:55:28AM +1200, Thomas Munro wrote:\n> > https://github.com/openzfs/zfs/issues/11641\n> >\n> > I don't know enough to say anything useful about that but it certainly\n> > smells similar...\n>\n> Wow - I'd completely forgotten about that problem report.\n> The symptoms are the same, even with a zfs version 3+ years newer.\n> I wish the ZFS people would do more with their problem reports.\n\nIf I had to guess, my first idea would be that your 1MB or ginormous\n16MB recordsize (a relatively new option) combined with PostgreSQL's\n8KB block-at-a-time random order I/O patterns are tickling strange\ncorners and finding a bug that no one has seen before. I would\nimagine that almost everyone in the galaxy who uses very large records\ndoes so with 'settled' data that gets streamed out once sequentially\n(for example I think some of the OpenZFS maintainers are at Lawrence\nLivermore National Lab where I guess they might pump around petabytes\nof data produced by particle physics research or whatever it might be,\nprobably why they they are also adding direct I/O to avoid caches\ncompletely...). But for us, if we have lots of backends reading,\nwriting and extending random 8KB fragments of a 16MB page concurrently\n(2048 pages per record!), maybe we hit some broken edge... I'd be\nsure to include that sort of detail in any future reports.\n\nNormally I suppress urges to blame problems on kernels, file systems\netc and in the past accusations that ZFS was buggy turned out to be\nbugs in PostgreSQL IIRC, but user space sure seems to be off the hook\nfor this one...\n\n\n", "msg_date": "Wed, 8 May 2024 13:13:21 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend stuck in DataFileExtend" } ]
[ { "msg_contents": "Currently, it is pretty easy to subvert the restrictions imposed\nby row-level security and security_barrier views. All you have to\nto is use EXPLAIN (ANALYZE) and see how many rows were filtered\nout by the RLS policy or the view condition.\n\nThis is not considered a security bug (I asked), but I still think\nit should be fixed.\n\nMy idea is to forbid EXPLAIN (ANALYZE) for ordinary users whenever\na statement uses either of these features. But restricting it to\nsuperusers would be too restrictive (with a superuser, you can never\nobserve RLS, since superusers are exempt) and it would also be\ndangerous (you shouldn't perform DML on untrusted tables as superuser).\n\nSo I thought we could restrict the use of EXPLAIN (ANALYZE) in these\nsituations to the members of a predefined role. That could be a new\npredefined role, but I think it might as well be \"pg_read_all_stats\",\nsince that role allows you to view sensitive data like the MCV in\npg_statistic, and EXPLAIN (ANALYZE) can be seen as provideing executor\nstatistics.\n\nAttached is a POC patch that implements that (documentation and\nregression tests are still missing) to form a basis for a discussion.\n\nThere are a few things I would like feedback about:\n\n- is it OK to use \"pg_read_all_stats\" for that?\n\n- should the check be moved to standard_ExplainOneQuery()?\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 06 May 2024 16:46:48 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Restrict EXPLAIN (ANALYZE) for RLS and security_barrier views" }, { "msg_contents": "On Mon, 2024-05-06 at 16:46 +0200, Laurenz Albe wrote:\n> Attached is a POC patch that implements that (documentation and\n> regression tests are still missing) to form a basis for a discussion.\n\n... and here is a complete patch with regression tests and documentation.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 15 May 2024 17:12:27 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restrict EXPLAIN (ANALYZE) for RLS and security_barrier views" }, { "msg_contents": "On Mon, 6 May 2024 at 15:46, Laurenz Albe <[email protected]> wrote:\n>\n> Currently, it is pretty easy to subvert the restrictions imposed\n> by row-level security and security_barrier views. All you have to\n> to is use EXPLAIN (ANALYZE) and see how many rows were filtered\n> out by the RLS policy or the view condition.\n>\n> This is not considered a security bug (I asked), but I still think\n> it should be fixed.\n>\n> My idea is to forbid EXPLAIN (ANALYZE) for ordinary users whenever\n> a statement uses either of these features.\n>\n\nHmm, but there are other ways to see how many rows were filtered out:\n\n - Use pg_stat_get_tuples_returned()\n - Use pg_class.reltuples\n - Use the row estimates from a plain EXPLAIN\n\nand probably more.\n\nGiven that this isn't really a security bug, I think EXPLAIN should\nprobably be left as-is. 
Otherwise, you'd have to go round closing all\nthose other \"holes\" too.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 16 Jul 2024 18:36:40 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restrict EXPLAIN (ANALYZE) for RLS and security_barrier views" }, { "msg_contents": "On Tue, 2024-07-16 at 18:36 +0100, Dean Rasheed wrote:\n> On Mon, 6 May 2024 at 15:46, Laurenz Albe <[email protected]> wrote:\n> > \n> > Currently, it is pretty easy to subvert the restrictions imposed\n> > by row-level security and security_barrier views. All you have to\n> > to is use EXPLAIN (ANALYZE) and see how many rows were filtered\n> > out by the RLS policy or the view condition.\n> > \n> > This is not considered a security bug (I asked), but I still think\n> > it should be fixed.\n> > \n> > My idea is to forbid EXPLAIN (ANALYZE) for ordinary users whenever\n> > a statement uses either of these features.\n> \n> Hmm, but there are other ways to see how many rows were filtered out:\n> \n> - Use pg_stat_get_tuples_returned()\n\nThat is true, but it will only work if there is no concurrent DML activity\non the database. Still, I agree that it would be good to improve that,\nfor example by imposing a similar restriction on viewing these statistics.\n\n> - Use pg_class.reltuples\n\nI don't accept that. The estimated row count doesn't tell me anything\nabout the contents of the table.\n\n> - Use the row estimates from a plain EXPLAIN\n\nI don't buy that either. plain EXPLAIN will just tell you how many\nrows PostgreSQL estimates for each node, and it won't tell you how\nmany rows will get filtered out by a RLS policy. Also, these are only\nestimates.\n\n> and probably more.\n> \n> Given that this isn't really a security bug, I think EXPLAIN should\n> probably be left as-is. Otherwise, you'd have to go round closing all\n> those other \"holes\" too.\n\nThe only reason this is not a security bug is because we document these\nweaknesses of row-level security and security barrier views. I don't\nthink that should count as an argument against improving the situation.\n\nWith the exception of the table statistics you mention above, all the\nother ways to derive knowledge about the \"hidden\" data leak very little\ninformation, as does examining the query execution time (which is mentioned\nin the documentation).\n\nThe information you can glean from EXPLAIN (ANALYZE) is much more\nconclusive: \"rows removed by filter: 1\"\n\nThe patch that I am proposing certainly won't close all possibilities\nto make an educated guess about \"hidden\" data, but it will close the\nsimplest way to reliably subvert RLS and security barrier views.\n\nIs that not a worthy goal?\nShouldn't we plug the most glaring hole in PostgreSQL's data security\nfeatures?\n\nI am aware that this change will make performance analysis more\ncumbersome, but that's the price to pay for improved security.\nI'd be ready to look at restricting pg_stat_get_tuples_returned(),\nbut perhaps that should be a separate patch.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 17 Jul 2024 10:30:46 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restrict EXPLAIN (ANALYZE) for RLS and security_barrier views" }, { "msg_contents": "On Mon, 2024-05-06 at 16:46 +0200, Laurenz Albe wrote:\n> Currently, it is pretty easy to subvert the restrictions imposed\n> by row-level security and security_barrier views.  
All you have to\n> to is use EXPLAIN (ANALYZE) and see how many rows were filtered\n> out by the RLS policy or the view condition.\n> \n> This is not considered a security bug (I asked), but I still think\n> it should be fixed.\n> \n> My idea is to forbid EXPLAIN (ANALYZE) for ordinary users whenever\n> a statement uses either of these features.  But restricting it to\n> superusers would be too restrictive (with a superuser, you can never\n> observe RLS, since superusers are exempt) and it would also be\n> dangerous (you shouldn't perform DML on untrusted tables as superuser).\n> \n> So I thought we could restrict the use of EXPLAIN (ANALYZE) in these\n> situations to the members of a predefined role.  That could be a new\n> predefined role, but I think it might as well be \"pg_read_all_stats\",\n> since that role allows you to view sensitive data like the MCV in\n> pg_statistic, and EXPLAIN (ANALYZE) can be seen as provideing executor\n> statistics.\n\nAfter a discussion on the pgsql-security list ([email protected]),\nI am going to mark this patch as rejected.\n\nThe gist of that discussion was that even without EXPLAIN (ANALYZE),\nit is easy enough for a determined attacker who can run arbitrary\nSQL to subvert row-level security.\nTherefore, restricting EXPLAIN (ANALYZE) will do more harm than good,\nsince it will make analyzing query performance harder without a\nreal security gain.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 19 Sep 2024 11:58:33 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Restrict EXPLAIN (ANALYZE) for RLS and security_barrier views" } ]
[ { "msg_contents": "Hi,\r\n\r\nPlease find the draft of the 2024-05-09 release announcement.\r\n\r\nPlease review for corrections and any omissions that should be called \r\nout as part of this release.\r\n\r\nPlease provide feedback no later (and preferably sooner) than 2024-05-09 \r\n12:00 UTC.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Mon, 6 May 2024 13:44:05 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "2024-05-09 release announcement draft" }, { "msg_contents": "Op 5/6/24 om 19:44 schreef Jonathan S. Katz:\n> Hi,\n> \n> Please find the draft of the 2024-05-09 release announcement.\n\n\n'procedures that returns' should be\n'procedures that return'\n\n\n\n", "msg_date": "Mon, 6 May 2024 22:59:51 +0200", "msg_from": "Erik Rijkers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On Tue, 7 May 2024 at 05:44, Jonathan S. Katz <[email protected]> wrote:\n> Please provide feedback no later (and preferably sooner) than 2024-05-09\n> 12:00 UTC.\n\nThanks for the draft. Here's some feedback.\n\n> * Fix [`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html) from\n> multiple [`VALUES`](https://www.postgresql.org/docs/current/sql-values.html)\n> rows into a target column that is a domain over an array or composite type.\n> including requiring the [SELECT privilege](https://www.postgresql.org/docs/current/sql-grant.html)\n> on the target table when using [`MERGE`](https://www.postgresql.org/docs/current/sql-merge.html)\n> when using `MERGE ... DO NOTHING`.\n\nSomething looks wrong with the above. Are two separate items merged\ninto one? 52898c63e and a3f5d2056?\n\n> * Fix confusion for SQL-language procedures that returns a single composite-type\n> column.\n\nShould \"returns\" be singular here?\n\n> * Throw an error if an index is accessed while it is being reindexed.\n\n I know you want to keep these short and I understand the above is the\nsame wording from release notes, but these words sound like a terrible\noversite that we allow any concurrent query to still use the table\nwhile a reindex is in progress. Maybe we should give more detail\nthere so people don't think we made such a silly mistake. The release\nnote version I think does have enough words to allow the reader to\nunderstand that the mistake is more complex. Maybe we could add\nsomething here to make it sound like less of an embarrassing mistake?\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 09:08:57 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "Hi,\n\nIn <[email protected]>,<[email protected]>\n \"2024-05-09 release announcement draft,2024-05-09 release announcement draft\" on Mon, 6 May 2024 13:44:05 -0400,\n \"Jonathan S. Katz\" <[email protected]> wrote:\n\n> * Added optimization for certain operations where an installation has thousands\n> of roles.\n\nAdded ->\nAdd\n\n> * [Follow @postgresql on Twitter](https://twitter.com/postgresql)\n\nTwitter ->\nX\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Tue, 07 May 2024 06:36:17 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft,2024-05-09 release\n announcement draft" }, { "msg_contents": "On 5/6/24 5:08 PM, David Rowley wrote:\r\n> On Tue, 7 May 2024 at 05:44, Jonathan S. 
Katz <[email protected]> wrote:\r\n>> Please provide feedback no later (and preferably sooner) than 2024-05-09\r\n>> 12:00 UTC.\r\n> \r\n> Thanks for the draft. Here's some feedback.\r\n> \r\n>> * Fix [`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html) from\r\n>> multiple [`VALUES`](https://www.postgresql.org/docs/current/sql-values.html)\r\n>> rows into a target column that is a domain over an array or composite type.\r\n>> including requiring the [SELECT privilege](https://www.postgresql.org/docs/current/sql-grant.html)\r\n>> on the target table when using [`MERGE`](https://www.postgresql.org/docs/current/sql-merge.html)\r\n>> when using `MERGE ... DO NOTHING`.\r\n> \r\n> Something looks wrong with the above. Are two separate items merged\r\n> into one? 52898c63e and a3f5d2056?\r\n\r\nUgh, I see what happened. I was originally planning to combine them, and \r\nthen had one be the lede, then the other. Given I ended up consolidating \r\nquite a bit, I'll just have them each stand on their own. I'll fix this \r\nin the next draft (which I'll upload on my Tuesday).\r\n\r\n>> * Fix confusion for SQL-language procedures that returns a single composite-type\r\n>> column.\r\n> \r\n> Should \"returns\" be singular here?\r\n\r\nFixed.\r\n\r\n>> * Throw an error if an index is accessed while it is being reindexed.\r\n> \r\n> I know you want to keep these short and I understand the above is the\r\n> same wording from release notes, but these words sound like a terrible\r\n> oversite that we allow any concurrent query to still use the table\r\n> while a reindex is in progress.\r\n\r\nYeah, I was not happy with this one at all.\r\n\r\n Maybe we should give more detail\r\n> there so people don't think we made such a silly mistake. The release\r\n> note version I think does have enough words to allow the reader to\r\n> understand that the mistake is more complex. Maybe we could add\r\n> something here to make it sound like less of an embarrassing mistake?\r\n\r\nBased on this, I'd vote to just remove it from the release announcement.\r\n\r\nJonathan", "msg_date": "Mon, 6 May 2024 22:58:49 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On 5/6/24 5:36 PM, Sutou Kouhei wrote:\r\n> Hi,\r\n> \r\n> In <[email protected]>,<[email protected]>\r\n> \"2024-05-09 release announcement draft,2024-05-09 release announcement draft\" on Mon, 6 May 2024 13:44:05 -0400,\r\n> \"Jonathan S. Katz\" <[email protected]> wrote:\r\n> \r\n>> * Added optimization for certain operations where an installation has thousands\r\n>> of roles.\r\n> \r\n> Added ->\r\n> Add\r\n\r\nFixed.\r\n\r\n>> * [Follow @postgresql on Twitter](https://twitter.com/postgresql)\r\n> \r\n> Twitter ->\r\n> X\r\n\r\nI think this one is less clear, from browsing around. I think \r\n\"X/Twitter\" is considered acceptable, and regardless the domain is still \r\npointing to \"Twitter\". However, I'll go with the hybrid adjustment.\r\n\r\nJonathan", "msg_date": "Mon, 6 May 2024 23:02:27 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2024-05-09 release announcement draft,2024-05-09 release\n announcement draft" }, { "msg_contents": "\"Jonathan S. 
Katz\" <[email protected]> writes:\n> On 5/6/24 5:08 PM, David Rowley wrote:\n>>> * Throw an error if an index is accessed while it is being reindexed.\n\n>> Maybe we should give more detail\n>> there so people don't think we made such a silly mistake. The release\n>> note version I think does have enough words to allow the reader to\n>> understand that the mistake is more complex. Maybe we could add\n>> something here to make it sound like less of an embarrassing mistake?\n\n> Based on this, I'd vote to just remove it from the release announcement.\n\n+1. This is hardly a major bug fix --- it's just blocking off\nsomething that people shouldn't be doing in the first place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 May 2024 23:04:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On Tue, 7 May 2024 at 14:58, Jonathan S. Katz <[email protected]> wrote:\n> >> * Throw an error if an index is accessed while it is being reindexed.\n> >\n>\n> Based on this, I'd vote to just remove it from the release announcement.\n\nI'd prefer that over leaving the wording the way it is. Looking at\nthe test case in [1], it does not seem like a very likely thing for\npeople to hit. It basically requires someone to be telling lies about\na function's immutability.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Tue, 7 May 2024 15:05:18 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On 5/6/24 11:05 PM, David Rowley wrote:\r\n> On Tue, 7 May 2024 at 14:58, Jonathan S. Katz <[email protected]> wrote:\r\n>>>> * Throw an error if an index is accessed while it is being reindexed.\r\n>>>\r\n>>\r\n>> Based on this, I'd vote to just remove it from the release announcement.\r\n> \r\n> I'd prefer that over leaving the wording the way it is. Looking at\r\n> the test case in [1], it does not seem like a very likely thing for\r\n> people to hit. It basically requires someone to be telling lies about\r\n> a function's immutability.\r\n\r\nI opted for that; and it turned out the other fix was simple, so here's \r\nan updated draft.\r\n\r\nJonathan", "msg_date": "Mon, 6 May 2024 23:09:24 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On Tue, 7 May 2024 at 15:09, Jonathan S. Katz <[email protected]> wrote:\n> I opted for that; and it turned out the other fix was simple, so here's\n> an updated draft.\n\nThanks\n\n> * Fix [`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html) from\n> multiple [`VALUES`](https://www.postgresql.org/docs/current/sql-values.html)\n> rows into a target column that is a domain over an array or composite type.\n\nI know this is the same wording as Tom added in [1], I might just have\nfailed to comprehend something, but if I strip out the links and try\nto make sense of \"Fix INSERT from multiple VALUES rows into\", I just\ncan't figure out how to parse it. I'm pretty sure it means \"Fix\nmultiple-row VALUES clauses with INSERT statements when ...\", but I'm\nnot sure.\n\n> * Require the [SELECT privilege](https://www.postgresql.org/docs/current/sql-grant.html)\n> on the target table when using [`MERGE`](https://www.postgresql.org/docs/current/sql-merge.html)\n> when using `MERGE ... 
DO NOTHING`.\n\nI think the last line should just be \"with `NO NOTHING`\"\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7155cc4a60e7bfc837233b2dea2563a2edc673fd\n\n\n", "msg_date": "Tue, 7 May 2024 15:27:49 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I know this is the same wording as Tom added in [1], I might just have\n> failed to comprehend something, but if I strip out the links and try\n> to make sense of \"Fix INSERT from multiple VALUES rows into\", I just\n> can't figure out how to parse it. I'm pretty sure it means \"Fix\n> multiple-row VALUES clauses with INSERT statements when ...\", but I'm\n> not sure.\n\nThe problem happens in commands like\n\tINSERT INTO tab VALUES (1,2), (3,4), ...\nWe treat this separately from the single-VALUES-row case for\nefficiency reasons.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 May 2024 23:48:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On Tue, 7 May 2024 at 15:48, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > I know this is the same wording as Tom added in [1], I might just have\n> > failed to comprehend something, but if I strip out the links and try\n> > to make sense of \"Fix INSERT from multiple VALUES rows into\", I just\n> > can't figure out how to parse it. I'm pretty sure it means \"Fix\n> > multiple-row VALUES clauses with INSERT statements when ...\", but I'm\n> > not sure.\n>\n> The problem happens in commands like\n> INSERT INTO tab VALUES (1,2), (3,4), ...\n> We treat this separately from the single-VALUES-row case for\n> efficiency reasons.\n\nYeah, I know about the multi-row VALUES. What I'm mostly struggling to\nparse is the \"from\" and the double plural of \"VALUES\" and \"rows\".\nAlso, why is it \"from\" and not \"with\"? I get that \"VALUES\" is a\nkeyword that happens to be plural, but try reading it out loud.\n\nWhy not \"Fix INSERT with multi-row VALUES clauses ...\"?\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 16:02:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Why not \"Fix INSERT with multi-row VALUES clauses ...\"?\n\nTo my mind, the VALUES clause is the data source for INSERT,\nso \"from\" seems appropriate. I'm not going to argue hard\nabout it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 00:16:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On Mon, May 06, 2024 at 11:09:24PM -0400, Jonathan S. 
Katz wrote:\n> * Avoid leaking a query result from [`psql`](https://www.postgresql.org/docs/current/app-psql.html)\n> after the query is cancelled.\n\nI'd delete this item about a psql-lifespan memory leak, because (a) it's so\nminor and (b) there are other reasonable readings of \"leak\" that would make\nthe change look more important.\n\n\n", "msg_date": "Tue, 7 May 2024 11:14:24 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On 5/7/24 12:16 AM, Tom Lane wrote:\r\n> David Rowley <[email protected]> writes:\r\n>> Why not \"Fix INSERT with multi-row VALUES clauses ...\"?\r\n> \r\n> To my mind, the VALUES clause is the data source for INSERT,\r\n> so \"from\" seems appropriate. I'm not going to argue hard\r\n> about it.\r\n\r\nOK, so I've read through this a few times and have sufficiently confused \r\nmyself. So, how about this:\r\n\r\n* Fix how \r\n[`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html) \r\nhandles multiple \r\n[`VALUES`](https://www.postgresql.org/docs/current/sql-values.html) rows \r\ninto a target column that is a domain over an array or composite type.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 8 May 2024 12:17:13 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On Thu, 9 May 2024 at 04:17, Jonathan S. Katz <[email protected]> wrote:\n> * Fix how\n> [`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html)\n> handles multiple\n> [`VALUES`](https://www.postgresql.org/docs/current/sql-values.html) rows\n> into a target column that is a domain over an array or composite type.\n\nMaybe it's only me who thinks the double plural of \"VALUES rows\" is\nhard to parse. If that's the case I'll just drop this as it's not that\nimportant.\n\nFWIW, I'd probably write:\n\n* Fix issue with\n[`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html)\nwith a multi-row\n[`VALUES`](https://www.postgresql.org/docs/current/sql-values.html) clause\nwhere a target column is a domain over an array or composite type.\n\nI'll argue no further with this.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 09:44:02 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2024-05-09 release announcement draft" }, { "msg_contents": "On 5/8/24 5:44 PM, David Rowley wrote:\r\n> On Thu, 9 May 2024 at 04:17, Jonathan S. Katz <[email protected]> wrote:\r\n>> * Fix how\r\n>> [`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html)\r\n>> handles multiple\r\n>> [`VALUES`](https://www.postgresql.org/docs/current/sql-values.html) rows\r\n>> into a target column that is a domain over an array or composite type.\r\n> \r\n> Maybe it's only me who thinks the double plural of \"VALUES rows\" is\r\n> hard to parse. If that's the case I'll just drop this as it's not that\r\n> important.\r\n> \r\n> FWIW, I'd probably write:\r\n> \r\n> * Fix issue with\r\n> [`INSERT`](https://www.postgresql.org/docs/current/sql-insert.html)\r\n> with a multi-row\r\n> [`VALUES`](https://www.postgresql.org/docs/current/sql-values.html) clause\r\n> where a target column is a domain over an array or composite type.\r\n\r\nI like your wording, and went with that.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 8 May 2024 18:04:15 -0400", "msg_from": "\"Jonathan S. 
Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2024-05-09 release announcement draft" } ]
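For context on the INSERT item whose wording is discussed above, a small sketch of the statement shape the fix concerns: a multi-row VALUES list targeting a column whose type is a domain over an array. The domain and table names below are hypothetical, not taken from the release notes.

```sql
-- Hypothetical objects; the point is the target column's type (a domain
-- over an array) combined with a multi-row VALUES clause.
CREATE DOMAIN int_list AS int[] CHECK (array_length(VALUE, 1) <= 10);
CREATE TABLE samples (id int, readings int_list);

INSERT INTO samples (id, readings)
VALUES (1, ARRAY[1, 2, 3]),
       (2, ARRAY[4, 5]);
```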
[ { "msg_contents": "Instead of needing to be explicit, we can just iterate the\npgstat_kind_infos array to find the memory locations to read into.\n\nThis was originally thought of by Andres in\n5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n\nNot a fix, per se, but it does remove a comment. Perhaps the discussion \nwill just lead to someone deleting the comment, and keeping the code \nas is. Either way, a win :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Mon, 06 May 2024 14:07:53 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Mon, May 06, 2024 at 02:07:53PM -0500, Tristan Partin wrote:\n> This was originally thought of by Andres in\n> 5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n\n+1 because you are removing a duplication between the order of the\nitems in PgStat_Kind and the order when these are read. I suspect\nthat nobody would mess up with the order if adding a stats kind with a\nfixed number of objects, but that makes maintenance slightly easier in\nthe long-term :)\n\n> Not a fix, per se, but it does remove a comment. Perhaps the discussion will\n> just lead to someone deleting the comment, and keeping the code as is.\n> Either way, a win :).\n\nYup. Let's leave that as something to do for v18.\n--\nMichael", "msg_date": "Tue, 7 May 2024 11:50:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Mon May 6, 2024 at 9:50 PM CDT, Michael Paquier wrote:\n> On Mon, May 06, 2024 at 02:07:53PM -0500, Tristan Partin wrote:\n> > This was originally thought of by Andres in\n> > 5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n>\n> +1 because you are removing a duplication between the order of the\n> items in PgStat_Kind and the order when these are read. I suspect\n> that nobody would mess up with the order if adding a stats kind with a\n> fixed number of objects, but that makes maintenance slightly easier in\n> the long-term :)\n>\n> > Not a fix, per se, but it does remove a comment. Perhaps the discussion will\n> > just lead to someone deleting the comment, and keeping the code as is.\n> > Either way, a win :).\n>\n> Yup. Let's leave that as something to do for v18.\n\nThanks for the feedback Michael. Should I just throw the patch in the \nnext commitfest so it doesn't get left behind?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 07 May 2024 00:44:51 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Tue, May 07, 2024 at 12:44:51AM -0500, Tristan Partin wrote:\n> Thanks for the feedback Michael. Should I just throw the patch in the next\n> commitfest so it doesn't get left behind?\n\nBetter to do so, yes. I have noted this thread in my TODO list, but\nwe're a couple of weeks away from the next CF and things tend to get\neasily forgotten.\n--\nMichael", "msg_date": "Tue, 7 May 2024 15:01:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Tue May 7, 2024 at 1:01 AM CDT, Michael Paquier wrote:\n> On Tue, May 07, 2024 at 12:44:51AM -0500, Tristan Partin wrote:\n> > Thanks for the feedback Michael. 
Should I just throw the patch in the next\n> > commitfest so it doesn't get left behind?\n>\n> Better to do so, yes. I have noted this thread in my TODO list, but\n> we're a couple of weeks away from the next CF and things tend to get\n> easily forgotten.\n\nAdded here: https://commitfest.postgresql.org/48/4978/\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 07 May 2024 12:47:18 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "Hi,\n\nOn 2024-05-06 14:07:53 -0500, Tristan Partin wrote:\n> Instead of needing to be explicit, we can just iterate the\n> pgstat_kind_infos array to find the memory locations to read into.\n\n> This was originally thought of by Andres in\n> 5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n> \n> Not a fix, per se, but it does remove a comment. Perhaps the discussion will\n> just lead to someone deleting the comment, and keeping the code as is.\n> Either way, a win :).\n\nI think it's a good idea. I'd really like to allow extensions to register new\ntypes of stats eventually. Stuff like pg_stat_statements having its own,\nfairly ... mediocre, stats storage shouldn't be necessary.\n\nDo we need to increase the stats version, I didn't check if the order we\ncurrently store things in and the numerical order of the stats IDs are the\nsame.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 May 2024 11:29:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Tue May 7, 2024 at 1:29 PM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2024-05-06 14:07:53 -0500, Tristan Partin wrote:\n> > Instead of needing to be explicit, we can just iterate the\n> > pgstat_kind_infos array to find the memory locations to read into.\n>\n> > This was originally thought of by Andres in\n> > 5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd.\n> > \n> > Not a fix, per se, but it does remove a comment. Perhaps the discussion will\n> > just lead to someone deleting the comment, and keeping the code as is.\n> > Either way, a win :).\n>\n> I think it's a good idea. I'd really like to allow extensions to register new\n> types of stats eventually. Stuff like pg_stat_statements having its own,\n> fairly ... mediocre, stats storage shouldn't be necessary.\n\nCould be useful for Neon in the future too.\n\n> Do we need to increase the stats version, I didn't check if the order we\n> currently store things in and the numerical order of the stats IDs are the\n> same.\n\nI checked the orders, and they looked the same.\n\n1. Archiver\n2. BgWriter\n3. Checkpointer\n4. IO\n5. SLRU\n6. WAL\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 07 May 2024 14:07:42 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Tue, May 07, 2024 at 02:07:42PM -0500, Tristan Partin wrote:\n> On Tue May 7, 2024 at 1:29 PM CDT, Andres Freund wrote:\n>> I think it's a good idea. I'd really like to allow extensions to register new\n>> types of stats eventually. Stuff like pg_stat_statements having its own,\n>> fairly ... mediocre, stats storage shouldn't be necessary.\n> \n> Could be useful for Neon in the future too.\n\nInteresting thing to consider. 
If you do that, I'm wondering if you\ncould, actually, lift the restriction on pg_stat_statements.max and\nmake it a SIGHUP so as it could be increased on-the-fly.. Hmm, just a\nthought in passing.\n\n>> Do we need to increase the stats version, I didn't check if the order we\n>> currently store things in and the numerical order of the stats IDs are the\n>> same.\n> \n> I checked the orders, and they looked the same.\n> \n> 1. Archiver\n> 2. BgWriter\n> 3. Checkpointer\n> 4. IO\n> 5. SLRU\n> 6. WAL\n\nYup, I've looked at that yesterday and the read order remains the same\nso I don't see a need for a version bump.\n--\nMichael", "msg_date": "Wed, 8 May 2024 10:21:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Wed, May 08, 2024 at 10:21:56AM +0900, Michael Paquier wrote:\n> Yup, I've looked at that yesterday and the read order remains the same\n> so I don't see a need for a version bump.\n\nWhile looking at this stuff again, I got no objections for it except a\nfew edits to make the new code more consistent with the surroundings,\nas in the loop, the use of pgstat_get_kind_info and an extra\nassertion.\n\nStill, I think that we could be more ambitious to make this area of\nthe code more pluggable in the future (fixed shared stats are not\ncovered by my proposal on the other thread about pluggable cumulative\nstats, perhaps it should but I could not think about a case for them).\n\nAnyway, the first thing is to apply the same pattern as the read part\nfor pgstat_write_statsfile(), based on the content in the local stats\nsnapshot PgStat_Snapshot.\n\nSo, how about trying to remove the dependency to the fixed shared\nstats structures in PgStat_ShmemControl and PgStat_Snapshot? I'd like\nto think that these should be replaced with an area allocated in\nshared memory and TopMemoryContext respectively, with PgStat_Snapshot\nand PgStat_ShmemControl pointing to these areas, with an allocated\nsize based on the information aggregated from the KindInfo Array. We\ncould also store the offset of the fixed areas in two extra arrays,\none for each of the two structures, indexed by KindInfo and of size\nPGSTAT_NUM_KINDS.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 1 Jul 2024 14:48:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" }, { "msg_contents": "On Mon, Jul 01, 2024 at 02:48:19PM +0900, Michael Paquier wrote:\n> So, how about trying to remove the dependency to the fixed shared\n> stats structures in PgStat_ShmemControl and PgStat_Snapshot? I'd like\n> to think that these should be replaced with an area allocated in\n> shared memory and TopMemoryContext respectively, with PgStat_Snapshot\n> and PgStat_ShmemControl pointing to these areas, with an allocated\n> size based on the information aggregated from the KindInfo Array. We\n> could also store the offset of the fixed areas in two extra arrays,\n> one for each of the two structures, indexed by KindInfo and of size\n> PGSTAT_NUM_KINDS.\n\nI have been poking at this area, and found a solution that should\nwork. 
The details will be posted on the pluggable stats thread with a\nrebased patch and these bits on top of the pluggable APIs:\nhttps://www.postgresql.org/message-id/Zmqm9j5EO0I4W8dx%40paquier.xyz\n\nSo let's move the talking there.\n--\nMichael", "msg_date": "Tue, 2 Jul 2024 12:23:34 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgstat_kind_infos to read fixed shared stats structs" } ]
[ { "msg_contents": "Hi PostgreSQL Community,\n\nI have been working on partitioned tables recently, and I have noticed\nsomething that doesn't seem correct with the EXPLAIN output of an\nupdate/delete query with a returning list.\n\nFor example, consider two partitioned tables, \"t1\" and \"t2,\" with\npartitions \"t11,\" \"t12,\" and \"t21,\" \"t22,\" respectively. The table\ndefinitions are as follows:\n\n```sql\npostgres=# \\d+ t1\n Partitioned table \"public.t1\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | | | plain |\n| |\n b | integer | | | | plain |\n| |\n c | integer | | | | plain |\n| |\nPartition key: RANGE (a)\nPartitions: t11 FOR VALUES FROM (0) TO (1000),\n t12 FOR VALUES FROM (1000) TO (10000)\n\npostgres=# \\d+ t2\n Partitioned table \"public.t2\"\n Column | Type | Collation | Nullable | Default | Storage | Compression\n| Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | | | plain |\n| |\n b | integer | | | | plain |\n| |\n c | integer | | | | plain |\n| |\nPartition key: RANGE (a)\nPartitions: t21 FOR VALUES FROM (0) TO (1000),\n t22 FOR VALUES FROM (1000) TO (10000)\n```\n\nThe EXPLAIN output for an update query with a returning list doesn't seem\ncorrect to me. Here are the examples (the part that doesn't seem right is\nhighlighted in bold):\n\n*Query1:*\n```\npostgres=# explain verbose update t1 set b = 10 from t2 where t1.a = t2.a\n returning t1.c;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------\n Update on public.t1 (cost=0.00..125187.88 rows=41616 width=14)\n *Output: t1_1.c -----> something not right??*\n Update on public.t11 t1_1\n Update on public.t12 t1_2\n -> Append (cost=0.00..125187.88 rows=41616 width=14)\n -> Nested Loop (cost=0.00..62489.90 rows=20808 width=14)\n Output: 10, t1_1.tableoid, t1_1.ctid\n Join Filter: (t1_1.a = t2_1.a)\n -> Seq Scan on public.t11 t1_1 (cost=0.00..30.40 rows=2040\nwidth=14)\n Output: t1_1.a, t1_1.tableoid, t1_1.ctid\n -> Materialize (cost=0.00..40.60 rows=2040 width=4)\n Output: t2_1.a\n -> Seq Scan on public.t21 t2_1 (cost=0.00..30.40\nrows=2040 width=4)\n Output: t2_1.a\n -> Nested Loop (cost=0.00..62489.90 rows=20808 width=14)\n Output: 10, t1_2.tableoid, t1_2.ctid\n Join Filter: (t1_2.a = t2_2.a)\n -> Seq Scan on public.t12 t1_2 (cost=0.00..30.40 rows=2040\nwidth=14)\n Output: t1_2.a, t1_2.tableoid, t1_2.ctid\n -> Materialize (cost=0.00..40.60 rows=2040 width=4)\n Output: t2_2.a\n -> Seq Scan on public.t22 t2_2 (cost=0.00..30.40\nrows=2040 width=4)\n Output: t2_2.a\n(23 rows)\n```\n\n*Query2:*\n\n*```*postgres=# explain verbose update t1 set b = 10 from t2 where t1.a =\nt2.a returning t2.c;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------\n Update on public.t1 (cost=0.00..125187.88 rows=41616 width=18)\n *Output: t2.c*\n Update on public.t11 t1_1\n Update on public.t12 t1_2\n -> Append (cost=0.00..125187.88 rows=41616 width=18)\n -> Nested Loop (cost=0.00..62489.90 rows=20808 width=18)\n Output: 10, t2_1.c, t1_1.tableoid, t1_1.ctid\n Join Filter: (t1_1.a = t2_1.a)\n -> Seq Scan on public.t11 t1_1 (cost=0.00..30.40 rows=2040\nwidth=14)\n Output: t1_1.a, t1_1.tableoid, t1_1.ctid\n -> Materialize (cost=0.00..40.60 rows=2040 
width=8)\n                     Output: t2_1.c, t2_1.a\n                     ->  Seq Scan on public.t21 t2_1  (cost=0.00..30.40\nrows=2040 width=8)\n                           Output: t2_1.c, t2_1.a\n         ->  Nested Loop  (cost=0.00..62489.90 rows=20808 width=18)\n               Output: 10, t2_2.c, t1_2.tableoid, t1_2.ctid\n               Join Filter: (t1_2.a = t2_2.a)\n               ->  Seq Scan on public.t12 t1_2  (cost=0.00..30.40 rows=2040\nwidth=14)\n                     Output: t1_2.a, t1_2.tableoid, t1_2.ctid\n               ->  Materialize  (cost=0.00..40.60 rows=2040 width=8)\n                     Output: t2_2.c, t2_2.a\n                     ->  Seq Scan on public.t22 t2_2  (cost=0.00..30.40\nrows=2040 width=8)\n                           Output: t2_2.c, t2_2.a\n(23 rows)\n```\n\nAfter further investigation into the code, I noticed following:\n\n1. In the 'grouping_planner()' function, while generating paths for the\nfinal relation (\nhttps://github.com/postgres/postgres/blob/master/src/backend/optimizer/plan/planner.c#L1857),\nwe only take care of adjusting the append_rel_attributes in returningList\nfor resultRelation. Shouldn't we do that for other relations as well in\nquery? Example for *Query2* above, *adjust_appendrel_attrs_multilevel* is a\nno-op.\n2. After plan creation (\nhttps://github.com/postgres/postgres/blob/master/src/backend/optimizer/plan/createplan.c#L351),\nshouldn't we perform tlist labeling for the `returningList` as well? I\nsuspect this is resulting in incorrect output in *Query1*.\n\nI suspect that similar issues might also be present for `withCheckOptions`,\n`mergeActionList`, and `mergeJoinCondition`.\n\nI would appreciate it if the community could provide insights or\nclarifications regarding this observation.\n\nThank you for your time and consideration.\n\n\nRegards\nSaikiran Avula,\nSDE, Amazon Web Services.\n\n
", "msg_date": "Mon, 6 May 2024 20:56:56 +0100", "msg_from": "SAIKIRAN AVULA <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect explain output for updates/delete operations with\n returning-list on partitioned tables" }, { "msg_contents": "SAIKIRAN AVULA <[email protected]> writes:\n> I have been working on partitioned tables recently, and I have noticed\n> something that doesn't seem correct with the EXPLAIN output of an\n> update/delete query with a returning list.\n\nWhat do you think is not right exactly?  The output has to use some\none of the correlation names for the partitioned table.  I think\nit generally chooses the one corresponding to the first Append arm,\nbut really any would be good enough for EXPLAIN's purposes.\n\n> 1. In the 'grouping_planner()' function, while generating paths for the\n> final relation (\n> https://github.com/postgres/postgres/blob/master/src/backend/optimizer/plan/planner.c#L1857),\n> we only take care of adjusting the append_rel_attributes in returningList\n> for resultRelation. Shouldn't we do that for other relations as well in\n> query?\n\nIf the only difference is which way variables get labeled in EXPLAIN,\nI'd be kind of disinclined to spend extra cycles.  But in any case,\nI rather suspect you'll find that this actively breaks things.\nWhether we change the varno on a Var isn't really optional, and there\nare cross-checks in setrefs.c to make sure things match up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 06 May 2024 17:18:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect explain output for updates/delete operations with\n returning-list on partitioned tables" }, { "msg_contents": "On Tue, 7 May 2024 at 09:18, Tom Lane <[email protected]> wrote:\n>\n> SAIKIRAN AVULA <[email protected]> writes:\n> > I have been working on partitioned tables recently, and I have noticed\n> > something that doesn't seem correct with the EXPLAIN output of an\n> > update/delete query with a returning list.\n>\n> What do you think is not right exactly? The output has to use some\n> one of the correlation names for the partitioned table. I think\n> it generally chooses the one corresponding to the first Append arm,\n> but really any would be good enough for EXPLAIN's purposes.\n\nAlso looks harmless to me. But just a slight correction, you're\ntalking about the deparse Append condition that's in\nset_deparse_plan(). 
Whereas the code that controls this for the\nreturningList is the following in nodeModifyTable.c:\n\n/*\n* Initialize result tuple slot and assign its rowtype using the first\n* RETURNING list. We assume the rest will look the same.\n*/\nmtstate->ps.plan->targetlist = (List *) linitial(node->returningLists);\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 10:58:15 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect explain output for updates/delete operations with\n returning-list on partitioned tables" } ]
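Following up on the thread above, a simplified sketch (no join, unlike the thread's example) showing that the alias chosen for the RETURNING column in EXPLAIN is cosmetic: at run time each updated row still returns its own value of c.

```sql
-- Same partitioned layout as in the thread; the UPDATE is simplified.
CREATE TABLE t1 (a int, b int, c int) PARTITION BY RANGE (a);
CREATE TABLE t11 PARTITION OF t1 FOR VALUES FROM (0) TO (1000);
CREATE TABLE t12 PARTITION OF t1 FOR VALUES FROM (1000) TO (10000);
INSERT INTO t1 VALUES (1, 0, 100), (2000, 0, 200);

UPDATE t1 SET b = 10 RETURNING c;
-- Expected: c = 100 (row from t11) and c = 200 (row from t12).
```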
[ { "msg_contents": "Hi all,\n\nRecently I dealt with a server where PAM had hung a connection\nindefinitely, suppressing our authentication timeout and preventing a\nclean shutdown. Worse, the xmin that was pinned by the opening\ntransaction cascaded to replicas and started messing things up\ndownstream.\n\nThe DBAs didn't know what was going on, because pg_stat_activity\ndoesn't report the authenticating connection or its open transaction.\nIt just looked like a Postgres bug. And while talking about it with\nEuler, he mentioned he'd seen similar \"invisible\" hangs with\nmisbehaving LDAP deployments. I think we can do better to show DBAs\nwhat's happening.\n\n0001, attached, changes InitPostgres() to report a nearly-complete\npgstat entry before entering client authentication, then fills it in\nthe rest of the way once we know who the user is. Here's a sample\nentry for a client that's hung during a SCRAM exchange:\n\n =# select * from pg_stat_activity where state = 'authenticating';\n -[ RECORD 1 ]----+------------------------------\n datid |\n datname |\n pid | 745662\n leader_pid |\n usesysid |\n usename |\n application_name |\n client_addr | 127.0.0.1\n client_hostname |\n client_port | 38304\n backend_start | 2024-05-06 11:25:23.905923-07\n xact_start |\n query_start |\n state_change |\n wait_event_type | Client\n wait_event | ClientRead\n state | authenticating\n backend_xid |\n backend_xmin | 784\n query_id |\n query |\n backend_type | client backend\n\n0002 goes even further, and adds wait events for various forms of\nexternal authentication, but it's not fully baked. The intent is for a\nDBA to be able to see when a bunch of connections are piling up\nwaiting for PAM/Kerberos/whatever. (I'm also motivated by my OAuth\npatchset, where there's a server-side plugin that we have no control\nover, and we'd want to be able to correctly point fingers at it if\nthings go wrong.)\n\n= Open Issues, Idle Thoughts =\n\nMaybe it's wishful thinking, but it'd be cool if a misbehaving\nauthentication exchange did not impact replicas in any way. Is there a\nway to make that opening transaction lighterweight?\n\n0001 may be a little too much code. There are only two parts of\npgstat_bestart() that need to be modified: omit the user ID, and fill\nin the state as 'authenticating' rather than unknown. I could just add\nthe `pre_auth` boolean to the signature of pgstat_bestart() directly,\nif we don't mind adjusting all the call sites. We could also avoid\nchanging the signature entirely, and just assume that we're\nauthenticating if SessionUserId isn't set. That felt like a little too\nmuch global magic to me, though.\n\nWould anyone like me to be more aggressive, and create a pgstat entry\nas soon as we have the opening transaction? Or... as soon as a\nconnection is made?\n\n0002 is abusing the \"IPC\" wait event class. If the general idea seems\nokay, maybe we could add an \"External\" class that encompasses the\ngeneral idea of \"it's not our fault, it's someone else's\"?\n\nI had trouble deciding how granular to make the areas that are covered\nby the new wait events. Ideally they would kick in only when we call\nout to an external system, but for some authentication types, that's a\nlot of calls to wrap. On the other extreme, we don't want to go too\nhigh in the call stack and accidentally nest wait events (such as\nthose generated during pq_getmessage()). What I have now is not very\nprincipled.\n\nI haven't decided how to test these patches. 
Seems like a potential\nuse case for injection points, but I think I'd need to preload an\ninjection library rather than using the existing extension. Does that\nseem like an okay way to go?\n\nThanks,\n--Jacob", "msg_date": "Mon, 6 May 2024 14:23:38 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Mon, May 06, 2024 at 02:23:38PM -0700, Jacob Champion wrote:\n> =# select * from pg_stat_activity where state = 'authenticating';\n> -[ RECORD 1 ]----+------------------------------\n> datid |\n> datname |\n> pid | 745662\n> leader_pid |\n> usesysid |\n> usename |\n> application_name |\n> client_addr | 127.0.0.1\n> client_hostname |\n> client_port | 38304\n> backend_start | 2024-05-06 11:25:23.905923-07\n> xact_start |\n> query_start |\n> state_change |\n> wait_event_type | Client\n> wait_event | ClientRead\n> state | authenticating\n> backend_xid |\n> backend_xmin | 784\n> query_id |\n> query |\n> backend_type | client backend\n\nThat looks like a reasonable user experience. Is any field newly-nullable?\n\n> = Open Issues, Idle Thoughts =\n> \n> Maybe it's wishful thinking, but it'd be cool if a misbehaving\n> authentication exchange did not impact replicas in any way. Is there a\n> way to make that opening transaction lighterweight?\n\nYou could release the xmin before calling PAM or LDAP. If you've copied all\nrelevant catalog content to local memory, that's fine to do. That said, it\nmay be more fruitful to arrange for authentication timeout to cut through PAM\netc. Hanging connection slots hurt even if they lack an xmin. I assume it\ntakes an immediate shutdown to fix them?\n\n> Would anyone like me to be more aggressive, and create a pgstat entry\n> as soon as we have the opening transaction? Or... as soon as a\n> connection is made?\n\nAll else being equal, I'd like backends to have one before taking any lmgr\nlock or snapshot.\n\n> I haven't decided how to test these patches. Seems like a potential\n> use case for injection points, but I think I'd need to preload an\n> injection library rather than using the existing extension. Does that\n> seem like an okay way to go?\n\nYes.\n\nThanks,\nnm\n\n\n", "msg_date": "Sun, 30 Jun 2024 10:48:12 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Sun, Jun 30, 2024 at 10:48 AM Noah Misch <[email protected]> wrote:\n> That looks like a reasonable user experience. Is any field newly-nullable?\n\nTechnically I think the answer is no, since backends such as walwriter\nalready have null database and user fields. It's new for a client\nbackend to have nulls there, though.\n\n> That said, it\n> may be more fruitful to arrange for authentication timeout to cut through PAM\n> etc.\n\nThat seems mostly out of our hands -- the misbehaving modules are free\nto ignore our signals (and do). Is there another way to force the\nissue?\n\n> Hanging connection slots hurt even if they lack an xmin.\n\nOh, would releasing the xmin not really move the needle, then?\n\n> I assume it\n> takes an immediate shutdown to fix them?\n\nThat's my understanding, yeah.\n\n> > Would anyone like me to be more aggressive, and create a pgstat entry\n> > as soon as we have the opening transaction? Or... 
as soon as a\n> > connection is made?\n>\n> All else being equal, I'd like backends to have one before taking any lmgr\n> lock or snapshot.\n\nI can look at this for the next patchset version.\n\n> > I haven't decided how to test these patches. Seems like a potential\n> > use case for injection points, but I think I'd need to preload an\n> > injection library rather than using the existing extension. Does that\n> > seem like an okay way to go?\n>\n> Yes.\n\nI misunderstood how injection points worked. No preload module needed,\nso v2 adds a waitpoint and a test along with a couple of needed tweaks\nto BackgroundPsql. I think 0001 should probably be applied\nindependently.\n\nThanks,\n--Jacob", "msg_date": "Mon, 8 Jul 2024 14:09:21 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Mon, Jul 08, 2024 at 02:09:21PM -0700, Jacob Champion wrote:\n> On Sun, Jun 30, 2024 at 10:48 AM Noah Misch <[email protected]> wrote:\n> > That said, it\n> > may be more fruitful to arrange for authentication timeout to cut through PAM\n> > etc.\n> \n> That seems mostly out of our hands -- the misbehaving modules are free\n> to ignore our signals (and do). Is there another way to force the\n> issue?\n\nTwo ways at least (neither of them cheap):\n- Invoke PAM in a subprocess, and SIGKILL that process if needed.\n- Modify the module to be interruptible.\n\n> > Hanging connection slots hurt even if they lack an xmin.\n> \n> Oh, would releasing the xmin not really move the needle, then?\n\nIt still moves the needle.\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:04:01 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Sun, Jun 30, 2024 at 10:48 AM Noah Misch <[email protected]> wrote:v\n> > Would anyone like me to be more aggressive, and create a pgstat entry\n> > as soon as we have the opening transaction? Or... as soon as a\n> > connection is made?\n>\n> All else being equal, I'd like backends to have one before taking any lmgr\n> lock or snapshot.\n\nv3-0003 pushes the pgstat creation as far back as I felt comfortable,\nright after the PGPROC registration by InitProcessPhase2(). That\nfunction does lock the ProcArray, but if it gets held forever due to\nsome bug, you won't be able to use pg_stat_activity to debug it\nanyway. And with this ordering, pg_stat_get_activity() will be able to\nretrieve the proc entry by PID without a race.\n\nThis approach ends up registering an early entry for more cases than\nthe original patchset. For example, autovacuum and other background\nworkers will now briefly get their own \"authenticating\" state, which\nseems like it could potentially confuse people. Should I rename the\nstate, or am I overthinking it?\n\n> You could release the xmin before calling PAM or LDAP. If you've copied all\n> relevant catalog content to local memory, that's fine to do.\n\nI played with the xmin problem a little bit, but I've shelved it for\nnow. There's probably a way to do that safely; I just don't understand\nenough about the invariants to do it. For example, there's a comment\nlater on that says\n\n * We established a catalog snapshot while reading pg_authid and/or\n * pg_database;\n\nand I'm a little nervous about invalidating the snapshot halfway\nthrough that process. 
Even if PAM and LDAP don't rely on pg_authid or\nother shared catalogs today, shouldn't they be allowed to in the\nfuture, without being coupled to InitPostgres implementation order?\nAnd I don't think we can move the pg_database checks before\nauthentication.\n\nAs for the other patches, I'll ping Andrew about 0001, and 0004\nremains in its original WIP state. Anyone excited about that wait\nevent idea?\n\nThanks!\n--Jacob", "msg_date": "Thu, 29 Aug 2024 13:44:01 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "\nOn 2024-08-29 Th 4:44 PM, Jacob Champion wrote:\n> As for the other patches, I'll ping Andrew about 0001,\n\n\nPatch 0001 looks sane to me.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 30 Aug 2024 16:10:32 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Fri, Aug 30, 2024 at 04:10:32PM -0400, Andrew Dunstan wrote:\n> \n> On 2024-08-29 Th 4:44 PM, Jacob Champion wrote:\n> > As for the other patches, I'll ping Andrew about 0001,\n> \n> \n> Patch 0001 looks sane to me.\n\nSo does 0002 to me.  I'm not much a fan of the addition of\npgstat_bestart_pre_auth(), which is just a shortcut to set a different\nstate in the backend entry to tell that it is authenticating.  Is\nauthenticating the term for this state of the process startups,\nactually?  Could it be more transparent to use a \"startup\" or\n\"starting\"\" state instead that gets also used by pgstat_bestart() in\nthe case of the patch where !pre_auth?\n\nThe addition of the new wait event states in 0004 is a good idea,\nindeed, and these can be seen in pg_stat_activity once we get out of\nPGSTAT_END_WRITE_ACTIVITY() (err.. Right?).\n--\nMichael", "msg_date": "Mon, 2 Sep 2024 09:10:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Sun, Sep 1, 2024 at 5:10 PM Michael Paquier <[email protected]> wrote:\n> On Fri, Aug 30, 2024 at 04:10:32PM -0400, Andrew Dunstan wrote:\n> > Patch 0001 looks sane to me.\n> So does 0002 to me.\n\nThanks both!\n\n> I'm not much a fan of the addition of\n> pgstat_bestart_pre_auth(), which is just a shortcut to set a different\n> state in the backend entry to tell that it is authenticating. Is\n> authenticating the term for this state of the process startups,\n> actually? Could it be more transparent to use a \"startup\" or\n> \"starting\"\" state instead\n\nYeah, I think I should rename that. Especially if we adopt new wait\nstates to make it obvious where we're stuck.\n\n\"startup\", \"starting\", \"initializing\", \"connecting\"...?\n\n> that gets also used by pgstat_bestart() in\n> the case of the patch where !pre_auth?\n\nTo clarify, do you want me to just add the new boolean directly to\npgstat_bestart()'s parameter list?\n\n> The addition of the new wait event states in 0004 is a good idea,\n> indeed,\n\nThanks! 
Any thoughts on the two open questions for it?:\n1) Should we add a new wait event class rather than reusing IPC?\n2) Is the level at which I've inserted calls to\npgstat_report_wait_start()/_end() sane and maintainable?\n\n> and these can be seen in pg_stat_activity once we get out of\n> PGSTAT_END_WRITE_ACTIVITY() (err.. Right?).\n\nIt doesn't look like pgstat_report_wait_start() uses that machinery.\n\n--Jacob\n\n\n", "msg_date": "Tue, 3 Sep 2024 14:47:57 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Tue, Sep 03, 2024 at 02:47:57PM -0700, Jacob Champion wrote:\n> On Sun, Sep 1, 2024 at 5:10 PM Michael Paquier <[email protected]> wrote:\n>> that gets also used by pgstat_bestart() in\n>> the case of the patch where !pre_auth?\n> \n> To clarify, do you want me to just add the new boolean directly to\n> pgstat_bestart()'s parameter list?\n\nNo. My question was about splitting pgstat_bestart() and\npgstat_bestart_pre_auth() in a cleaner way, because authenticated\nconnections finish by calling both, meaning that we do twice the same\nsetup for backend entries depending on the authentication path taken.\nThat seems like a waste.\n\n>> The addition of the new wait event states in 0004 is a good idea,\n>> indeed,\n> \n> Thanks! Any thoughts on the two open questions for it?:\n> 1) Should we add a new wait event class rather than reusing IPC?\n\nA new category would be more adapted. IPC is not adapted because are\nnot waiting for another server process. Perhaps just use a new\n\"Authentication\" class, as in \"The server is waiting for an\nauthentication operation to complete\"?\n\n> 2) Is the level at which I've inserted calls to\n> pgstat_report_wait_start()/_end() sane and maintainable?\n\nThese don't worry me. You are adding twelve event points with only 5\nnew wait names. Couldn't it be better to have a one-one mapping\ninstead, adding twelve entries in wait_event_names.txt?\n\nI am not really on board with the test based on injection points\nproposed, though. It checks that the \"authenticating\" flag is set in\npg_stat_activity, but it does nothing else. That seems limited. Or\nare you planning for more?\n--\nMichael", "msg_date": "Tue, 10 Sep 2024 14:29:57 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Tue, Sep 10, 2024 at 02:29:57PM +0900, Michael Paquier wrote:\n> You are adding twelve event points with only 5\n> new wait names. Couldn't it be better to have a one-one mapping\n> instead, adding twelve entries in wait_event_names.txt?\n\nNo, I think the patch's level of detail is better. One shouldn't expect the\ntwo ldap_simple_bind_s() calls to have different-enough performance\ncharacteristics to justify exposing that level of detail to the DBA.\nldap_search_s() and InitializeLDAPConnection() differ more, but the DBA mostly\njust needs to know the scale of their LDAP responsiveness problem.\n\n(Someday, it might be good to expose the file:line and/or backtrace associated\nwith a wait, like we do for ereport(). 
As a way to satisfy rare needs for\nmore detail, I'd prefer that over giving every pgstat_report_wait_start() a\ndifferent name.)\n\n\n", "msg_date": "Tue, 10 Sep 2024 10:27:12 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Tue, Sep 10, 2024 at 1:27 PM Noah Misch <[email protected]> wrote:\n> On Tue, Sep 10, 2024 at 02:29:57PM +0900, Michael Paquier wrote:\n> > You are adding twelve event points with only 5\n> > new wait names. Couldn't it be better to have a one-one mapping\n> > instead, adding twelve entries in wait_event_names.txt?\n>\n> No, I think the patch's level of detail is better. One shouldn't expect the\n> two ldap_simple_bind_s() calls to have different-enough performance\n> characteristics to justify exposing that level of detail to the DBA.\n> ldap_search_s() and InitializeLDAPConnection() differ more, but the DBA mostly\n> just needs to know the scale of their LDAP responsiveness problem.\n>\n> (Someday, it might be good to expose the file:line and/or backtrace associated\n> with a wait, like we do for ereport(). As a way to satisfy rare needs for\n> more detail, I'd prefer that over giving every pgstat_report_wait_start() a\n> different name.)\n\nI think unique names are a good idea. If a user doesn't care about the\ndifference between sdgjsA and sdjgsB, they can easily ignore the\ntrailing suffix, and IME, people typically do that without really\nstopping to think about it. If on the other hand the two are lumped\ntogether as sdjgs and a user needs to distinguish them, they can't. So\nI see unique names as having much more upside than downside.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 14:51:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Tue, Sep 10, 2024 at 02:51:23PM -0400, Robert Haas wrote:\n> On Tue, Sep 10, 2024 at 1:27 PM Noah Misch <[email protected]> wrote:\n> > On Tue, Sep 10, 2024 at 02:29:57PM +0900, Michael Paquier wrote:\n> > > You are adding twelve event points with only 5\n> > > new wait names. Couldn't it be better to have a one-one mapping\n> > > instead, adding twelve entries in wait_event_names.txt?\n> >\n> > No, I think the patch's level of detail is better. One shouldn't expect the\n> > two ldap_simple_bind_s() calls to have different-enough performance\n> > characteristics to justify exposing that level of detail to the DBA.\n> > ldap_search_s() and InitializeLDAPConnection() differ more, but the DBA mostly\n> > just needs to know the scale of their LDAP responsiveness problem.\n> >\n> > (Someday, it might be good to expose the file:line and/or backtrace associated\n> > with a wait, like we do for ereport(). As a way to satisfy rare needs for\n> > more detail, I'd prefer that over giving every pgstat_report_wait_start() a\n> > different name.)\n> \n> I think unique names are a good idea. If a user doesn't care about the\n> difference between sdgjsA and sdjgsB, they can easily ignore the\n> trailing suffix, and IME, people typically do that without really\n> stopping to think about it. If on the other hand the two are lumped\n> together as sdjgs and a user needs to distinguish them, they can't. 
So\n> I see unique names as having much more upside than downside.\n\nI agree a person can ignore the distinction, but that requires the person to\nbe consuming the raw event list. It's reasonable to tell your monitoring tool\nto give you the top N wait events. Individual AuthnLdap* events may all miss\nthe cut even though their aggregate would have made the cut. Before you know\nto teach that monitoring tool to group AuthnLdap* together, it won't show you\nany of those names.\n\nI felt commit c789f0f also chose sub-optimally in this respect, particularly\nwith the DblinkGetConnect/DblinkConnect pair. I didn't feel strongly enough\nto complain at the time, but a rule of \"each wait event appears in one\npgstat_report_wait_start()\" would be a rule I don't want. One needs\nfamiliarity with the dblink implementation internals to grasp the\nDblinkGetConnect/DblinkConnect distinction, and a plausible refactor of dblink\nwould make those names cease to fit. I see this level of fine-grained naming\nas making the event name a sort of stable proxy for FILE:LINE. I'd value\nexposing such a proxy, all else being equal, but I don't think wait event\nnames like AuthLdapBindLdapbinddn/AuthLdapBindUser are the right way. Wait\nevent names should be more independent of today's code-level details.\n\n\n", "msg_date": "Tue, 10 Sep 2024 13:58:50 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Tue, Sep 10, 2024 at 01:58:50PM -0700, Noah Misch wrote:\n> On Tue, Sep 10, 2024 at 02:51:23PM -0400, Robert Haas wrote:\n>> I think unique names are a good idea. If a user doesn't care about the\n>> difference between sdgjsA and sdjgsB, they can easily ignore the\n>> trailing suffix, and IME, people typically do that without really\n>> stopping to think about it. If on the other hand the two are lumped\n>> together as sdjgs and a user needs to distinguish them, they can't. So\n>> I see unique names as having much more upside than downside.\n> \n> I agree a person can ignore the distinction, but that requires the person to\n> be consuming the raw event list. It's reasonable to tell your monitoring tool\n> to give you the top N wait events. Individual AuthnLdap* events may all miss\n> the cut even though their aggregate would have made the cut. Before you know\n> to teach that monitoring tool to group AuthnLdap* together, it won't show you\n> any of those names.\n\nThat's a fair point. I use a bunch of aggregates with group bys for\nany monitoring queries looking for event point patterns. In my\nexperience, when dealing with enough connections, patterns show up\nanyway even if there is noise because some of the events that I was\nlooking for are rather short-term, like a sync events interleaving\nwith locks storing an average of the events into a secondary table\nwith some INSERT SELECT.\n\n> I felt commit c789f0f also chose sub-optimally in this respect, particularly\n> with the DblinkGetConnect/DblinkConnect pair. I didn't feel strongly enough\n> to complain at the time, but a rule of \"each wait event appears in one\n> pgstat_report_wait_start()\" would be a rule I don't want. One needs\n> familiarity with the dblink implementation internals to grasp the\n> DblinkGetConnect/DblinkConnect distinction, and a plausible refactor of dblink\n> would make those names cease to fit. 
I see this level of fine-grained naming\n> as making the event name a sort of stable proxy for FILE:LINE. I'd value\n> exposing such a proxy, all else being equal, but I don't think wait event\n> names like AuthLdapBindLdapbinddn/AuthLdapBindUser are the right way. Wait\n> event names should be more independent of today's code-level details.\n\nDepends. I'd rather choose more granularity to know exactly which\npart of the code I am dealing with, especially in the case of this\nthread where these are embedded around external function calls. If,\nfor example, one notices that a stack of pg_stat_activity scans are\ncomplaining about a specific step in the authentication process, it is\ngoing to offer a much better hint than having to guess which part of\nthe authentication step is slow, like in LDAP.\n\nWait event additions are also kind of cheap in terms of maintenance in\ncore, creating a new translation cost. So I also think there are more\nupsides to be wilder here with more points and more granularity.\n--\nMichael", "msg_date": "Wed, 11 Sep 2024 07:33:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Tue, Sep 10, 2024 at 4:58 PM Noah Misch <[email protected]> wrote:\n> ... a rule of \"each wait event appears in one\n> pgstat_report_wait_start()\" would be a rule I don't want.\n\nAs the original committer of the wait event stuff, I intended for the\nrule that you do not want to be the actual rule. However, I see that I\ndidn't spell that out anywhere in the commit message, or the commit\nitself.\n\n> I see this level of fine-grained naming\n> as making the event name a sort of stable proxy for FILE:LINE. I'd value\n> exposing such a proxy, all else being equal, but I don't think wait event\n> names like AuthLdapBindLdapbinddn/AuthLdapBindUser are the right way. Wait\n> event names should be more independent of today's code-level details.\n\nI don't agree with that. One of the most difficult parts of supporting\nPostgreSQL, in my experience, is that it's often very difficult to\nfind out what has gone wrong when a system starts behaving badly. It\nis often necessary to ask customers to install a debugger and do stuff\nwith it, or give them an instrumented build, in order to determine the\nroot cause of a problem that in some cases is not even particularly\ncomplicated. While needing to refer to specific source code details\nmay not be a common experience for the typical end user, it is\nextremely common for me. This problem commonly arises with error\nmessages, because we have lots of error messages that are exactly the\nsame, although thankfully it has become less common due to \"could not\nfind tuple for THINGY %u\" no longer being a message that no longer\ntypically reaches users. But even when someone has a complaint about\nan error message and there are multiple instances of that error\nmessage, I know that:\n\n(1) I can ask them to set the error verbosity to verbose. I don't have\nthat option for wait events.\n\n(2) The primary function of the error message is to be understandable\nto the user, which means that it needs to be written in plain English.\nThe primary function of a wait event is to make it possible to\nunderstand the behavior of the system and troubleshoot problems, and\nit becomes much less effective as soon as it starts saying that thing\nA and thing B are so similar that nobody will ever care about the\ndistinction. 
It is very hard to be certain of that. When somebody\nreports that they've got a whole bunch of wait events on some wait\nevent that nobody has ever complained about before, I want to go look\nat the code in that specific place and try to figure out what's\nhappening. If I have to start imagining possible scenarios based on 2\nor more call sites, or if I have to start by getting them to install a\nmodified build with those properly split apart and trying to reproduce\nthe problem, it's a lot harder.\n\nIn my experience, the number of distinct wait events that a particular\ninstallation experiences is rarely very large. It is probably measured\nin dozens. A user who wishes to disregard the distinction between\nsimilarly-named wait events won't find it prohibitively difficult to\nlook over the list of all the wait events they ever see and decide\nwhich ones they'd like to merge for reporting purposes. But a user who\nreally needs things separated out and finds that they aren't is simply\nout of luck.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Sep 2024 09:00:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Mon, Sep 9, 2024 at 10:30 PM Michael Paquier <[email protected]> wrote:\n> No. My question was about splitting pgstat_bestart() and\n> pgstat_bestart_pre_auth() in a cleaner way, because authenticated\n> connections finish by calling both, meaning that we do twice the same\n> setup for backend entries depending on the authentication path taken.\n> That seems like a waste.\n\nI can try to separate them out. I'm a little wary of messing with the\nCRITICAL_SECTION guarantees, though. I thought the idea was that you\nfilled in the entire struct to prevent tearing. (If I've misunderstood\nthat, please let me know :D)\n\n> Perhaps just use a new\n> \"Authentication\" class, as in \"The server is waiting for an\n> authentication operation to complete\"?\n\nSounds good.\n\n> Couldn't it be better to have a one-one mapping\n> instead, adding twelve entries in wait_event_names.txt?\n\n(I have no strong opinions on this myself, but while the debate is\nongoing, I'll work on a version of the patch with more detailed wait\nevents. It's easy to collapse them again if that gets the most votes.)\n\n> I am not really on board with the test based on injection points\n> proposed, though. It checks that the \"authenticating\" flag is set in\n> pg_stat_activity, but it does nothing else. That seems limited. Or\n> are you planning for more?\n\nI can test for specific contents of the entry, if you'd like. My\nprimary goal was to test that an entry shows up if that part of the\ncode hangs. I think a regression would otherwise go completely\nunnoticed.\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Wed, 11 Sep 2024 14:29:49 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Wed, Sep 11, 2024 at 02:29:49PM -0700, Jacob Champion wrote:\n> On Mon, Sep 9, 2024 at 10:30 PM Michael Paquier <[email protected]> wrote:\n>> No. 
My question was about splitting pgstat_bestart() and\n>> pgstat_bestart_pre_auth() in a cleaner way, because authenticated\n>> connections finish by calling both, meaning that we do twice the same\n>> setup for backend entries depending on the authentication path taken.\n>> That seems like a waste.\n> \n> I can try to separate them out. I'm a little wary of messing with the\n> CRITICAL_SECTION guarantees, though. I thought the idea was that you\n> filled in the entire struct to prevent tearing. (If I've misunderstood\n> that, please let me know :D)\n\nHm, yeah. We surely should be careful about the consequences of that.\nSetting up twice the structure as the patch proposes is kind of\na weird concept, but it feels to me that we should split that and set\nthe fields in the pre-auth step and ignore the irrelevant ones, then\ncomplete the rest in a second step. We are going to do that anyway if\nwe want to be able to have backend entries earlier in the\nauthentication phase.\n\n>> Couldn't it be better to have a one-one mapping\n>> instead, adding twelve entries in wait_event_names.txt?\n> \n> (I have no strong opinions on this myself, but while the debate is\n> ongoing, I'll work on a version of the patch with more detailed wait\n> events. It's easy to collapse them again if that gets the most votes.)\n\nThanks. Robert is arguing upthread about more granularity, which is\nalso what I understand is the original intention of the wait events.\nNoah has a different view. Let's see where it goes but I've given my\nopinion.\n\n> I can test for specific contents of the entry, if you'd like. My\n> primary goal was to test that an entry shows up if that part of the\n> code hangs. I think a regression would otherwise go completely\n> unnoticed.\n\nPerhaps that would be useful, not sure. Based on my first\nimpressions, I'd tend to say no to these extra test cycles, but I'm\nokay to be proved wrong, as well.\n--\nMichael", "msg_date": "Thu, 12 Sep 2024 08:42:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" }, { "msg_contents": "On Wed, Sep 11, 2024 at 09:00:33AM -0400, Robert Haas wrote:\n> On Tue, Sep 10, 2024 at 4:58 PM Noah Misch <[email protected]> wrote:\n> > ... a rule of \"each wait event appears in one\n> > pgstat_report_wait_start()\" would be a rule I don't want.\n> \n> As the original committer of the wait event stuff, I intended for the\n> rule that you do not want to be the actual rule. However, I see that I\n> didn't spell that out anywhere in the commit message, or the commit\n> itself.\n> \n> > I see this level of fine-grained naming\n> > as making the event name a sort of stable proxy for FILE:LINE. I'd value\n> > exposing such a proxy, all else being equal, but I don't think wait event\n> > names like AuthLdapBindLdapbinddn/AuthLdapBindUser are the right way. Wait\n> > event names should be more independent of today's code-level details.\n> \n> I don't agree with that. One of the most difficult parts of supporting\n> PostgreSQL, in my experience, is that it's often very difficult to\n> find out what has gone wrong when a system starts behaving badly. It\n> is often necessary to ask customers to install a debugger and do stuff\n> with it, or give them an instrumented build, in order to determine the\n> root cause of a problem that in some cases is not even particularly\n> complicated. 
While needing to refer to specific source code details\n> may not be a common experience for the typical end user, it is\n> extremely common for me. This problem commonly arises with error\n> messages\n\nThat is a problem. Half the time, error verbosity doesn't disambiguate enough\nfor me, and I need backtrace_functions. I now find it hard to believe how\nlong we coped without backtrace_functions.\n\nI withdraw the objection to \"each wait event appears in one\npgstat_report_wait_start()\".\n\n\n", "msg_date": "Fri, 13 Sep 2024 07:56:21 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_activity: make slow/hanging authentication more\n visible" } ]
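The instrumentation being discussed above is small at the code level: each potentially blocking authentication step is bracketed by a wait-event report, so a backend stuck in (for example) an LDAP bind shows up in pg_stat_activity instead of looking like an opaque hang. The sketch below only illustrates that pattern; WAIT_EVENT_AUTH_LDAP_BIND and bind_with_wait_event() are invented names, not code from the patches under review, and the placeholder event class is borrowed solely to keep the fragment compilable.

#include "postgres.h"

#include <ldap.h>

#include "utils/wait_event.h"

/*
 * Placeholder for a hypothetical authentication wait event; a real patch
 * would define its own event (and possibly a dedicated wait-event class)
 * rather than reusing the extension class like this.
 */
#define WAIT_EVENT_AUTH_LDAP_BIND PG_WAIT_EXTENSION

/*
 * Illustrative only: report a wait event around a blocking LDAP bind so
 * that monitoring can see where authentication is stuck.
 */
static int
bind_with_wait_event(LDAP *ldap, const char *who, const char *passwd)
{
	int			rc;

	pgstat_report_wait_start(WAIT_EVENT_AUTH_LDAP_BIND);
	rc = ldap_simple_bind_s(ldap, who, passwd); /* may block on the LDAP server */
	pgstat_report_wait_end();

	return rc;
}

With something along these lines in place, a hung LDAP server becomes a visible wait event in a query such as SELECT pid, state, wait_event_type, wait_event FROM pg_stat_activity; the exact state name ("authenticating", "starting", ...) and the event names and granularity are precisely the open questions in the thread above.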
[ { "msg_contents": "Hi PostgreSQL Community,\n\nI would like to bring to your attention an observation regarding the\nplanner's behavior for foreign table update/delete operations. It appears\nthat the planner adds rowmarks (ROW_MARK_COPY) for non-target tables, which\nI believe is unnecessary when using the postgres-fdw. This is because\npostgres-fdw performs early locking on tuples belonging to the target\nforeign table by utilizing the SELECT FOR UPDATE clause.\n\nIn an attempt to address this, I tried implementing late locking. However,\nthis approach still doesn't work as intended because the API assumes that\nforeign table rows can be re-fetched using TID (ctid). This assumption is\ninvalid for partitioned tables on the foreign server. Additionally, the\ncommit afb9249d06f47d7a6d4a89fea0c3625fe43c5a5d, which introduced late\nlocking for foreign tables, mentions that the benefits of late locking\nagainst a remote server are unclear, as the extra round trips required are\nlikely to outweigh any potential concurrency improvements.\n\nTo address this issue, I have taken the initiative to create a patch that\nprevents the addition of rowmarks for non-target tables when the target\ntable is using early locking. I would greatly appreciate it if you could\nreview the patch and provide any feedback or insights I may be overlooking.\n\nExample query plan with my change: (foreign scan doesn't fetch whole row\nfor bar).\n\npostgres=# \\d+ bar\n Foreign table \"public.bar\"\n Column | Type | Collation | Nullable | Default | FDW options |\nStorage | Stats target | Description\n--------+---------+-----------+----------+---------+--------------------+---------+--------------+-------------\n b1 | integer | | | | (column_name 'b1') |\nplain | |\n b2 | integer | | | | (column_name 'b2') |\nplain | |\nServer: postgres_1\nFDW options: (schema_name 'public', table_name 'bar')\n\nrouter=# \\d+ foo\n Foreign table \"public.foo\"\n Column | Type | Collation | Nullable | Default | FDW options |\nStorage | Stats target | Description\n--------+---------+-----------+----------+---------+--------------------+---------+--------------+-------------\n f1 | integer | | | | (column_name 'f1') |\nplain | |\n f2 | integer | | | | (column_name 'f2') |\nplain | |\nServer: postgres_2\nFDW options: (schema_name 'public', table_name 'foo')\n\npostgres=# explain verbose update foo set f1 = b1 from bar where f2=b2;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------\n Update on public.foo (cost=200.00..48713.72 rows=0 width=0)\n Remote SQL: UPDATE public.foo SET f1 = $2 WHERE ctid = $1\n -> Nested Loop (cost=200.00..48713.72 rows=15885 width=42)\n Output: bar.b1, foo.ctid, foo.*\n Join Filter: (foo.f2 = bar.b2)\n -> Foreign Scan on public.bar (cost=100.00..673.20 rows=2560\nwidth=8)\n Output: bar.b1, bar.b2\n Remote SQL: SELECT b1, b2 FROM public.bar\n -> Materialize (cost=100.00..389.23 rows=1241 width=42)\n Output: foo.ctid, foo.*, foo.f2\n -> Foreign Scan on public.foo (cost=100.00..383.02\nrows=1241 width=42)\n Output: foo.ctid, foo.*, foo.f2\n Remote SQL: SELECT f1, f2, ctid FROM public.foo FOR\nUPDATE\n(13 rows)\n\n\nThank you for your time and consideration.\n\n\nRegards\nSaikiran Avula\nSDE, Amazon Web Services.", "msg_date": "Mon, 6 May 2024 23:10:33 +0100", "msg_from": "SAIKIRAN AVULA <[email protected]>", "msg_from_op": true, "msg_subject": "Skip adding row-marks for non target tables when result relation is\n foreign table." 
}, { "msg_contents": "On Mon, 2024-05-06 at 23:10 +0100, SAIKIRAN AVULA wrote:\n> I would like to bring to your attention an observation regarding the\n> planner's behavior for foreign table update/delete operations. It\n> appears that the planner adds rowmarks (ROW_MARK_COPY) for non-target\n> tables, which I believe is unnecessary when using the postgres-fdw.\n> This is because postgres-fdw performs early locking on tuples\n> belonging to the target foreign table by utilizing the SELECT FOR\n> UPDATE clause.\n\nI agree with your reasoning here. If it reads the row with SELECT FOR\nUPDATE, what's the purpose of row marks?\n\nThe cost of ROW_MARK_COPY is that it brings the whole tuple along\nrather than a reference. I assume you are concerned about wide tables\ninvolved in the join or is there another concern?\n\n> In an attempt to address this, I tried implementing late locking.\n\nFor others in the thread, see:\n\nhttps://www.postgresql.org/docs/current/fdw-row-locking.html\n\n> However, this approach still doesn't work as intended because the API\n> assumes that foreign table rows can be re-fetched using TID (ctid).\n> This assumption is invalid for partitioned tables on the foreign\n> server.\n\nIt looks like it's a \"Datum rowid\", but is currently only allowed to be\na ctid, which can't identify the partition. I wonder how much work it\nwould be to fix this?\n\n> Additionally, the commit afb9249d06f47d7a6d4a89fea0c3625fe43c5a5d,\n> which introduced late locking for foreign tables, mentions that the\n> benefits of late locking against a remote server are unclear, as the\n> extra round trips required are likely to outweigh any potential\n> concurrency improvements.\n\nThe extra round trip only happens when EPQ finds a newer version of the\ntuple, which should be the exceptional case. I'm not sure how this\nbalances out, but to me late locking still seems preferable. Early\nlocking is a huge performance hit in some cases (locking many more rows\nthan necessary).\n\nEarly locking is also a violation of the documentation here:\n\n\"When a locking clause appears at the top level of a SELECT query, the\nrows that are locked are exactly those that are returned by the query;\nin the case of a join query, the rows locked are those that contribute\nto returned join rows.\"\n\nhttps://www.postgresql.org/docs/current/sql-select.html#SQL-FOR-UPDATE-SHARE\n\n> To address this issue, I have taken the initiative to create a patch\n> that prevents the addition of rowmarks for non-target tables when the\n> target table is using early locking. I would greatly appreciate it if\n> you could review the patch and provide any feedback or insights I may\n> be overlooking.\n\nA couple comments:\n\n* You're using GetFdwRoutineForRelation() with makecopy=false, and then\nclosing the relation. If the rd_fdwroutine was already set previously,\nthen the returned pointer will point into the relcache, which may be\ninvalid after closing the relation. I'd probably pass makecopy=true and\nthen free it. (Weirdly if you pass makecopy=false, you may or may not\nget a copy, so there's no way to know whether to free it or not.)\n\n* Core postgres doesn't really choose early locking. If\nRefetchForeignRow is not defined, then late locking is impossible, so\nit assumes that early locking is happening. That assumption is true for\npostgres_fdw, but might not be for other FDWs. 
What if an FDW doesn't\ndo early locking and also doesn't define RefetchForeignRow?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 21 May 2024 18:12:49 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Skip adding row-marks for non target tables when result\n relation is foreign table." }, { "msg_contents": "On Wed, May 22, 2024 at 10:13 AM Jeff Davis <[email protected]> wrote:\n> On Mon, 2024-05-06 at 23:10 +0100, SAIKIRAN AVULA wrote:\n> > Additionally, the commit afb9249d06f47d7a6d4a89fea0c3625fe43c5a5d,\n> > which introduced late locking for foreign tables, mentions that the\n> > benefits of late locking against a remote server are unclear, as the\n> > extra round trips required are likely to outweigh any potential\n> > concurrency improvements.\n>\n> The extra round trip only happens when EPQ finds a newer version of the\n> tuple, which should be the exceptional case. I'm not sure how this\n> balances out, but to me late locking still seems preferable. Early\n> locking is a huge performance hit in some cases (locking many more rows\n> than necessary).\n\nI might be missing something, but I think the extra round trip happens\nfor each foreign row locked using the RefetchForeignRow() API in\nExecLockRows(), so I think the overhead is caused in the normal case.\n\n> Early locking is also a violation of the documentation here:\n>\n> \"When a locking clause appears at the top level of a SELECT query, the\n> rows that are locked are exactly those that are returned by the query;\n> in the case of a join query, the rows locked are those that contribute\n> to returned join rows.\"\n\nYeah, but I think this holds true for SELECT queries postgres_fdw\nsends to the remote side. :)\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 9 Aug 2024 17:35:11 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Skip adding row-marks for non target tables when result relation\n is foreign table." }, { "msg_contents": "On Fri, 2024-08-09 at 17:35 +0900, Etsuro Fujita wrote:\n> I might be missing something, but I think the extra round trip\n> happens\n> for each foreign row locked using the RefetchForeignRow() API in\n> ExecLockRows(), so I think the overhead is caused in the normal case.\n\nIs there any sample code that implements late locking for a FDW? I'm\nnot quite clear on how it's supposed to work.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 14 Aug 2024 17:56:46 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Skip adding row-marks for non target tables when result\n relation is foreign table." }, { "msg_contents": "On Thu, Aug 15, 2024 at 9:56 AM Jeff Davis <[email protected]> wrote:\n> Is there any sample code that implements late locking for a FDW? I'm\n> not quite clear on how it's supposed to work.\n\nSee the patch in [1]. It would not apply to HEAD anymore, though.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Thu, 15 Aug 2024 20:45:44 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Skip adding row-marks for non target tables when result relation\n is foreign table." } ]
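For readers trying to place the APIs mentioned above: the choice between early and late locking is made entirely by which callbacks an FDW fills in to its FdwRoutine. The sketch below is a rough illustration of that corner of a handler function; my_fdw_handler and both callbacks are hypothetical, and postgres_fdw itself does not provide RefetchForeignRow, which is why it ends up with early locking via SELECT ... FOR UPDATE on the remote side.

#include "postgres.h"

#include "fmgr.h"
#include "foreign/fdwapi.h"

PG_MODULE_MAGIC;

/*
 * Hypothetical late-locking callbacks; a real FDW would implement these
 * properly.  Returning ROW_MARK_REFERENCE tells the planner to carry only
 * a row identifier for this relation instead of copying whole rows
 * (ROW_MARK_COPY).
 */
static RowMarkType
my_get_foreign_row_mark_type(RangeTblEntry *rte, LockClauseStrength strength)
{
	return ROW_MARK_REFERENCE;
}

static void
my_refetch_foreign_row(EState *estate, ExecRowMark *erm,
					   Datum rowid, TupleTableSlot *slot, bool *updated)
{
	/* A real FDW would re-fetch and lock the remote row by rowid here. */
	elog(ERROR, "late row locking not implemented in this sketch");
}

PG_FUNCTION_INFO_V1(my_fdw_handler);

Datum
my_fdw_handler(PG_FUNCTION_ARGS)
{
	FdwRoutine *routine = makeNode(FdwRoutine);

	/* ... scan, modify and EXPLAIN callbacks elided ... */

	/*
	 * Providing these two callbacks is what opts the FDW into late row
	 * locking: rows are re-fetched and locked only when the executor
	 * actually needs them, rather than being locked up front by shipping
	 * SELECT ... FOR UPDATE to the remote server.
	 */
	routine->GetForeignRowMarkType = my_get_foreign_row_mark_type;
	routine->RefetchForeignRow = my_refetch_foreign_row;

	PG_RETURN_POINTER(routine);
}

As the thread notes, the sticking points are that the rowid handed back to RefetchForeignRow is currently assumed to be a ctid, which breaks down when the remote table is partitioned, and that any extra round trips are what gets weighed against locking fewer rows.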
[ { "msg_contents": "Robert pointed out [1] that the planner fails if we have $SUBJECT,\nbecause tidpath.c can seize on the RLS-derived ctid constraint\ninstead of the CurrentOfExpr. Since the executor can only handle\nCurrentOfExpr in a TidScan's tidquals, that leads to a confusing\nruntime error.\n\nHere's a patch for that.\n\nHowever ... along the way to testing it, I found that you can only\nget such an RLS qual to work if it accepts \"(InvalidBlockNumber,0)\",\nbecause that's what the ctid field will look like in a\nnot-yet-stored-to-disk tuple. That's sufficiently weird, and so\nunduly in bed with undocumented implementation details, that I can't\nimagine anyone is actually using such an RLS condition or ever will.\nSo maybe this is not really worth fixing. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmobwgL1XyV4uyUd26Nxff5WVA7%2B9XUED4yjpvft83_MBAw%40mail.gmail.com", "msg_date": "Mon, 06 May 2024 19:31:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "WHERE CURRENT OF with RLS quals that are ctid conditions" }, { "msg_contents": "On Mon, May 6, 2024 at 7:31 PM Tom Lane <[email protected]> wrote:\n> Robert pointed out [1] that the planner fails if we have $SUBJECT,\n> because tidpath.c can seize on the RLS-derived ctid constraint\n> instead of the CurrentOfExpr. Since the executor can only handle\n> CurrentOfExpr in a TidScan's tidquals, that leads to a confusing\n> runtime error.\n>\n> Here's a patch for that.\n>\n> However ... along the way to testing it, I found that you can only\n> get such an RLS qual to work if it accepts \"(InvalidBlockNumber,0)\",\n> because that's what the ctid field will look like in a\n> not-yet-stored-to-disk tuple. That's sufficiently weird, and so\n> unduly in bed with undocumented implementation details, that I can't\n> imagine anyone is actually using such an RLS condition or ever will.\n> So maybe this is not really worth fixing. Thoughts?\n\nHmm, I thought the RLS condition needed to accept the old and new\nTIDs, but not (InvalidBlockNumber,0). I might well have misunderstood,\nthough.\n\nAs to whether this is worth fixing, I think it is, but it might not be\nworth back-patching the fix. Also, I'd really like to get disable_cost\nout of the picture here. That would require more code reorganization\nthan you've done here, but I think it would be worthwhile. I suppose\nthat could also be done as a separate patch, but I wonder if that\ndoesn't just amount to changing approximately the same code twice.\n\nOr maybe it doesn't, and this is worth doing on its own. I'm not sure;\nI haven't coded what I have in mind yet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 May 2024 09:47:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE CURRENT OF with RLS quals that are ctid conditions" }, { "msg_contents": "On Tue, May 7, 2024 at 9:47 AM Robert Haas <[email protected]> wrote:\n> As to whether this is worth fixing, I think it is, but it might not be\n> worth back-patching the fix. Also, I'd really like to get disable_cost\n> out of the picture here. That would require more code reorganization\n> than you've done here, but I think it would be worthwhile. I suppose\n> that could also be done as a separate patch, but I wonder if that\n> doesn't just amount to changing approximately the same code twice.\n>\n> Or maybe it doesn't, and this is worth doing on its own. 
I'm not sure;\n> I haven't coded what I have in mind yet.\n\nNever mind all this. I think what I have in mind requires doing what\nyou did first. So if you're happy with what you've got, I'd go for it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 May 2024 10:05:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE CURRENT OF with RLS quals that are ctid conditions" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, May 6, 2024 at 7:31 PM Tom Lane <[email protected]> wrote:\n>> So maybe this is not really worth fixing. Thoughts?\n\n> Hmm, I thought the RLS condition needed to accept the old and new\n> TIDs, but not (InvalidBlockNumber,0). I might well have misunderstood,\n> though.\n\nIf you leave the (InvalidBlockNumber,0) alternative out of the RLS\ncondition, my patch's test case fails because the row \"doesn't\nsatisfy the RLS condition\" (I forget the exact error message, but\nit was more or less that).\n\n> As to whether this is worth fixing, I think it is, but it might not be\n> worth back-patching the fix. Also, I'd really like to get disable_cost\n> out of the picture here. That would require more code reorganization\n> than you've done here, but I think it would be worthwhile. I suppose\n> that could also be done as a separate patch, but I wonder if that\n> doesn't just amount to changing approximately the same code twice.\n\nNo, because the disable_cost stuff is nowhere near here. In any case,\nwhat we were talking about was suppressing creation of competing\nnon-TIDScan paths. It's still going to be incumbent on tidpath.c to\ncreate a correct path, and as things stand it won't for this case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 10:05:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WHERE CURRENT OF with RLS quals that are ctid conditions" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Never mind all this. I think what I have in mind requires doing what\n> you did first. So if you're happy with what you've got, I'd go for it.\n\nOK. HEAD-only sounds like a good compromise. Barring objections,\nI'll do that later today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 10:16:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WHERE CURRENT OF with RLS quals that are ctid conditions" } ]
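The essence of the fix under discussion can be shown in a few lines: when a CurrentOfExpr and an ordinary ctid condition (such as one derived from an RLS policy) are both candidates for a TID scan, the CurrentOfExpr must win, and it must be the only tidqual, because that is all the executor can evaluate for WHERE CURRENT OF. The fragment below is a simplified illustration written for this summary, not the committed tidpath.c change.

#include "postgres.h"

#include "nodes/pathnodes.h"
#include "nodes/primnodes.h"

/*
 * Simplified illustration: given candidate TID quals (RestrictInfos),
 * return a list containing only the WHERE CURRENT OF qual if one is
 * present, since a TidScan cannot evaluate a CurrentOfExpr together with
 * other ctid conditions.
 */
static List *
prefer_current_of(List *tidquals)
{
	ListCell   *lc;

	foreach(lc, tidquals)
	{
		RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);

		if (IsA(rinfo->clause, CurrentOfExpr))
			return list_make1(rinfo);
	}

	return tidquals;
}

Without a rule of this shape, the planner can seize on the RLS-derived ctid constraint instead, which is what produced the confusing runtime error that started the thread.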
[ { "msg_contents": "hi,\n\nSELECT table_name, column_name, is_updatable\n FROM information_schema.columns\n WHERE table_name LIKE E'r_\\\\_view%'\n ORDER BY table_name, ordinal_position;\n\nat d1d286d83c0eed695910cb20d970ea9bea2e5001,\nthis query in src/test/regress/sql/updatable_views.sql\nmakes regress tests fail. maybe other query also,\nbut this is the first one that invokes the server crash.\n\n\nsrc3=# SELECT table_name, column_name, is_updatable\n FROM information_schema.columns\n WHERE table_name LIKE E'r_\\\\_view%'\n ORDER BY table_name, ordinal_position;\nTRAP: failed Assert(\"bms_is_valid_set(a)\"), File:\n\"../../Desktop/pg_src/src3/postgres/src/backend/nodes/bitmapset.c\",\nLine: 515, PID: 158266\npostgres: jian src3 [local] SELECT(ExceptionalCondition+0x106)[0x5579188c0b6f]\npostgres: jian src3 [local] SELECT(bms_is_member+0x56)[0x5579183581c7]\npostgres: jian src3 [local]\nSELECT(join_clause_is_movable_to+0x72)[0x557918439711]\npostgres: jian src3 [local] SELECT(+0x73e26c)[0x5579183a126c]\npostgres: jian src3 [local] SELECT(create_index_paths+0x3b8)[0x55791839d0ce]\npostgres: jian src3 [local] SELECT(+0x719b4d)[0x55791837cb4d]\npostgres: jian src3 [local] SELECT(+0x719400)[0x55791837c400]\npostgres: jian src3 [local] SELECT(+0x718e90)[0x55791837be90]\npostgres: jian src3 [local] SELECT(make_one_rel+0x187)[0x55791837bac5]\npostgres: jian src3 [local] SELECT(query_planner+0x4e8)[0x5579183d2dc2]\npostgres: jian src3 [local] SELECT(+0x7734ad)[0x5579183d64ad]\npostgres: jian src3 [local] SELECT(subquery_planner+0x14b9)[0x5579183d57e4]\npostgres: jian src3 [local] SELECT(standard_planner+0x365)[0x5579183d379e]\npostgres: jian src3 [local] SELECT(planner+0x81)[0x5579183d3426]\npostgres: jian src3 [local] SELECT(pg_plan_query+0xbb)[0x5579186100c8]\npostgres: jian src3 [local] SELECT(pg_plan_queries+0x11a)[0x5579186102de]\npostgres: jian src3 [local] SELECT(+0x9ad8f1)[0x5579186108f1]\npostgres: jian src3 [local] SELECT(PostgresMain+0xd4a)[0x557918618603]\npostgres: jian src3 [local] SELECT(+0x9a76a8)[0x55791860a6a8]\npostgres: jian src3 [local]\nSELECT(postmaster_child_launch+0x14d)[0x5579184d3430]\npostgres: jian src3 [local] SELECT(+0x879c28)[0x5579184dcc28]\npostgres: jian src3 [local] SELECT(+0x875278)[0x5579184d8278]\npostgres: jian src3 [local] SELECT(PostmasterMain+0x205f)[0x5579184d7837]\n\n\n version\n--------------------------------------------------------------------------------\n PostgreSQL 17devel_debug_build on x86_64-linux, compiled by gcc-11.4.0, 64-bit\n\n\nmeson config:\n-Dpgport=5458 \\\n-Dplperl=enabled \\\n-Dplpython=enabled \\\n-Dssl=openssl \\\n-Dldap=enabled \\\n-Dlibxml=enabled \\\n-Dlibxslt=enabled \\\n-Duuid=e2fs \\\n-Dzstd=enabled \\\n-Dlz4=enabled \\\n-Dcassert=true \\\n-Db_coverage=true \\\n-Dicu=enabled \\\n-Dbuildtype=debug \\\n-Dwerror=true \\\n-Dc_args='-Wunused-variable\n-Wuninitialized\n-Werror=maybe-uninitialized\n-Wreturn-type\n-DWRITE_READ_PARSE_PLAN_TREES\n-DREALLOCATE_BITMAPSETS\n-DCOPY_PARSE_PLAN_TREES\n-DRAW_EXPRESSION_COVERAGE_TEST -fno-omit-frame-pointer' \\\n-Ddocs_pdf=disabled \\\n-Ddocs_html_style=website \\\n-Dllvm=disabled \\\n-Dtap_tests=enabled \\\n-Dextra_version=_debug_build\n\n\nThis commit: d1d286d83c0eed695910cb20d970ea9bea2e5001\nRevert: Remove useless self-joins make it fail.\n\nthe preceding commit (81b2252e609cfa74550dd6804949485c094e4b85)\nwon't make the regress fail.\n\ni also found that not specifying c_args: `-DREALLOCATE_BITMAPSETS`,\nthe regress test won't fail.\n\n\nlater, i found out that `select 1 from 
information_schema.columns`\nwould also crash the server.\n\ninformation_schema.columns view is very complex.\nI get the view information_schema.columns definitions,\nomit unnecessary const and where qual parts of the it\nso the minimum reproducer is:\n\nSELECT 1\nFROM (((((\n (pg_attribute a LEFT JOIN pg_attrdef ad ON (((a.attrelid =\nad.adrelid) AND (a.attnum = ad.adnum))))\nJOIN (pg_class c JOIN pg_namespace nc ON (c.relnamespace = nc.oid)) ON\n(a.attrelid = c.oid))\nJOIN (pg_type t JOIN pg_namespace nt ON ((t.typnamespace = nt.oid)))\nON (a.atttypid = t.oid))\nLEFT JOIN (pg_type bt JOIN pg_namespace nbt ON (bt.typnamespace =\nnbt.oid)) ON ( t.typbasetype = bt.oid ))\nLEFT JOIN (pg_collation co JOIN pg_namespace nco ON ( co.collnamespace\n= nco.oid)) ON (a.attcollation = co.oid))\nLEFT JOIN (pg_depend dep JOIN pg_sequence seq ON (dep.objid =\nseq.seqrelid )) ON (((dep.refobjid = c.oid) AND (dep.refobjsubid =\na.attnum))));\n\n\n", "msg_date": "Tue, 7 May 2024 11:34:49 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS make\n server crash, regress test fail." }, { "msg_contents": "On Tue, May 7, 2024 at 11:35 AM jian he <[email protected]> wrote:\n\n> hi,\n>\n> SELECT table_name, column_name, is_updatable\n> FROM information_schema.columns\n> WHERE table_name LIKE E'r_\\\\_view%'\n> ORDER BY table_name, ordinal_position;\n>\n> at d1d286d83c0eed695910cb20d970ea9bea2e5001,\n> this query in src/test/regress/sql/updatable_views.sql\n> makes regress tests fail. maybe other query also,\n> but this is the first one that invokes the server crash.\n\n\nThank you for the report. I looked at this a little bit and I think\nhere is what happened. In deconstruct_distribute_oj_quals we call\ndistribute_quals_to_rels using the uncopied sjinfo->syn_lefthand as\nouterjoin_nonnullable, which eventually becomes rinfo->outer_relids.\nLater on, when we remove useless left joins, we modify\nsjinfo->syn_lefthand using bms_del_member and recycle\nsjinfo->syn_lefthand. And that causes the rinfo->outer_relids becomes\ninvalid, and finally triggers this issue in join_clause_is_movable_to.\n\nMaybe we want to bms_copy sjinfo->syn_lefthand first before using it as\nnonnullable_rels.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -1888,7 +1888,7 @@ deconstruct_distribute_oj_quals(PlannerInfo *root,\n qualscope = bms_union(sjinfo->syn_lefthand, sjinfo->syn_righthand);\n qualscope = bms_add_member(qualscope, sjinfo->ojrelid);\n ojscope = bms_union(sjinfo->min_lefthand, sjinfo->min_righthand);\n- nonnullable_rels = sjinfo->syn_lefthand;\n+ nonnullable_rels = bms_copy(sjinfo->syn_lefthand);\n\nI will take a closer look in the afternoon.\n\nThanks\nRichard\n\nOn Tue, May 7, 2024 at 11:35 AM jian he <[email protected]> wrote:hi,\n\nSELECT table_name, column_name, is_updatable\n  FROM information_schema.columns\n WHERE table_name LIKE E'r_\\\\_view%'\n ORDER BY table_name, ordinal_position;\n\nat d1d286d83c0eed695910cb20d970ea9bea2e5001,\nthis query in src/test/regress/sql/updatable_views.sql\nmakes regress tests fail. maybe other query also,\nbut this is the first one that invokes the server crash.Thank you for the report.  I looked at this a little bit and I thinkhere is what happened.  
In deconstruct_distribute_oj_quals we calldistribute_quals_to_rels using the uncopied sjinfo->syn_lefthand asouterjoin_nonnullable, which eventually becomes rinfo->outer_relids.Later on, when we remove useless left joins, we modifysjinfo->syn_lefthand using bms_del_member and recyclesjinfo->syn_lefthand.  And that causes the rinfo->outer_relids becomesinvalid, and finally triggers this issue in join_clause_is_movable_to.Maybe we want to bms_copy sjinfo->syn_lefthand first before using it asnonnullable_rels.--- a/src/backend/optimizer/plan/initsplan.c+++ b/src/backend/optimizer/plan/initsplan.c@@ -1888,7 +1888,7 @@ deconstruct_distribute_oj_quals(PlannerInfo *root,    qualscope = bms_union(sjinfo->syn_lefthand, sjinfo->syn_righthand);    qualscope = bms_add_member(qualscope, sjinfo->ojrelid);    ojscope = bms_union(sjinfo->min_lefthand, sjinfo->min_righthand);-   nonnullable_rels = sjinfo->syn_lefthand;+   nonnullable_rels = bms_copy(sjinfo->syn_lefthand);I will take a closer look in the afternoon.ThanksRichard", "msg_date": "Tue, 7 May 2024 12:47:02 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, 7 May 2024 at 16:47, Richard Guo <[email protected]> wrote:\n> --- a/src/backend/optimizer/plan/initsplan.c\n> +++ b/src/backend/optimizer/plan/initsplan.c\n> @@ -1888,7 +1888,7 @@ deconstruct_distribute_oj_quals(PlannerInfo *root,\n> qualscope = bms_union(sjinfo->syn_lefthand, sjinfo->syn_righthand);\n> qualscope = bms_add_member(qualscope, sjinfo->ojrelid);\n> ojscope = bms_union(sjinfo->min_lefthand, sjinfo->min_righthand);\n> - nonnullable_rels = sjinfo->syn_lefthand;\n> + nonnullable_rels = bms_copy(sjinfo->syn_lefthand);\n\nI was busy looking at this too and I came to the same conclusion.\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 17:00:17 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> Thank you for the report. I looked at this a little bit and I think\n> here is what happened. In deconstruct_distribute_oj_quals we call\n> distribute_quals_to_rels using the uncopied sjinfo->syn_lefthand as\n> outerjoin_nonnullable, which eventually becomes rinfo->outer_relids.\n> Later on, when we remove useless left joins, we modify\n> sjinfo->syn_lefthand using bms_del_member and recycle\n> sjinfo->syn_lefthand. And that causes the rinfo->outer_relids becomes\n> invalid, and finally triggers this issue in join_clause_is_movable_to.\n\nHmm, the SJE code didn't really touch any of this logic, so why\ndidn't we see the failure before?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 01:01:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, 7 May 2024 at 17:01, Tom Lane <[email protected]> wrote:\n>\n> Richard Guo <[email protected]> writes:\n> > Thank you for the report. I looked at this a little bit and I think\n> > here is what happened. 
In deconstruct_distribute_oj_quals we call\n> > distribute_quals_to_rels using the uncopied sjinfo->syn_lefthand as\n> > outerjoin_nonnullable, which eventually becomes rinfo->outer_relids.\n> > Later on, when we remove useless left joins, we modify\n> > sjinfo->syn_lefthand using bms_del_member and recycle\n> > sjinfo->syn_lefthand. And that causes the rinfo->outer_relids becomes\n> > invalid, and finally triggers this issue in join_clause_is_movable_to.\n>\n> Hmm, the SJE code didn't really touch any of this logic, so why\n> didn't we see the failure before?\n\nThe bms_free() occurs in remove_rel_from_query() on:\n\nsjinf->syn_lefthand = bms_del_member(sjinf->syn_lefthand, relid);\n\nI've not looked, but I assumed the revert must have removed some\ncommon code that was added and reverted with SJE.\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 17:11:57 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, 7 May 2024 at 17:11, David Rowley <[email protected]> wrote:\n> sjinf->syn_lefthand = bms_del_member(sjinf->syn_lefthand, relid);\n>\n> I've not looked, but I assumed the revert must have removed some\n> common code that was added and reverted with SJE.\n\nYeah, before the revert, that did:\n\n- sjinf->syn_lefthand = replace_relid(sjinf->syn_lefthand, relid, subst);\n\nThat replace code seems to have always done a bms_copy()\n\n-static Bitmapset *\n-replace_relid(Relids relids, int oldId, int newId)\n-{\n- if (oldId < 0)\n- return relids;\n-\n- /* Delete relid without substitution. */\n- if (newId < 0)\n- return bms_del_member(bms_copy(relids), oldId);\n-\n- /* Substitute newId for oldId. */\n- if (bms_is_member(oldId, relids))\n- return bms_add_member(bms_del_member(bms_copy(relids), oldId), newId);\n-\n- return relids;\n-}\n\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 17:15:11 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, 7 May 2024 at 15:35, jian he <[email protected]> wrote:\n> i also found that not specifying c_args: `-DREALLOCATE_BITMAPSETS`,\n> the regress test won't fail.\n\nIt would be good to get some build farm coverage of this so we don't\nhave to rely on manual testing. I wonder if it's a good idea to just\ndefine REALLOCATE_BITMAPSETS when #ifdef CLOBBER_FREED_MEMORY... or if\nwe should ask on the buildfarm-members list if someone wouldn't mind\ndefining it?\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 17:21:44 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." 
}, { "msg_contents": "David Rowley <[email protected]> writes:\n> Yeah, before the revert, that did:\n> - sjinf->syn_lefthand = replace_relid(sjinf->syn_lefthand, relid, subst);\n> That replace code seems to have always done a bms_copy()\n\nHmm, not always; see e0477837c.\n\nWhat I'm trying to figure out here is whether we have a live bug\nin this area in released branches; and if so, why we've not seen\nreports of that.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 May 2024 01:28:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, 7 May 2024 at 17:28, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Yeah, before the revert, that did:\n> > - sjinf->syn_lefthand = replace_relid(sjinf->syn_lefthand, relid, subst);\n> > That replace code seems to have always done a bms_copy()\n>\n> Hmm, not always; see e0477837c.\n\nIt was the discussion on that thread that led to the invention of\nREALLOCATE_BITMAPSETS\n\n> What I'm trying to figure out here is whether we have a live bug\n> in this area in released branches; and if so, why we've not seen\n> reports of that.\n\nWe could check what portions of REALLOCATE_BITMAPSETS are\nbackpatchable. It may not be applicable very far back because of v16's\n00b41463c. The bms_del_member() would have left a zero set rather than\ndoing bms_free() prior to that commit. There could be a bug in v16.\n\nDavid\n\n\n", "msg_date": "Tue, 7 May 2024 17:46:12 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, May 7, 2024 at 1:22 PM David Rowley <[email protected]> wrote:\n\n> It would be good to get some build farm coverage of this so we don't\n> have to rely on manual testing. I wonder if it's a good idea to just\n> define REALLOCATE_BITMAPSETS when #ifdef CLOBBER_FREED_MEMORY... or if\n> we should ask on the buildfarm-members list if someone wouldn't mind\n> defining it?\n\n\n+1 to have build farm coverage of REALLOCATE_BITMAPSETS. This flag\nseems quite useful.\n\nThanks\nRichard", "msg_date": "Tue, 7 May 2024 18:05:03 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, May 7, 2024 at 1:46 PM David Rowley <[email protected]> wrote:\n\n> On Tue, 7 May 2024 at 17:28, Tom Lane <[email protected]> wrote:\n> > What I'm trying to figure out here is whether we have a live bug\n> > in this area in released branches; and if so, why we've not seen\n> > reports of that.\n>\n> We could check what portions of REALLOCATE_BITMAPSETS are\n> backpatchable. It may not be applicable very far back because of v16's\n> 00b41463c. 
The bms_del_member() would have left a zero set rather than\n> doing bms_free() prior to that commit. There could be a bug in v16.\n\n\nI also think there might be a bug in v16, as long as\n'sjinfo->syn_lefthand' and 'rinfo->outer_relids' are referencing the\nsame bitmapset and the content of this bitmapset is altered through\n'sjinfo->syn_lefthand' without 'rinfo->outer_relids' being aware of\nthese changes. I tried to compose a query that can trigger this bug but\nfailed though.\n\nAnother thing that comes to my mind is that this issue shows that\nRestrictInfo.outer_relids could contain references to removed rels and\njoins, and RestrictInfo.outer_relids could be referenced after the\nremoval of useless left joins. Currently we do not have a mechanism to\nclean out the bits in outer_relids during outer join removal. That is\nto say, RestrictInfo.outer_relids might be referenced while including\nrels that should have been removed. I'm not sure if this is a problem.\n\nThanks\nRichard\n\nOn Tue, May 7, 2024 at 1:46 PM David Rowley <[email protected]> wrote:On Tue, 7 May 2024 at 17:28, Tom Lane <[email protected]> wrote:\n> What I'm trying to figure out here is whether we have a live bug\n> in this area in released branches; and if so, why we've not seen\n> reports of that.\n\nWe could check what portions of REALLOCATE_BITMAPSETS are\nbackpatchable. It may not be applicable very far back because of v16's\n00b41463c. The bms_del_member() would have left a zero set rather than\ndoing bms_free() prior to that commit.  There could be a bug in v16.I also think there might be a bug in v16, as long as'sjinfo->syn_lefthand' and 'rinfo->outer_relids' are referencing thesame bitmapset and the content of this bitmapset is altered through'sjinfo->syn_lefthand' without 'rinfo->outer_relids' being aware ofthese changes.  I tried to compose a query that can trigger this bug butfailed though.Another thing that comes to my mind is that this issue shows thatRestrictInfo.outer_relids could contain references to removed rels andjoins, and RestrictInfo.outer_relids could be referenced after theremoval of useless left joins.  Currently we do not have a mechanism toclean out the bits in outer_relids during outer join removal.  That isto say, RestrictInfo.outer_relids might be referenced while includingrels that should have been removed.  I'm not sure if this is a problem.ThanksRichard", "msg_date": "Tue, 7 May 2024 18:18:54 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, May 7, 2024 at 1:19 PM Richard Guo <[email protected]> wrote:\n> On Tue, May 7, 2024 at 1:46 PM David Rowley <[email protected]> wrote:\n>>\n>> On Tue, 7 May 2024 at 17:28, Tom Lane <[email protected]> wrote:\n>> > What I'm trying to figure out here is whether we have a live bug\n>> > in this area in released branches; and if so, why we've not seen\n>> > reports of that.\n>>\n>> We could check what portions of REALLOCATE_BITMAPSETS are\n>> backpatchable. It may not be applicable very far back because of v16's\n>> 00b41463c. The bms_del_member() would have left a zero set rather than\n>> doing bms_free() prior to that commit. 
There could be a bug in v16.\n>\n>\n> I also think there might be a bug in v16, as long as\n> 'sjinfo->syn_lefthand' and 'rinfo->outer_relids' are referencing the\n> same bitmapset and the content of this bitmapset is altered through\n> 'sjinfo->syn_lefthand' without 'rinfo->outer_relids' being aware of\n> these changes. I tried to compose a query that can trigger this bug but\n> failed though.\n\nCan sjinfo->syn_lefthand became empty set after bms_del_member()? If\nso, rinfo->outer_relids will become an invalid pointer. If so, it's\nobviously a bug, while it still might be very hard to make this\ntrigger a segfault.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Tue, 7 May 2024 14:42:16 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Tue, May 7, 2024 at 8:29 AM Tom Lane <[email protected]> wrote:\n> David Rowley <[email protected]> writes:\n> > Yeah, before the revert, that did:\n> > - sjinf->syn_lefthand = replace_relid(sjinf->syn_lefthand, relid, subst);\n> > That replace code seems to have always done a bms_copy()\n>\n> Hmm, not always; see e0477837c.\n>\n> What I'm trying to figure out here is whether we have a live bug\n> in this area in released branches; and if so, why we've not seen\n> reports of that.\n\nI didn't yet spot a particular bug. But this place looks dangerous,\nand it's very hard to prove there is no bug. Even if there is no bug,\nit seems very easy to unintentionally add a bug here. Should we just\naccept to always do bms_copy()?\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Tue, 7 May 2024 14:45:48 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On 2024-05-07 Tu 06:05, Richard Guo wrote:\n>\n> On Tue, May 7, 2024 at 1:22 PM David Rowley <[email protected]> wrote:\n>\n> It would be good to get some build farm coverage of this so we don't\n> have to rely on manual testing.  I wonder if it's a good idea to just\n> define REALLOCATE_BITMAPSETS when #ifdef CLOBBER_FREED_MEMORY... or if\n> we should ask on the buildfarm-members list if someone wouldn't mind\n> defining it?\n>\n>\n> +1 to have build farm coverage of REALLOCATE_BITMAPSETS. This flag\n> seems quite useful.\n>\n>\n\nI have added it to the CPPFLAGS on prion.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-05-07 Tu 06:05, Richard Guo\n wrote:\n\n\n\n\n\n\n\nOn Tue, May 7, 2024 at\n 1:22 PM David Rowley <[email protected]>\n wrote:\n\n It would be good to get some build farm coverage of this so\n we don't\n have to rely on manual testing.  I wonder if it's a good\n idea to just\n define REALLOCATE_BITMAPSETS when #ifdef\n CLOBBER_FREED_MEMORY... or if\n we should ask on the buildfarm-members list if someone\n wouldn't mind\n defining it?\n\n\n+1 to have build farm coverage of REALLOCATE_BITMAPSETS. 
\n This flag\n seems quite useful.\n\n\n\n\n\n\n\n\nI have added it to the CPPFLAGS on prion.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 7 May 2024 08:00:25 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-05-07 Tu 06:05, Richard Guo wrote:\n>> +1 to have build farm coverage of REALLOCATE_BITMAPSETS. This flag\n>> seems quite useful.\n\n> I have added it to the CPPFLAGS on prion.\n\n... and as expected, prion fell over.\n\nI find that Richard's proposed fix makes the core regression tests\npass, but we still fail check-world. So I'm afraid we need something\nmore aggressive, like the attached which makes make_restrictinfo\ncopy all its input bitmapsets. Without that, we still have sharing\nof bitmapsets across different RestrictInfos, which seems pretty\nscary given what we now see about the effects of 00b41463c. This\nseems annoyingly expensive, but maybe there's little choice?\n\nGiven this, we could remove ad-hoc bms_copy calls from the callers\nof make_restrictinfo, distribute_quals_to_rels, etc. I didn't go\nlooking for possible wins of that sort; there's unlikely to be a\nlot of them.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 May 2024 14:20:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Wed, 8 May 2024 at 06:20, Tom Lane <[email protected]> wrote:\n> I find that Richard's proposed fix makes the core regression tests\n> pass, but we still fail check-world. So I'm afraid we need something\n> more aggressive, like the attached which makes make_restrictinfo\n> copy all its input bitmapsets. Without that, we still have sharing\n> of bitmapsets across different RestrictInfos, which seems pretty\n> scary given what we now see about the effects of 00b41463c. This\n> seems annoyingly expensive, but maybe there's little choice?\n\nWe could make the policy copy-on-modify. If you put bms_copy around\nthe bms_del_member() calls in remove_rel_from_query(), does it pass\nthen?\n\nDavid\n\n\n", "msg_date": "Wed, 8 May 2024 10:35:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Wed, 8 May 2024 at 10:35, David Rowley <[email protected]> wrote:\n>\n> On Wed, 8 May 2024 at 06:20, Tom Lane <[email protected]> wrote:\n> > I find that Richard's proposed fix makes the core regression tests\n> > pass, but we still fail check-world. So I'm afraid we need something\n> > more aggressive, like the attached which makes make_restrictinfo\n> > copy all its input bitmapsets. Without that, we still have sharing\n> > of bitmapsets across different RestrictInfos, which seems pretty\n> > scary given what we now see about the effects of 00b41463c. This\n> > seems annoyingly expensive, but maybe there's little choice?\n>\n> We could make the policy copy-on-modify. 
If you put bms_copy around\n> the bms_del_member() calls in remove_rel_from_query(), does it pass\n> then?\n\nerr, I mean bms_copy() the set before passing to bms_del_member().\n\nDavid\n\n\n", "msg_date": "Wed, 8 May 2024 10:37:26 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 8 May 2024 at 06:20, Tom Lane <[email protected]> wrote:\n>> I find that Richard's proposed fix makes the core regression tests\n>> pass, but we still fail check-world. So I'm afraid we need something\n>> more aggressive, like the attached which makes make_restrictinfo\n>> copy all its input bitmapsets. Without that, we still have sharing\n>> of bitmapsets across different RestrictInfos, which seems pretty\n>> scary given what we now see about the effects of 00b41463c. This\n>> seems annoyingly expensive, but maybe there's little choice?\n\n> We could make the policy copy-on-modify. If you put bms_copy around\n> the bms_del_member() calls in remove_rel_from_query(), does it pass\n> then?\n\nDidn't test, but that route seems awfully invasive and fragile: how\nwill we find all the places to modify, or ensure that the policy\nis followed by future patches?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 18:40:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Wed, 8 May 2024 at 10:40, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > We could make the policy copy-on-modify. If you put bms_copy around\n> > the bms_del_member() calls in remove_rel_from_query(), does it pass\n> > then?\n>\n> Didn't test, but that route seems awfully invasive and fragile: how\n> will we find all the places to modify, or ensure that the policy\n> is followed by future patches?\n\nREALLOCATE_BITMAPSETS was invented for this and IMO, it found exactly\nthe problem it was invented to find.\n\nCopy-on-modify is our policy for node mutation. Why is it ok there but\nawfully fragile here?\n\nDavid\n\n\n", "msg_date": "Wed, 8 May 2024 10:47:32 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 8 May 2024 at 10:40, Tom Lane <[email protected]> wrote:\n>> Didn't test, but that route seems awfully invasive and fragile: how\n>> will we find all the places to modify, or ensure that the policy\n>> is followed by future patches?\n\n> REALLOCATE_BITMAPSETS was invented for this and IMO, it found exactly\n> the problem it was invented to find.\n\nNot in a way that gives me any confidence that we found *all* the\nproblems. If check-world finds a problem that the core tests did not,\nthen there's no reason to think there aren't still more issues that\ncheck-world happened not to trip over either.\n\n> Copy-on-modify is our policy for node mutation. Why is it ok there but\n> awfully fragile here?\n\nIt's only partly our policy: there are all those places where we don't\ndo it that way. 
The main problem that I see for trying to be 100%\nconsistent in that way is that once you modify a sub-node, full\ncopy-on-modify dictates replacing every ancestor node all the way to\nthe top of the tree. That's clearly impractical in the planner data\nstructures. So where are we going to stop exactly?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 18:55:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Wed, 8 May 2024 at 10:55, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > REALLOCATE_BITMAPSETS was invented for this and IMO, it found exactly\n> > the problem it was invented to find.\n>\n> Not in a way that gives me any confidence that we found *all* the\n> problems.\n\nHere are some statements I believe to be true:\n1. If REALLOCATE_BITMAPSETS is defined then modifications to a\nBitmapset will make a copy and free the original.\n2. If a query runs successfully without REALLOCATE_BITMAPSETS and\nAssert fails due to an invalid Bitmapset when REALLOCATE_BITMAPSETS is\ndefined, then we have > 1 pointer pointing to the same set and not all\nof them are being updated when the members are added/removed.\n\nGiven the above, I can't see what Bitmapset sharing problems we won't\nfind with REALLOCATE_BITMAPSETS.\n\nCan you share the exact scenario you're worried that we won't find so\nI can understand your concern?\n\nDavid\n\n\n", "msg_date": "Wed, 8 May 2024 11:03:16 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 8 May 2024 at 10:55, Tom Lane <[email protected]> wrote:\n>> Not in a way that gives me any confidence that we found *all* the\n>> problems.\n\n> Here are some statements I believe to be true:\n> 1. If REALLOCATE_BITMAPSETS is defined then modifications to a\n> Bitmapset will make a copy and free the original.\n> 2. If a query runs successfully without REALLOCATE_BITMAPSETS and\n> Assert fails due to an invalid Bitmapset when REALLOCATE_BITMAPSETS is\n> defined, then we have > 1 pointer pointing to the same set and not all\n> of them are being updated when the members are added/removed.\n\n> Given the above, I can't see what Bitmapset sharing problems we won't\n> find with REALLOCATE_BITMAPSETS.\n\nAnything where the trouble spots are in a code path we fail to\nexercise with our available test suites. If you think there are\nno such code paths, I'm sorry to disillusion you.\n\nI spent a little bit of time wondering if we could find problems in a\nmore static way by marking bitmapset fields as \"const\", but I fear\nthat would create a huge number of false positives.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 19:18:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." 
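
To make the sharing hazard discussed in the messages above concrete, here is a toy sketch in backend C. It is not planner code: the two local variables merely stand in for fields such as sjinfo->syn_lefthand and rinfo->outer_relids that can end up pointing at one and the same set, and the function name is made up for illustration.

#include "postgres.h"
#include "nodes/bitmapset.h"

static void
shared_bitmapset_hazard(void)
{
	Bitmapset  *relids = bms_make_singleton(1);
	Bitmapset  *alias;

	relids = bms_add_member(relids, 2);
	alias = relids;		/* second reference taken without bms_copy() */

	/*
	 * Without REALLOCATE_BITMAPSETS this clears the bit in place (the set
	 * does not become empty), so 'alias' silently sees the change.  With
	 * REALLOCATE_BITMAPSETS the old set is freed and a fresh copy returned,
	 * leaving 'alias' pointing at released memory.
	 */
	relids = bms_del_member(relids, 2);

	/* Fine in a normal build; can Assert-fail or read clobbered memory in a
	 * -DREALLOCATE_BITMAPSETS build. */
	(void) bms_is_member(1, alias);
}
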
}, { "msg_contents": "I traced down the other failure I was seeing in check-world, and\nfound that it came from deconstruct_distribute doing this:\n\n distribute_quals_to_rels(root, my_quals,\n jtitem,\n sjinfo,\n root->qual_security_level,\n jtitem->qualscope,\n ojscope, jtitem->nonnullable_rels,\n NULL, /* incompatible_relids */\n true, /* allow_equivalence */\n false, false, /* not clones */\n postponed_oj_qual_list);\n\nwhere jtitem->nonnullable_rels is the same as the jtitem's left_rels,\nwhich ends up as the syn_lefthand of the join's SpecialJoinInfo, and\nthen when remove_rel_from_query tries to adjust the syn_lefthand it\nbreaks the outer_relids of whatever RestrictInfos got made here.\n\nI was able to fix that by not letting jtitem->nonnullable_rels be\nthe same as left_rels. The attached alternative_1.patch does pass\ncheck-world. But I find it mighty unprincipled: the JoinTreeItem\ndata structures are full of shared relid sets, so why is this\nparticular sharing not OK? I still don't have any confidence that\nthere aren't more problems.\n\nAlong about here I started to wonder how come we are only seeing\nSpecialJoinInfo-vs-RestrictInfo sharing as a problem, when surely\nthere is plenty of cross-RestrictInfo sharing going on as well.\n(The above call is perfectly capable of making a lot of RestrictInfos,\nall with the same outer_relids.) That thought led me to look at\nremove_rel_from_restrictinfo, and darn if I didn't find this:\n\n /*\n * The clause_relids probably aren't shared with anything else, but let's\n * copy them just to be sure.\n */\n rinfo->clause_relids = bms_copy(rinfo->clause_relids);\n ...\n /* Likewise for required_relids */\n rinfo->required_relids = bms_copy(rinfo->required_relids);\n\nSo the reason we don't see cross-RestrictInfo breakage is that\nanalyzejoins.c is careful not to modify the original relid sets\nwhen modifying a RestrictInfo. (This comment is clearly wrong.)\n\nAnd that leads to the thought that we can fix our current sharing\nproblems by similarly avoiding overwriting the original sets\nin remove_rel_from_query. The attached alternative-2.patch\nattacks it that way, and also passes check-world.\n\nI like alternative-2.patch a lot better, not least because it\nonly adds cycles when join removal actually fires. Basically\nthis is putting the onus on the data structure modifier to\ncope with shared bitmapsets, rather than trying to say that\nsharing is disallowed.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 08 May 2024 14:24:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "BTW, now that I've wrapped my head around what's happening here,\nI believe that -DREALLOCATE_BITMAPSETS is introducing a bug where\nthere was none before. The changes that left-join removal makes\nwon't cause any of these sets to go to empty, so the bms_del_member\ncalls won't free the sets but just modify them in-place. And the\nsame change will/should be made in every relevant relid set, so\nthe fact that the sets may be shared isn't hurting anything.\n\nThis is, of course, pretty fragile and I'm totally on board with\nmaking it safer. 
But there's no live bug in released branches,\nand not in HEAD either unless you add -DREALLOCATE_BITMAPSETS.\nThat accounts for the lack of related field reports, and it means\nwe don't really need to back-patch anything.\n\nThis conclusion also reinforces my previously-vague feeling that\nwe should not consider making -DREALLOCATE_BITMAPSETS the default in\ndebug builds, as was proposed upthread. It's really a fundamentally\ndifferent behavior, and I strongly suspect that it can mask bugs as\nwell as introduce them (by hiding sharing in cases that'd be less\nbenign than this turns out to be). I'd rather not do development on\ntop of bitmapset infrastructure that acts entirely different from\nproduction bitmapset infrastructure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 May 2024 14:49:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Thu, 9 May 2024 at 06:49, Tom Lane <[email protected]> wrote:\n> BTW, now that I've wrapped my head around what's happening here,\n> I believe that -DREALLOCATE_BITMAPSETS is introducing a bug where\n> there was none before. The changes that left-join removal makes\n> won't cause any of these sets to go to empty, so the bms_del_member\n> calls won't free the sets but just modify them in-place. And the\n> same change will/should be made in every relevant relid set, so\n> the fact that the sets may be shared isn't hurting anything.\n\nFWIW, it just feels like we're willing to accept that the\nbms_del_member() is not updating all copies of the set in this case as\nthat particular behaviour is ok for this particular case. I know\nyou're not proposing this, but I don't think that would warrant\nrelaxing REALLOCATE_BITMAPSETS to not reallocate Bitmapsets on\nbms_del_member() and bms_del_members().\n\nIf all we have to do to make -DREALLOCATE_BITMAPSETS builds happy in\nmake check-world is to add a bms_copy inside the bms_del_member()\ncalls in remove_rel_from_query(), then I think it's a small price to\npay to allow us to maintain the additional coverage that\nREALLOCATE_BITMAPSETS provides. That seems like a small price to pay\nwhen the gains are removing an entire join.\n\n> This conclusion also reinforces my previously-vague feeling that\n> we should not consider making -DREALLOCATE_BITMAPSETS the default in\n> debug builds, as was proposed upthread. It's really a fundamentally\n> different behavior, and I strongly suspect that it can mask bugs as\n> well as introduce them (by hiding sharing in cases that'd be less\n> benign than this turns out to be). I'd rather not do development on\n> top of bitmapset infrastructure that acts entirely different from\n> production bitmapset infrastructure.\n\nMy primary interest in this feature is using it to catch bugs that\nwe're unlikely to ever hit in the regression tests. For example, the\nplanner works when there are <= 63 RTEs but falls over when there are\n64 because some bms_add_member() must reallocate more memory to store\nthe 64th RTI in a Bitmapset. I'd like to have something to make it\nmore likely we'll find bugs like this before the release instead of\nsomeone having a crash when they run some obscure query shape\ncontaining > 63 RTEs 2 or 4 years after the release.\n\nI'm happy Andrew added this to prion. 
Thanks for doing that.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 10:02:06 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Thu, 9 May 2024 at 06:24, Tom Lane <[email protected]> wrote:\n> I like alternative-2.patch a lot better, not least because it\n> only adds cycles when join removal actually fires. Basically\n> this is putting the onus on the data structure modifier to\n> cope with shared bitmapsets, rather than trying to say that\n> sharing is disallowed.\n>\n> Thoughts?\n\nI'm fine with this one as it's the same as what I already mentioned\nearlier. I had imagined doing bms_del_member(bms_copy ... but maybe\nthe compiler is able to optimise away the additional store. Likely, it\ndoes not matter much as pallocing memory likely adds far more overhead\nanyway.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 10:07:22 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Thu, 9 May 2024 at 06:49, Tom Lane <[email protected]> wrote:\n>> BTW, now that I've wrapped my head around what's happening here,\n>> I believe that -DREALLOCATE_BITMAPSETS is introducing a bug where\n>> there was none before. The changes that left-join removal makes\n>> won't cause any of these sets to go to empty, so the bms_del_member\n>> calls won't free the sets but just modify them in-place. And the\n>> same change will/should be made in every relevant relid set, so\n>> the fact that the sets may be shared isn't hurting anything.\n\n> FWIW, it just feels like we're willing to accept that the\n> bms_del_member() is not updating all copies of the set in this case as\n> that particular behaviour is ok for this particular case. I know\n> you're not proposing this,\n\nNo, I'm not. I was just trying to explain how come there's not\na visible bug. I quite agree that this is too fragile to leave\nas-is going forward. (One thing I'm wondering about is whether\nwe should back-patch despite the lack of visible bug, just in\ncase some future back-patch relies on the safer behavior.)\n\n>> This conclusion also reinforces my previously-vague feeling that\n>> we should not consider making -DREALLOCATE_BITMAPSETS the default in\n>> debug builds, as was proposed upthread.\n\n> My primary interest in this feature is using it to catch bugs that\n> we're unlikely to ever hit in the regression tests. For example, the\n> planner works when there are <= 63 RTEs but falls over when there are\n> 64 because some bms_add_member() must reallocate more memory to store\n> the 64th RTI in a Bitmapset. I'd like to have something to make it\n> more likely we'll find bugs like this before the release instead of\n> someone having a crash when they run some obscure query shape\n> containing > 63 RTEs 2 or 4 years after the release.\n\nAgain, I think -DREALLOCATE_BITMAPSETS adds a valuable testing weapon.\nBut if we make that the default in debug builds, then we'll get next\ndoor to zero testing of the behavior without it, and that seems like\na really bad idea given how different the behavior is.\n\n(Speaking of which, I wonder how many buildfarm members build without\n--enable-cassert. 
The answer could well be \"zero\", and that's likely\nnot good.)\n\n> I'm happy Andrew added this to prion. Thanks for doing that.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 May 2024 18:16:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I'm fine with this one as it's the same as what I already mentioned\n> earlier. I had imagined doing bms_del_member(bms_copy ... but maybe\n> the compiler is able to optimise away the additional store. Likely, it\n> does not matter much as pallocing memory likely adds far more overhead\n> anyway.\n\nI actually wrote it that way to start with, but undid it after\nnoticing that the existing code in remove_rel_from_restrictinfo\ndoes it in separate steps, and thinking that that was good for\nboth separation of concerns and a cleaner git history. I too\ncan't believe that an extra fetch will be noticeable compared\nto the cost of the adjacent bms_xxx operations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 May 2024 18:40:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "On Thu, May 9, 2024 at 6:40 AM Tom Lane <[email protected]> wrote:\n\n> David Rowley <[email protected]> writes:\n> > I'm fine with this one as it's the same as what I already mentioned\n> > earlier. I had imagined doing bms_del_member(bms_copy ... but maybe\n> > the compiler is able to optimise away the additional store. Likely, it\n> > does not matter much as pallocing memory likely adds far more overhead\n> > anyway.\n>\n> I actually wrote it that way to start with, but undid it after\n> noticing that the existing code in remove_rel_from_restrictinfo\n> does it in separate steps, and thinking that that was good for\n> both separation of concerns and a cleaner git history. I too\n> can't believe that an extra fetch will be noticeable compared\n> to the cost of the adjacent bms_xxx operations.\n\n\nI also think it seems better to do bms_copy in separate steps, not only\nbecause this keeps consistent with the existing code in\nremove_rel_from_restrictinfo, but also because we need to do\nbms_del_member twice for each lefthand/righthand relid set.\n\nSpeaking of consistency, do you think it would improve the code's\nreadability if we rearrange the code in remove_rel_from_query so that\nthe modifications of the same relid set are grouped together, just like\nwhat we do in remove_rel_from_restrictinfo? I mean something like:\n\n sjinf->min_lefthand = bms_copy(sjinf->min_lefthand);\n sjinf->min_lefthand = bms_del_member(sjinf->min_lefthand, relid);\n sjinf->min_lefthand = bms_del_member(sjinf->min_lefthand, ojrelid);\n\n ...\n\nThanks\nRichard\n\nOn Thu, May 9, 2024 at 6:40 AM Tom Lane <[email protected]> wrote:David Rowley <[email protected]> writes:\n> I'm fine with this one as it's the same as what I already mentioned\n> earlier.  I had imagined doing bms_del_member(bms_copy ... but maybe\n> the compiler is able to optimise away the additional store. 
Likely, it\n> does not matter much as pallocing memory likely adds far more overhead\n> anyway.\n\nI actually wrote it that way to start with, but undid it after\nnoticing that the existing code in remove_rel_from_restrictinfo\ndoes it in separate steps, and thinking that that was good for\nboth separation of concerns and a cleaner git history.  I too\ncan't believe that an extra fetch will be noticeable compared\nto the cost of the adjacent bms_xxx operations.I also think it seems better to do bms_copy in separate steps, not onlybecause this keeps consistent with the existing code inremove_rel_from_restrictinfo, but also because we need to dobms_del_member twice for each lefthand/righthand relid set.Speaking of consistency, do you think it would improve the code'sreadability if we rearrange the code in remove_rel_from_query so thatthe modifications of the same relid set are grouped together, just likewhat we do in remove_rel_from_restrictinfo?  I mean something like:   sjinf->min_lefthand = bms_copy(sjinf->min_lefthand);   sjinf->min_lefthand = bms_del_member(sjinf->min_lefthand, relid);   sjinf->min_lefthand = bms_del_member(sjinf->min_lefthand, ojrelid);   ...ThanksRichard", "msg_date": "Thu, 9 May 2024 10:43:31 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> I also think it seems better to do bms_copy in separate steps, not only\n> because this keeps consistent with the existing code in\n> remove_rel_from_restrictinfo, but also because we need to do\n> bms_del_member twice for each lefthand/righthand relid set.\n\nYeah. Of course, we don't need a bms_copy() in the second one,\nbut that'd just add even more asymmetry and chance for confusion.\n\n> Speaking of consistency, do you think it would improve the code's\n> readability if we rearrange the code in remove_rel_from_query so that\n> the modifications of the same relid set are grouped together, just like\n> what we do in remove_rel_from_restrictinfo?\n\nI left it alone, just because it didn't seem worth cluttering \"git\nblame\" here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 May 2024 11:07:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revert: Remove useless self-joins *and* -DREALLOCATE_BITMAPSETS\n make server crash, regress test fail." } ]
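
A minimal sketch of the idiom the thread converged on: never modify a relid set that might be shared, copy it first. remove_rel_from_restrictinfo() already follows this pattern for clause_relids and required_relids, and Tom's alternative-2 patch applies the same idea in remove_rel_from_query(). The helper name and plain int parameters below are made up for illustration and are not taken from the patch itself.

static Bitmapset *
copy_and_del_members(const Bitmapset *src, int relid, int ojrelid)
{
	/* Work on a private copy so any other holder of 'src' is unaffected. */
	Bitmapset  *dst = bms_copy(src);

	dst = bms_del_member(dst, relid);
	dst = bms_del_member(dst, ojrelid);
	return dst;
}

With such a helper the SpecialJoinInfo adjustments would read, for example, sjinf->min_lefthand = copy_and_del_members(sjinf->min_lefthand, relid, ojrelid), and likewise for min_righthand and the syn_* sets.
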
[ { "msg_contents": "pg_dump -Fc |pg_restore -l -N schema:\n\n| 2; 3079 18187 EXTENSION - pg_buffercache \n\nWithout -N schema also shows:\n\n| 2562; 0 0 COMMENT - EXTENSION pg_buffercache \n\nI mean literal s-c-h-e-m-a, but I suppose anything else will work the\nsame.\n\nBTW, I noticed that pg_restore -v shows that duplicate dependencies can be\nstored. We see things like this (and worse).\n\n| 4284; 1259 191439414 VIEW public wmg_server_view telsasoft\n| ; depends on: 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 612 23087 612\n\nI see that's possible not only for views, but also tables.\nThat's probaably wasteful of CPU, at least.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 May 2024 07:28:58 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "pg_restore -N loses extension comment" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> pg_dump -Fc |pg_restore -l -N schema:\n> | 2; 3079 18187 EXTENSION - pg_buffercache \n> Without -N schema also shows:\n> | 2562; 0 0 COMMENT - EXTENSION pg_buffercache \n\nHmm, but what happens if you actually do the restore?\n\nI think this may be a bug in -l mode: ProcessArchiveRestoreOptions\nsaves the result of _tocEntryRequired in te->reqs, but PrintTOCSummary\ndoesn't, and that will bollix its subsequent _tocEntryRequired checks\nfor \"dependent\" TOC entries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 May 2024 09:52:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore -N loses extension comment" }, { "msg_contents": "I wrote:\n> I think this may be a bug in -l mode: ProcessArchiveRestoreOptions\n> saves the result of _tocEntryRequired in te->reqs, but PrintTOCSummary\n> doesn't, and that will bollix its subsequent _tocEntryRequired checks\n> for \"dependent\" TOC entries.\n\nYeah ... the attached seems to fix it.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 07 May 2024 10:49:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore -N loses extension comment" } ]
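
For readers following the fix Tom describes, here is a hedged sketch of where the missing bookkeeping would go. The loop shape and the curSection variable are assumptions about pg_backup_archiver.c, not a copy of the applied patch; _tocEntryRequired() and te->reqs are the pieces named in the thread.

	/*
	 * In PrintTOCSummary(): remember the evaluation, as
	 * ProcessArchiveRestoreOptions() already does, so that later checks of
	 * dependent entries (such as the COMMENT on an extension) see the same
	 * decision that the listing itself used.
	 */
	for (te = AH->toc->next; te != AH->toc; te = te->next)
	{
		te->reqs = _tocEntryRequired(te, curSection, AH);

		if (te->reqs == 0)
			continue;			/* e.g. filtered out by -N schema */

		/* ... print the summary line for this entry ... */
	}
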
[ { "msg_contents": "Hi,\n\nNazir Bilal Yavuz <byavuz81(at)gmail(dot)com> wrote:\n\n>Any kind of feedback would be appreciated.\n\nI know it's coming from copy-and-paste, but\nI believe the flags could be:\n- dstfd = OpenTransientFile(tofile, O_RDWR | O_CREAT | O_EXCL | PG_BINARY);\n+ dstfd = OpenTransientFile(tofile, O_CREAT | O_WRONLY | O_TRUNC | O_EXCL |\nPG_BINARY);\n\nThe flags:\nO_WRONLY | O_TRUNC\n\nAllow the OS to make some optimizations, if you deem it possible.\n\nThe flag O_RDWR indicates that the file can be read, which is not true in\nthis case.\nThe destination file will just be written.\n\nbest regards,\nRanier Vilela", "msg_date": "Tue, 7 May 2024 10:28:19 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE DATABASE with filesystem cloning" } ]
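
As a concrete illustration of the suggestion above, the destination-side open could look like the sketch below. The variable names are taken from the quoted diff; the ereport() is added here only to make the fragment self-contained and is not part of the proposal.

	dstfd = OpenTransientFile(tofile,
							  O_WRONLY | O_CREAT | O_EXCL | PG_BINARY);
	if (dstfd < 0)
		ereport(ERROR,
				(errcode_for_file_access(),
				 errmsg("could not create file \"%s\": %m", tofile)));

One note on the proposed flag set: because O_CREAT | O_EXCL guarantees the file is newly created, the extra O_TRUNC in the quoted diff has no effect; the meaningful change is O_RDWR becoming O_WRONLY.
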
[ { "msg_contents": "In this commit:\n\n\tcommit 34768ee3616\n\tAuthor: Peter Eisentraut <[email protected]>\n\tDate: Sun Mar 24 07:37:13 2024 +0100\n\t\n\t Add temporal FOREIGN KEY contraints\n\t\n\t Add PERIOD clause to foreign key constraint definitions. This is\n\t supported for range and multirange types. Temporal foreign keys check\n\t for range containment instead of equality.\n\t\n\t This feature matches the behavior of the SQL standard temporal foreign\n\t keys, but it works on PostgreSQL's native ranges instead of SQL's\n\t \"periods\", which don't exist in PostgreSQL (yet).\n\t\n\t Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET DEFAULT}\n\t are not supported yet.\n\t\n\t Author: Paul A. Jungwirth <[email protected]>\n\t Reviewed-by: Peter Eisentraut <[email protected]>\n\t Reviewed-by: jian he <[email protected]>\n\t Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com\n\nthis text was added to create_table.sgml:\n\n\tIn addition, the referenced table must have a primary\n\tkey or unique constraint declared with <literal>WITHOUT\n-->\tOVERLAPS</literal>. Finally, if one side of the foreign key\n-->\tuses <literal>PERIOD</literal>, the other side must too. If the\n\t<replaceable class=\"parameter\">refcolumn</replaceable> list is\n\tomitted, the <literal>WITHOUT OVERLAPS</literal> part of the\n\tprimary key is treated as if marked with <literal>PERIOD</literal>.\n\nIn the two marked lines, it says \"if one side of the foreign key uses\nPERIOD, the other side must too.\" However, looking at the example\nqueries, it seems like if the foreign side has PERIOD, the primary side\nmust have WITHOUT OVERLAPS, not PERIOD.\n\nDoes this doc text need correcting?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 7 May 2024 10:54:30 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "PERIOD foreign key feature" }, { "msg_contents": "On Tue, May 7, 2024 at 7:54 AM Bruce Momjian <[email protected]> wrote:\n\n> In this commit:\n>\n> commit 34768ee3616\n> Author: Peter Eisentraut <[email protected]>\n> Date: Sun Mar 24 07:37:13 2024 +0100\n>\n> Add temporal FOREIGN KEY contraints\n>\n> Add PERIOD clause to foreign key constraint definitions. This\n> is\n> supported for range and multirange types. Temporal foreign\n> keys check\n> for range containment instead of equality.\n>\n> This feature matches the behavior of the SQL standard temporal\n> foreign\n> keys, but it works on PostgreSQL's native ranges instead of\n> SQL's\n> \"periods\", which don't exist in PostgreSQL (yet).\n>\n> Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET\n> DEFAULT}\n> are not supported yet.\n>\n> Author: Paul A. Jungwirth <[email protected]>\n> Reviewed-by: Peter Eisentraut <[email protected]>\n> Reviewed-by: jian he <[email protected]>\n> Discussion:\n> https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com\n>\n> this text was added to create_table.sgml:\n>\n> In addition, the referenced table must have a primary\n> key or unique constraint declared with <literal>WITHOUT\n> --> OVERLAPS</literal>. Finally, if one side of the foreign key\n> --> uses <literal>PERIOD</literal>, the other side must too. 
If the\n> <replaceable class=\"parameter\">refcolumn</replaceable> list is\n> omitted, the <literal>WITHOUT OVERLAPS</literal> part of the\n> primary key is treated as if marked with <literal>PERIOD</literal>.\n>\n> In the two marked lines, it says \"if one side of the foreign key uses\n> PERIOD, the other side must too.\" However, looking at the example\n> queries, it seems like if the foreign side has PERIOD, the primary side\n> must have WITHOUT OVERLAPS, not PERIOD.\n>\n> Does this doc text need correcting?\n>\n>\nThe text is factually correct, though a bit hard to parse.\n\n\"the other side\" refers to the part after \"REFERENCES\":\n\nFOREIGN KEY ( column_name [, ... ] [, PERIOD column_name ] ) REFERENCES\nreftable [ ( refcolumn [, ... ] [, PERIOD column_name ] ) ]\n\n***(shouldn't the second occurrence be [, PERIOD refcolum] ?)\n\nThe text is pointing out that since the refcolumn specification is optional\nyou may very well not see a second PERIOD keyword in the clause. Instead\nit will be inferred from the PK.\n\nMaybe:\n\nFinally, if the foreign key has a PERIOD column_name specification the\ncorresponding refcolumn, if present, must also be marked PERIOD. If the\nrefcolumn clause is omitted, and thus the reftable's primary key constraint\nchosen, the primary key must have its final column marked WITHOUT OVERLAPS.\n\nDavid J.\n\nOn Tue, May 7, 2024 at 7:54 AM Bruce Momjian <[email protected]> wrote:In this commit:\n\n        commit 34768ee3616\n        Author: Peter Eisentraut <[email protected]>\n        Date:   Sun Mar 24 07:37:13 2024 +0100\n\n            Add temporal FOREIGN KEY contraints\n\n            Add PERIOD clause to foreign key constraint definitions.  This is\n            supported for range and multirange types.  Temporal foreign keys check\n            for range containment instead of equality.\n\n            This feature matches the behavior of the SQL standard temporal foreign\n            keys, but it works on PostgreSQL's native ranges instead of SQL's\n            \"periods\", which don't exist in PostgreSQL (yet).\n\n            Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET DEFAULT}\n            are not supported yet.\n\n            Author: Paul A. Jungwirth <[email protected]>\n            Reviewed-by: Peter Eisentraut <[email protected]>\n            Reviewed-by: jian he <[email protected]>\n            Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com\n\nthis text was added to create_table.sgml:\n\n        In addition, the referenced table must have a primary\n        key or unique constraint declared with <literal>WITHOUT\n-->     OVERLAPS</literal>.  Finally, if one side of the foreign key\n-->     uses <literal>PERIOD</literal>, the other side must too.  If the\n        <replaceable class=\"parameter\">refcolumn</replaceable> list is\n        omitted, the <literal>WITHOUT OVERLAPS</literal> part of the\n        primary key is treated as if marked with <literal>PERIOD</literal>.\n\nIn the two marked lines, it says \"if one side of the foreign key uses\nPERIOD, the other side must too.\"  However, looking at the example\nqueries, it seems like if the foreign side has PERIOD, the primary side\nmust have WITHOUT OVERLAPS, not PERIOD.\n\nDoes this doc text need correcting?The text is factually correct, though a bit hard to parse.\"the other side\" refers to the part after \"REFERENCES\":FOREIGN KEY ( column_name [, ... 
] [, PERIOD column_name ] ) REFERENCES reftable [ ( refcolumn [, ... ] [, PERIOD column_name ] ) ]***(shouldn't the second occurrence be [, PERIOD refcolum] ?)The text is pointing out that since the refcolumn specification is optional you may very well not see a second PERIOD keyword in the clause.  Instead it will be inferred from the PK.Maybe:Finally, if the foreign key has a PERIOD column_name specification the corresponding refcolumn, if present, must also be marked PERIOD.  If the refcolumn clause is omitted, and thus the reftable's primary key constraint chosen, the primary key must have its final column marked WITHOUT OVERLAPS.David J.", "msg_date": "Tue, 7 May 2024 08:23:52 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERIOD foreign key feature" }, { "msg_contents": "On 5/7/24 08:23, David G. Johnston wrote:\n> On Tue, May 7, 2024 at 7:54 AM Bruce Momjian <[email protected] <mailto:[email protected]>> wrote:\n> In the two marked lines, it says \"if one side of the foreign key uses\n> PERIOD, the other side must too.\"  However, looking at the example\n> queries, it seems like if the foreign side has PERIOD, the primary side\n> must have WITHOUT OVERLAPS, not PERIOD.\n> \n> Does this doc text need correcting?\n> \n> \n> The text is factually correct, though a bit hard to parse.\n> \n> \"the other side\" refers to the part after \"REFERENCES\":\n> \n> FOREIGN KEY ( column_name [, ... ] [, PERIOD column_name ] ) REFERENCES reftable [ ( refcolumn [, \n> ... ] [, PERIOD column_name ] ) ]\n> \n> ***(shouldn't the second occurrence be [, PERIOD refcolum] ?)\n> \n> The text is pointing out that since the refcolumn specification is optional you may very well not \n> see a second PERIOD keyword in the clause.  Instead it will be inferred from the PK.\n> \n> Maybe:\n> \n> Finally, if the foreign key has a PERIOD column_name specification the corresponding refcolumn, if \n> present, must also be marked PERIOD.  If the refcolumn clause is omitted, and thus the reftable's \n> primary key constraint chosen, the primary key must have its final column marked WITHOUT OVERLAPS.\n\nYes, David is correct here on all points. I like his suggestion to clarify the language here also. \nIf you need a patch from me let me know, but I assume it's something a committer can just make happen?\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]\n\n\n", "msg_date": "Tue, 7 May 2024 09:43:58 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERIOD foreign key feature" }, { "msg_contents": "On 07.05.24 18:43, Paul Jungwirth wrote:\n> On 5/7/24 08:23, David G. Johnston wrote:\n>> On Tue, May 7, 2024 at 7:54 AM Bruce Momjian <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>     In the two marked lines, it says \"if one side of the foreign key uses\n>>     PERIOD, the other side must too.\"  However, looking at the example\n>>     queries, it seems like if the foreign side has PERIOD, the primary \n>> side\n>>     must have WITHOUT OVERLAPS, not PERIOD.\n>>\n>>     Does this doc text need correcting?\n>>\n>>\n>> The text is factually correct, though a bit hard to parse.\n>>\n>> \"the other side\" refers to the part after \"REFERENCES\":\n>>\n>> FOREIGN KEY ( column_name [, ... ] [, PERIOD column_name ] ) \n>> REFERENCES reftable [ ( refcolumn [, ... 
] [, PERIOD column_name ] ) ]\n>>\n>> ***(shouldn't the second occurrence be [, PERIOD refcolum] ?)\n>>\n>> The text is pointing out that since the refcolumn specification is \n>> optional you may very well not see a second PERIOD keyword in the \n>> clause.  Instead it will be inferred from the PK.\n>>\n>> Maybe:\n>>\n>> Finally, if the foreign key has a PERIOD column_name specification the \n>> corresponding refcolumn, if present, must also be marked PERIOD.  If \n>> the refcolumn clause is omitted, and thus the reftable's primary key \n>> constraint chosen, the primary key must have its final column marked \n>> WITHOUT OVERLAPS.\n> \n> Yes, David is correct here on all points. I like his suggestion to \n> clarify the language here also. If you need a patch from me let me know, \n> but I assume it's something a committer can just make happen?\n\nIn principle yes, but it's also very helpful if someone produces an \nactual patch file, with complete commit message, credits, mailing list \nlink, etc.\n\n\n\n", "msg_date": "Wed, 8 May 2024 14:29:34 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERIOD foreign key feature" }, { "msg_contents": "On Wed, May 8, 2024 at 02:29:34PM +0200, Peter Eisentraut wrote:\n> > > Finally, if the foreign key has a PERIOD column_name specification\n> > > the corresponding refcolumn, if present, must also be marked\n> > > PERIOD.  If the refcolumn clause is omitted, and thus the reftable's\n> > > primary key constraint chosen, the primary key must have its final\n> > > column marked WITHOUT OVERLAPS.\n> > \n> > Yes, David is correct here on all points. I like his suggestion to\n> > clarify the language here also. If you need a patch from me let me know,\n> > but I assume it's something a committer can just make happen?\n> \n> In principle yes, but it's also very helpful if someone produces an actual\n> patch file, with complete commit message, credits, mailing list link, etc.\n\nI am ready to do the work, but waited a day for Peter to reply, since he\nwas the author of the text.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 8 May 2024 10:44:17 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PERIOD foreign key feature" }, { "msg_contents": "On 5/8/24 07:44, Bruce Momjian wrote:\n> On Wed, May 8, 2024 at 02:29:34PM +0200, Peter Eisentraut wrote:\n>>> Yes, David is correct here on all points. I like his suggestion to\n>>> clarify the language here also. If you need a patch from me let me know,\n>>> but I assume it's something a committer can just make happen?\n>>\n>> In principle yes, but it's also very helpful if someone produces an actual\n>> patch file, with complete commit message, credits, mailing list link, etc.\n> \n> I am ready to do the work, but waited a day for Peter to reply, since he\n> was the author of the text.\n\nHere is a patch for this.\n\nYours,\n\n-- \nPaul ~{:-)\[email protected]", "msg_date": "Wed, 8 May 2024 20:47:45 -0700", "msg_from": "Paul Jungwirth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERIOD foreign key feature" }, { "msg_contents": "On Wed, May 8, 2024 at 08:47:45PM -0700, Paul Jungwirth wrote:\n> On 5/8/24 07:44, Bruce Momjian wrote:\n> > On Wed, May 8, 2024 at 02:29:34PM +0200, Peter Eisentraut wrote:\n> > > > Yes, David is correct here on all points. 
I like his suggestion to\n> > > > clarify the language here also. If you need a patch from me let me know,\n> > > > but I assume it's something a committer can just make happen?\n> > > \n> > > In principle yes, but it's also very helpful if someone produces an actual\n> > > patch file, with complete commit message, credits, mailing list link, etc.\n> > \n> > I am ready to do the work, but waited a day for Peter to reply, since he\n> > was the author of the text.\n> \n> Here is a patch for this.\n\nThanks, patch applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 16:34:21 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PERIOD foreign key feature" } ]
[ { "msg_contents": "Hi,\n\nAs you may know, aggregates like SELECT MIN(unique1) FROM tenk1; are\nrewritten as SELECT unique1 FROM tenk1 ORDER BY unique1 USING < LIMIT\n1; by using the optional sortop field in the aggregator.\nHowever, this optimization is disabled for clauses that in itself have\nan ORDER BY clause such as `MIN(unique1 ORDER BY <anything>), because\n<anything> can cause reordering of distinguisable values like 1.0 and\n1.00, which then causes measurable differences in the output. In the\ngeneral case, that's a good reason to not apply this optimization, but\nin some cases, we could still apply the index optimization.\n\nOne of those cases is fixed in the attached patch: if we order by the\nsame column that we're aggregating, using the same order class as the\naggregate's sort operator (i.e. the aggregate's sortop is in the same\nbtree opclass as the ORDER BY's sort operator), then we can still use\nthe index operation: The sort behaviour isn't changed, thus we can\napply the optimization.\n\nPFA the small patch that implements this.\n\nNote that we can't blindly accept just any ordering by the same\ncolumn: If we had an opclass that sorted numeric values by the length\nof the significant/mantissa, then that'd provide a different (and\ndistinct) ordering from that which is expected by the normal\nmin()/max() aggregates for numeric, which could cause us to return\narguably incorrect results for the aggregate expression.\n\nAlternatively, the current code could be changed to build indexed\npaths that append the SORT BY paths to the aggregate's sort operator,\nbut that'd significantly increase complexity here.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Wed, 8 May 2024 12:13:29 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n\n> PFA the small patch that implements this.\n\nI don't have enough knowledge to have an opinion on most of the patch\nother than it looks okay at a glance, but the list API usage could be\nupdated to more modern variants:\n\n> diff --git a/src/backend/optimizer/plan/planagg.c b/src/backend/optimizer/plan/planagg.c\n> index afb5445b77..d8479fe286 100644\n> --- a/src/backend/optimizer/plan/planagg.c\n> +++ b/src/backend/optimizer/plan/planagg.c\n> @@ -253,6 +253,16 @@ can_minmax_aggs(PlannerInfo *root, List **context)\n> \t\tif (list_length(aggref->args) != 1)\n> \t\t\treturn false;\t\t/* it couldn't be MIN/MAX */\n> \n> +\t\t/*\n> +\t\t * We might implement the optimization when a FILTER clause is present\n> +\t\t * by adding the filter to the quals of the generated subquery. 
For\n> +\t\t * now, just punt.\n> +\t\t */\n> +\t\tif (aggref->aggfilter != NULL)\n> +\t\t\treturn false;\n> +\n> +\t\tcurTarget = (TargetEntry *) linitial(aggref->args);\n\nThis could be linitial_node(TargetEntry, aggref->args).\n\n> +\t\t\tif (list_length(aggref->aggorder) > 1)\n> +\t\t\t\treturn false;\n> +\n> +\t\t\torderClause = castNode(SortGroupClause, linitial(aggref->aggorder));\n\nThis could be linitial_node(SortGroupClause, aggref->aggorder).\n\n> +\t\t\tif (orderClause->sortop != aggsortop)\n> +\t\t\t{\n> +\t\t\t\tList *btclasses;\n> +\t\t\t\tListCell *cell;\n> +\t\t\t\tbool\tmatch = false;\n> +\n> +\t\t\t\tbtclasses = get_op_btree_interpretation(orderClause->sortop);\n> +\n> +\t\t\t\tforeach(cell, btclasses)\n> +\t\t\t\t{\n> +\t\t\t\t\tOpBtreeInterpretation *interpretation;\n> +\t\t\t\t\tinterpretation = (OpBtreeInterpretation *) lfirst(cell);\n\nThis could be foreach_ptr(OpBtreeInterpretation, interpretation, btclasses),\nwhich also eliminates the need for the explicit `ListCell *` variable\nand lfirst() call.\n\n- ilmari\n\n\n", "msg_date": "Wed, 08 May 2024 11:58:55 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On Wed, 8 May 2024 at 22:13, Matthias van de Meent\n<[email protected]> wrote:\n> As you may know, aggregates like SELECT MIN(unique1) FROM tenk1; are\n> rewritten as SELECT unique1 FROM tenk1 ORDER BY unique1 USING < LIMIT\n> 1; by using the optional sortop field in the aggregator.\n> However, this optimization is disabled for clauses that in itself have\n> an ORDER BY clause such as `MIN(unique1 ORDER BY <anything>), because\n> <anything> can cause reordering of distinguisable values like 1.0 and\n> 1.00, which then causes measurable differences in the output. In the\n> general case, that's a good reason to not apply this optimization, but\n> in some cases, we could still apply the index optimization.\n\nI wonder if we should also consider as an alternative to this to just\nhave an aggregate support function, similar to\nSupportRequestOptimizeWindowClause that just nullifies the aggorder /\naggdistinct fields for Min/Max aggregates on types where there's no\npossible difference in output when calling the transition function on\nrows in a different order.\n\nWould that apply in enough cases for you?\n\nI think it would rule out Min(numeric) and Max(numeric). We were\ncareful not to affect the number of decimal places in the numeric\noutput when using the moving aggregate inverse transition\ninfrastructure for WindowFuncs, so I agree we should maintain an\nability to control the aggregate transition order for numeric. (See\ndo_numeric_discard's maxScale if check)\n\nI don't think floating point types have the same issues here. 
At least\n+1.0 is greater than -1.0.\n\nAre there any strange collation rules that would cause issues if we\ndid this with the text types?\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 12:26:08 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On Thu, 9 May 2024 at 12:26, David Rowley <[email protected]> wrote:\n> I wonder if we should also consider as an alternative to this to just\n> have an aggregate support function, similar to\n> SupportRequestOptimizeWindowClause that just nullifies the aggorder /\n> aggdistinct fields for Min/Max aggregates on types where there's no\n> possible difference in output when calling the transition function on\n> rows in a different order.\n>\n> Would that apply in enough cases for you?\n\nOne additional thought is that the above method would also help\neliminate redundant sorting in queries with a GROUP BY clause.\nWhereas, the can_minmax_aggs optimisation is not applied in that case.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 13:08:05 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On Thu, 9 May 2024 at 13:08, David Rowley <[email protected]> wrote:\n> One additional thought is that the above method would also help\n> eliminate redundant sorting in queries with a GROUP BY clause.\n> Whereas, the can_minmax_aggs optimisation is not applied in that case.\n\nAnother argument for using this method is that\nSupportRequestOptimizeAggref could allow future unrelated\noptimisations such as swapping count(<non-nullable-col>) for count(*).\nWhere <non-nullable-col> is a NOT NULL column and isn't nullable by\nany outer join. Doing that could speed some queries up quite a bit as\nit may mean fewer columns to deform from the tuple. You could imagine\na fact table with many columns and a few dimensions, often the\ndimension columns that you'd expect to use in GROUP BY would appear\nbefore the columns you'd aggregate on.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 15:28:45 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On 5/8/24 17:13, Matthias van de Meent wrote:\n> As you may know, aggregates like SELECT MIN(unique1) FROM tenk1; are\n> rewritten as SELECT unique1 FROM tenk1 ORDER BY unique1 USING < LIMIT\n> 1; by using the optional sortop field in the aggregator.\n> However, this optimization is disabled for clauses that in itself have\n> an ORDER BY clause such as `MIN(unique1 ORDER BY <anything>), because\n> <anything> can cause reordering of distinguisable values like 1.0 and\n> 1.00, which then causes measurable differences in the output. In the\n> general case, that's a good reason to not apply this optimization, but\n> in some cases, we could still apply the index optimization.\nThanks for the job! I guess you could be more brave and push down also \nFILTER statement.\n> \n> One of those cases is fixed in the attached patch: if we order by the\n> same column that we're aggregating, using the same order class as the\n> aggregate's sort operator (i.e. 
the aggregate's sortop is in the same\n> btree opclass as the ORDER BY's sort operator), then we can still use\n> the index operation: The sort behaviour isn't changed, thus we can\n> apply the optimization.\n> \n> PFA the small patch that implements this.\n> \n> Note that we can't blindly accept just any ordering by the same\n> column: If we had an opclass that sorted numeric values by the length\n> of the significant/mantissa, then that'd provide a different (and\n> distinct) ordering from that which is expected by the normal\n> min()/max() aggregates for numeric, which could cause us to return\n> arguably incorrect results for the aggregate expression.\nAs I see, the code:\naggsortop = fetch_agg_sort_op(aggref->aggfnoid);\nif (!OidIsValid(aggsortop))\n return false;\t\t/* not a MIN/MAX aggregate */\n\nused twice and can be evaluated earlier to avoid duplicated code.\n\nAlso, I'm unsure about the necessity of looking through the btree \nclasses. Maybe just to check the commutator to the sortop, like in the \ndiff attached? Or could you provide an example to support your approach?\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Wed, 17 Jul 2024 10:28:58 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On 5/9/24 08:08, David Rowley wrote:\n> On Thu, 9 May 2024 at 12:26, David Rowley <[email protected]> wrote:\n>> I wonder if we should also consider as an alternative to this to just\n>> have an aggregate support function, similar to\n>> SupportRequestOptimizeWindowClause that just nullifies the aggorder /\n>> aggdistinct fields for Min/Max aggregates on types where there's no\n>> possible difference in output when calling the transition function on\n>> rows in a different order.\n>>\n>> Would that apply in enough cases for you?\n> \n> One additional thought is that the above method would also help\n> eliminate redundant sorting in queries with a GROUP BY clause.\n> Whereas, the can_minmax_aggs optimisation is not applied in that case.\nI generally like the idea of a support function. But as I can see, the \ncan_minmax_aggs() rejects if any of the aggregates don't pass the \nchecks. The prosupport feature is designed to be applied to each \nfunction separately. How do you think to avoid it?\nAlso, I don't clearly understand the case you mentioned here - does it \nmean that you want to nullify orders for other aggregate types if they \nare the same as the incoming order?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 12:12:04 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On Wed, 17 Jul 2024 at 05:29, Andrei Lepikhov <[email protected]> wrote:\n>\n> On 5/8/24 17:13, Matthias van de Meent wrote:\n> > As you may know, aggregates like SELECT MIN(unique1) FROM tenk1; are\n> > rewritten as SELECT unique1 FROM tenk1 ORDER BY unique1 USING < LIMIT\n> > 1; by using the optional sortop field in the aggregator.\n> > However, this optimization is disabled for clauses that in itself have\n> > an ORDER BY clause such as `MIN(unique1 ORDER BY <anything>), because\n> > <anything> can cause reordering of distinguisable values like 1.0 and\n> > 1.00, which then causes measurable differences in the output. 
In the\n> > general case, that's a good reason to not apply this optimization, but\n> > in some cases, we could still apply the index optimization.\n>\n> Thanks for the job! I guess you could be more brave and push down also\n> FILTER statement.\n\nWhile probably not impossible, I wasn't planning on changing this code\nwith new optimizations; just expanding the applicability of the\ncurrent optimizations.\n\nNote that the \"aggfilter\" clause was not new, but moved up in the code\nto make sure we use this local information to bail out (if applicable)\nbefore trying to use the catalogs for bail-out information.\n\n> >\n> > One of those cases is fixed in the attached patch: if we order by the\n> > same column that we're aggregating, using the same order class as the\n> > aggregate's sort operator (i.e. the aggregate's sortop is in the same\n> > btree opclass as the ORDER BY's sort operator), then we can still use\n> > the index operation: The sort behaviour isn't changed, thus we can\n> > apply the optimization.\n> >\n> > PFA the small patch that implements this.\n> >\n> > Note that we can't blindly accept just any ordering by the same\n> > column: If we had an opclass that sorted numeric values by the length\n> > of the significant/mantissa, then that'd provide a different (and\n> > distinct) ordering from that which is expected by the normal\n> > min()/max() aggregates for numeric, which could cause us to return\n> > arguably incorrect results for the aggregate expression.\n>\n> As I see, the code:\n> aggsortop = fetch_agg_sort_op(aggref->aggfnoid);\n> if (!OidIsValid(aggsortop))\n> return false; /* not a MIN/MAX aggregate */\n>\n> used twice and can be evaluated earlier to avoid duplicated code.\n\nThe code is structured like this to make sure we only start accessing\ncatalogs once we know that all other reasons to bail out from this\noptimization indicate we can apply the opimization. You'll notice that\nI've tried to put the cheapest checks that only use caller-supplied\ninformation first, and catalog accesses only after that.\n\nIf the fetch_agg_sort_op clause would be deduplicated, it would either\nincrease code complexity to handle both aggref->aggorder paths, or it\nwould increase the cost of planning MAX(a ORDER BY b) because of the\nnewly added catalog access.\n\n> Also, I'm unsure about the necessity of looking through the btree\n> classes. Maybe just to check the commutator to the sortop, like in the\n> diff attached? Or could you provide an example to support your approach?\n\nI think it could work, but I'd be hesitant to rely on that, as\ncommutator registration is optional (useful, but never required for\nbtree operator classes' operators). Looking at the btree operator\nclass, which is the definition of sortability in PostgreSQL, seems\nmore suitable and correct.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 17 Jul 2024 11:33:05 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On 17/7/2024 16:33, Matthias van de Meent wrote:\n> On Wed, 17 Jul 2024 at 05:29, Andrei Lepikhov <[email protected]> wrote:\n>> Thanks for the job! 
I guess you could be more brave and push down also\n>> FILTER statement.\n> \n> While probably not impossible, I wasn't planning on changing this code\n> with new optimizations; just expanding the applicability of the\n> current optimizations.\nGot it>> As I see, the code:\n>> aggsortop = fetch_agg_sort_op(aggref->aggfnoid);\n>> if (!OidIsValid(aggsortop))\n>> return false; /* not a MIN/MAX aggregate */\n>>\n>> used twice and can be evaluated earlier to avoid duplicated code.\n> \n> The code is structured like this to make sure we only start accessing\n> catalogs once we know that all other reasons to bail out from this\n> optimization indicate we can apply the opimization. You'll notice that\n> I've tried to put the cheapest checks that only use caller-supplied\n> information first, and catalog accesses only after that.\n> \n> If the fetch_agg_sort_op clause would be deduplicated, it would either\n> increase code complexity to handle both aggref->aggorder paths, or it\n> would increase the cost of planning MAX(a ORDER BY b) because of the\n> newly added catalog access.\nIMO it looks like a micro optimisation. But I agree, it is more about \ncode style - let the committer decide what is better.>> Also, I'm unsure \nabout the necessity of looking through the btree\n>> classes. Maybe just to check the commutator to the sortop, like in the\n>> diff attached? Or could you provide an example to support your approach?\n> \n> I think it could work, but I'd be hesitant to rely on that, as\n> commutator registration is optional (useful, but never required for\n> btree operator classes' operators). Looking at the btree operator\n> class, which is the definition of sortability in PostgreSQL, seems\n> more suitable and correct.\nHm, I dubious about that. Can you provide an example which my variant \nwill not pass but your does that correctly?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 21:09:49 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On Wed, 17 Jul 2024 at 17:12, Andrei Lepikhov <[email protected]> wrote:\n> I generally like the idea of a support function. But as I can see, the\n> can_minmax_aggs() rejects if any of the aggregates don't pass the\n> checks. The prosupport feature is designed to be applied to each\n> function separately. How do you think to avoid it?\n\nYou wouldn't avoid it. The prosupport function would be called once\nfor each Aggref in the query. Why is that a problem?\n\n> Also, I don't clearly understand the case you mentioned here - does it\n> mean that you want to nullify orders for other aggregate types if they\n> are the same as the incoming order?\n\nNo, I mean unconditionally nullify Aggref->aggorder and\nAggref->aggdistinct for aggregate functions where ORDER BY / DISTINCT\nin the Aggref makes no difference to the result. I think that's ok for\nmax() and min() for everything besides NUMERIC. For aggorder, we'd\nhave to *not* optimise sum() and avg() for floating point types as\nthat could change the result. sum() and avg() for INT2, INT4 and INT8\nseems fine. I'd need to check, but I think sum(numeric) is ok too as\nthe dscale should end up the same regardless of the order. 
Obviously,\naggdistinct can't be changed for sum() and avg() on any type.\n\nIt seems also possible to adjust count(non-nullable-var) into\ncount(*), which, if done early enough in planning could help\nsignificantly by both reducing evaluation during execution, but also\npossibly reduce tuple deformation if that Var has a higher varattno\nthan anything else in the relation. That would require checking\nvarnullingrels is empty and the respective RelOptInfo's notnullattnums\nmentions the Var.\n\nDavid\n\n\n", "msg_date": "Thu, 18 Jul 2024 10:03:24 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On 17/7/2024 16:33, Matthias van de Meent wrote:\n> On Wed, 17 Jul 2024 at 05:29, Andrei Lepikhov <[email protected]> wrote:\n>> As I see, the code:\n>> aggsortop = fetch_agg_sort_op(aggref->aggfnoid);\n>> if (!OidIsValid(aggsortop))\n>> return false; /* not a MIN/MAX aggregate */\n>>\n>> used twice and can be evaluated earlier to avoid duplicated code.\n> \n> The code is structured like this to make sure we only start accessing\n> catalogs once we know that all other reasons to bail out from this\n> optimization indicate we can apply the opimization. You'll notice that\n> I've tried to put the cheapest checks that only use caller-supplied\n> information first, and catalog accesses only after that.\nAfter additional research I think I get the key misunderstanding why you \ndid so:\nAs I see, the checks:\nif (list_length(aggref->aggorder) > 1)\n return false;\nif (orderClause->tleSortGroupRef != curTarget->ressortgroupref)\n return false;\n\nnot needed at all. You already have check:\nif (list_length(aggref->args) != 1)\nand this tells us, that if we have ordering like MIN(x ORDER BY <smth>), \nthis <smth> ordering contains only aggregate argument x. Because if it \ncontained some expression, the transformAggregateCall() would add this \nexpression to agg->args by calling the transformSortClause() routine.\nThe tleSortGroupRef is just exactly ressortgroupref - no need to recheck \nit one more time. Of course, it is suitable only for MIN/MAX aggregates, \nbut we discuss only them right now. Am I wrong?\nIf you want, you can place it as assertions (see the diff in attachment).\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Thu, 18 Jul 2024 12:18:24 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On Wed, 17 Jul 2024 at 16:09, Andrei Lepikhov <[email protected]> wrote:\n>\n> On 17/7/2024 16:33, Matthias van de Meent wrote:\n> > On Wed, 17 Jul 2024 at 05:29, Andrei Lepikhov <[email protected]> wrote:\n> >> Thanks for the job! I guess you could be more brave and push down also\n> >> FILTER statement.\n> >\n> > While probably not impossible, I wasn't planning on changing this code\n> > with new optimizations; just expanding the applicability of the\n> > current optimizations.\n> Got it>> As I see, the code:\n> >> aggsortop = fetch_agg_sort_op(aggref->aggfnoid);\n> >> if (!OidIsValid(aggsortop))\n> >> return false; /* not a MIN/MAX aggregate */\n> >>\n> >> used twice and can be evaluated earlier to avoid duplicated code.\n> >\n> > The code is structured like this to make sure we only start accessing\n> > catalogs once we know that all other reasons to bail out from this\n> > optimization indicate we can apply the opimization. 
You'll notice that\n> > I've tried to put the cheapest checks that only use caller-supplied\n> > information first, and catalog accesses only after that.\n> >\n> > If the fetch_agg_sort_op clause would be deduplicated, it would either\n> > increase code complexity to handle both aggref->aggorder paths, or it\n> > would increase the cost of planning MAX(a ORDER BY b) because of the\n> > newly added catalog access.\n> IMO it looks like a micro optimisation. But I agree, it is more about\n> code style - let the committer decide what is better.>> Also, I'm unsure\n> about the necessity of looking through the btree\n> >> classes. Maybe just to check the commutator to the sortop, like in the\n> >> diff attached? Or could you provide an example to support your approach?\n> >\n> > I think it could work, but I'd be hesitant to rely on that, as\n> > commutator registration is optional (useful, but never required for\n> > btree operator classes' operators). Looking at the btree operator\n> > class, which is the definition of sortability in PostgreSQL, seems\n> > more suitable and correct.\n> Hm, I dubious about that. Can you provide an example which my variant\n> will not pass but your does that correctly?\n\nHere is one:\n\n\"\"\"\nCREATE OPERATOR @@> (\n function=int4gt, leftarg=int4, rightarg=int4\n); CREATE OPERATOR @@>= (\n function=int4ge, leftarg=int4, rightarg=int4\n); CREATE OPERATOR @@= (\n function=int4eq, leftarg=int4, rightarg=int4\n); CREATE OPERATOR @@<= (\n function=int4le, leftarg=int4, rightarg=int4\n); CREATE OPERATOR @@< (\n function=int4lt, leftarg=int4, rightarg=int4\n);\n\nCREATE OPERATOR CLASS my_int_ops\n FOR TYPE int\n USING btree AS\n OPERATOR 1 @<@,\n OPERATOR 2 @<=@,\n OPERATOR 3 @=@,\n OPERATOR 4 @>=@,\n OPERATOR 5 @>@,\n FUNCTION 1 btint4cmp;\n\nCREATE AGGREGATE my_int_max (\n BASETYPE = int4,\n SFUNC = int4larger,\n STYPE = int4,\n SORTOP = @>@\n);\n\nCREATE TABLE my_table AS\nSELECT id::int4 FROM generate_series(1, 10000) id;\n\nCREATE INDEX ON my_table (id my_int_ops);\n\nSELECT my_int_max(id ORDER BY id USING @<@ ) from my_table;\n\"\"\"\n\nBecause the @<@ and @>@ operators are not registered as commutative,\nit couldn't apply the optimization in your patch, while the btree\noperator check does allow it to apply the optimization.\n\nAside: Arguably, checking for commutator operators would not be\nincorrect when looking at it from \"applied operators\" point of view,\nbut if that commutative operator isn't registered as opposite ordering\nof the same btree opclass, then we'd probably break some assumptions\nof some aggregate's sortop - it could be registered with another\nopclass, and that could cause us to select a different btree opclass\n(thus: ordering) than is indicated to be required by the aggregate;\nthe thing we're trying to protect against here.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 18 Jul 2024 14:49:15 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On 18/7/2024 19:49, Matthias van de Meent wrote:\n> On Wed, 17 Jul 2024 at 16:09, Andrei Lepikhov <[email protected]> wrote:\n>>\n>> On 17/7/2024 16:33, Matthias van de Meent wrote:\n>>> On Wed, 17 Jul 2024 at 05:29, Andrei Lepikhov <[email protected]> wrote:\n> Because the @<@ and @>@ operators are not registered as commutative,\n> it couldn't apply the optimization in your patch, while the btree\n> operator check does 
allow it to apply the optimization.\nOk, I got it.\nAnd next issue: I think it would be better to save cycles than to free \nsome piece of memory, so why not to break the foreach cycle if you \nalready matched the opfamily?\nAlso, in the patch attached I added your smoothed test to the aggregates.sql\n> \n> Aside: Arguably, checking for commutator operators would not be\n> incorrect when looking at it from \"applied operators\" point of view,\n> but if that commutative operator isn't registered as opposite ordering\n> of the same btree opclass, then we'd probably break some assumptions\n> of some aggregate's sortop - it could be registered with another\n> opclass, and that could cause us to select a different btree opclass\n> (thus: ordering) than is indicated to be required by the aggregate;\n> the thing we're trying to protect against hereYes, I also think if someone doesn't register < as a commutator to >, it \nmay mean they do it intentionally: may be it is a bit different \nsortings? - this subject is too far from my experience and I can agree \nwith your approach.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Fri, 19 Jul 2024 10:47:02 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" }, { "msg_contents": "On 18/7/2024 14:49, Matthias van de Meent wrote:\n> Aside: Arguably, checking for commutator operators would not be\n> incorrect when looking at it from \"applied operators\" point of view,\n> but if that commutative operator isn't registered as opposite ordering\n> of the same btree opclass, then we'd probably break some assumptions\n> of some aggregate's sortop - it could be registered with another\n> opclass, and that could cause us to select a different btree opclass\n> (thus: ordering) than is indicated to be required by the aggregate;\n> the thing we're trying to protect against here.\nHi,\nThis thread stands idle. At the same time, the general idea of this \npatch and the idea of utilising prosupport functions look promising. Are \nyou going to develop this feature further?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Tue, 3 Sep 2024 08:25:53 +0200", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Expand applicability of aggregate's sortop optimization" } ]
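A minimal sketch of the query shape this thread is about, using an illustrative table and index made up for the example (they are not taken from the patch or its regression tests). Plain min(a) on an indexed column is already rewritten into an index scan plus LIMIT 1; the proposal discussed above extends that rewrite to min(a ORDER BY a), since ordering by the aggregated column under the same btree opclass cannot change the result:

-- names below are illustrative only
CREATE TABLE agg_sortop_demo (a int, b int);
CREATE INDEX ON agg_sortop_demo (a);
INSERT INTO agg_sortop_demo SELECT i, i % 10 FROM generate_series(1, 10000) i;
ANALYZE agg_sortop_demo;

-- already optimized today: planned as an index scan with LIMIT 1
EXPLAIN (COSTS OFF) SELECT min(a) FROM agg_sortop_demo;

-- the case the patch aims to cover: same column, same btree ordering,
-- so the ORDER BY cannot affect the result and the same rewrite is safe
EXPLAIN (COSTS OFF) SELECT min(a ORDER BY a) FROM agg_sortop_demo;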
[ { "msg_contents": "I happened to notice that the comment for AlterObjectNamespace_oid\nclaims that\n\n * ... it doesn't have to deal with certain special cases\n * such as not wanting to process array types --- those should never\n * be direct members of an extension anyway.\n\nThis struck me as probably broken in the wake of e5bc9454e\n(Explicitly list dependent types as extension members in pg_depend),\nand sure enough a moment's worth of testing showed it is:\n\nregression=# create schema s1;\nCREATE SCHEMA\nregression=# create extension cube with schema s1;\nCREATE EXTENSION\nregression=# create schema s2;\nCREATE SCHEMA\nregression=# alter extension cube set schema s2;\nERROR: cannot alter array type s1.cube[]\nHINT: You can alter type s1.cube, which will alter the array type as well.\n\nSo we need to do something about that; and the fact that this escaped\ntesting shows that our coverage for ALTER EXTENSION SET SCHEMA is\npretty lame.\n\nThe attached patch fixes up the code and adds a new test to\nthe test_extensions module. The fix basically is to skip the\npg_depend entries for dependent types, assuming that they'll\nget dealt with when we process their parent objects.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 08 May 2024 17:52:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "ALTER EXTENSION SET SCHEMA versus dependent types" }, { "msg_contents": "On Wed, May 08, 2024 at 05:52:31PM -0400, Tom Lane wrote:\n> The attached patch fixes up the code and adds a new test to\n> the test_extensions module. The fix basically is to skip the\n> pg_depend entries for dependent types, assuming that they'll\n> get dealt with when we process their parent objects.\n\nLooks reasonable to me. The added test coverage seems particularly\nvaluable. If I really wanted to nitpick, I might complain about the three\nconsecutive Boolean parameters for AlterTypeNamespaceInternal(), which\nmakes lines like\n\n+\t\tAlterTypeNamespaceInternal(arrayOid, nspOid, true, false, true,\n+\t\t\t\t\t\t\t\t objsMoved);\n\ndifficult to interpret. But that's not necessarily the fault of this patch\nand probably needn't block it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 May 2024 18:33:20 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER EXTENSION SET SCHEMA versus dependent types" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Looks reasonable to me. The added test coverage seems particularly\n> valuable. If I really wanted to nitpick, I might complain about the three\n> consecutive Boolean parameters for AlterTypeNamespaceInternal(), which\n> makes lines like\n\n> +\t\tAlterTypeNamespaceInternal(arrayOid, nspOid, true, false, true,\n> +\t\t\t\t\t\t\t\t objsMoved);\n\n> difficult to interpret. But that's not necessarily the fault of this patch\n> and probably needn't block it.\n\nI considered merging ignoreDependent and errorOnTableType into a\nsingle 3-valued enum, but didn't think it was worth the trouble\ngiven the very small number of callers; also it wasn't quite clear\nhow to map that to AlterTypeNamespace_oid's API. Perhaps a little\nmore thought is appropriate though.\n\nOne positive reason for increasing the number of parameters is that\nthat will be a clear API break for any outside callers, if there\nare any. 
If I just replace a bool with an enum, such callers might\nor might not get any indication that they need to take a fresh\nlook.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 May 2024 19:42:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ALTER EXTENSION SET SCHEMA versus dependent types" }, { "msg_contents": "On Wed, May 08, 2024 at 07:42:18PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Looks reasonable to me. The added test coverage seems particularly\n>> valuable. If I really wanted to nitpick, I might complain about the three\n>> consecutive Boolean parameters for AlterTypeNamespaceInternal(), which\n>> makes lines like\n> \n>> +\t\tAlterTypeNamespaceInternal(arrayOid, nspOid, true, false, true,\n>> +\t\t\t\t\t\t\t\t objsMoved);\n> \n>> difficult to interpret. But that's not necessarily the fault of this patch\n>> and probably needn't block it.\n> \n> I considered merging ignoreDependent and errorOnTableType into a\n> single 3-valued enum, but didn't think it was worth the trouble\n> given the very small number of callers; also it wasn't quite clear\n> how to map that to AlterTypeNamespace_oid's API. Perhaps a little\n> more thought is appropriate though.\n> \n> One positive reason for increasing the number of parameters is that\n> that will be a clear API break for any outside callers, if there\n> are any. If I just replace a bool with an enum, such callers might\n> or might not get any indication that they need to take a fresh\n> look.\n\nAgreed. Another option could be to just annotate the arguments with the\nparameter names.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 May 2024 18:54:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER EXTENSION SET SCHEMA versus dependent types" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, May 08, 2024 at 07:42:18PM -0400, Tom Lane wrote:\n>> One positive reason for increasing the number of parameters is that\n>> that will be a clear API break for any outside callers, if there\n>> are any. If I just replace a bool with an enum, such callers might\n>> or might not get any indication that they need to take a fresh\n>> look.\n\n> Agreed. Another option could be to just annotate the arguments with the\n> parameter names.\n\nAt the call sites you mean? Sure, I can do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 May 2024 19:57:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ALTER EXTENSION SET SCHEMA versus dependent types" }, { "msg_contents": "On Wed, May 08, 2024 at 07:57:55PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Agreed. Another option could be to just annotate the arguments with the\n>> parameter names.\n> \n> At the call sites you mean? Sure, I can do that.\n\nYes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 8 May 2024 19:01:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER EXTENSION SET SCHEMA versus dependent types" } ]
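For reference, a hedged sketch of the scenario the fix and the added test_extensions coverage target, reusing the reproduction quoted earlier in the thread; with the pg_depend entries for dependent types skipped, the final statement is expected to succeed instead of failing on the array type s1.cube[]:

CREATE SCHEMA s1;
CREATE EXTENSION cube WITH SCHEMA s1;
CREATE SCHEMA s2;
ALTER EXTENSION cube SET SCHEMA s2;  -- previously: ERROR: cannot alter array type s1.cube[]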
[ { "msg_contents": "Fix overread in JSON parsing errors for incomplete byte sequences\n\njson_lex_string() relies on pg_encoding_mblen_bounded() to point to the\nend of a JSON string when generating an error message, and the input it\nuses is not guaranteed to be null-terminated.\n\nIt was possible to walk off the end of the input buffer by a few bytes\nwhen the last bytes consist of an incomplete multi-byte sequence, as\ntoken_terminator would point to a location defined by\npg_encoding_mblen_bounded() rather than the end of the input. This\ncommit switches token_terminator so as the error uses data up to the\nend of the JSON input.\n\nMore work should be done so as this code could rely on an equivalent of\nreport_invalid_encoding() so as incorrect byte sequences can show in\nerror messages in a readable form. This requires work for at least two\ncases in the JSON parsing API: an incomplete token and an invalid escape\nsequence. A more complete solution may be too invasive for a backpatch,\nso this is left as a future improvement, taking care of the overread\nfirst.\n\nA test is added on HEAD as test_json_parser makes this issue\nstraight-forward to check.\n\nNote that pg_encoding_mblen_bounded() no longer has any callers. This\nwill be removed on HEAD with a separate commit, as this is proving to\nencourage unsafe coding.\n\nAuthor: Jacob Champion\nDiscussion: https://postgr.es/m/CAOYmi+ncM7pwLS3AnKCSmoqqtpjvA8wmCdoBtKA3ZrB2hZG6zA@mail.gmail.com\nBackpatch-through: 13\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/855517307db8efd397d49163d65a4fd3bdcc41bc\n\nModified Files\n--------------\nsrc/common/jsonapi.c | 4 ++--\nsrc/test/modules/test_json_parser/t/002_inline.pl | 8 ++++++++\n2 files changed, 10 insertions(+), 2 deletions(-)", "msg_date": "Thu, 09 May 2024 03:46:36 +0000", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" }, { "msg_contents": "On 2024-May-09, Michael Paquier wrote:\n\n> Fix overread in JSON parsing errors for incomplete byte sequences\n\nI'm getting this error in the new test:\n\nt/002_inline.pl ........................ 1/? \n# Failed test 'incomplete UTF-8 sequence, chunk size 3: correct error output'\n# at t/002_inline.pl line 134.\n# 'Escape sequence \"\\�1+2\" is invalid.'\n# doesn't match '(?^:(Token|Escape sequence) \"\"?\\\\\\x{F5}\" is invalid)'\n# Looks like you failed 1 test of 850.\n\nNot sure what's going on here, or why it fails for me while the\nbuildfarm is all happy.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n", "msg_date": "Fri, 10 May 2024 13:59:39 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" }, { "msg_contents": "On 2024-May-10, Alvaro Herrera wrote:\n\n> Not sure what's going on here, or why it fails for me while the\n> buildfarm is all happy.\n\nAh, I ran 'git clean -dfx' and now it works correctly. 
I must have had\nan incomplete rebuild.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n", "msg_date": "Fri, 10 May 2024 14:23:09 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" }, { "msg_contents": "On Fri, May 10, 2024 at 02:23:09PM +0200, Alvaro Herrera wrote:\n> Ah, I ran 'git clean -dfx' and now it works correctly. I must have had\n> an incomplete rebuild.\n\nI am going to assume that this is an incorrect build. It seems to me\nthat src/common/ was compiled with a past version not sufficient to\nmake the new test pass as more bytes got pushed to the error output as\nthe pre-855517307db8 code could point to some random junk.\n--\nMichael", "msg_date": "Mon, 13 May 2024 12:22:02 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" }, { "msg_contents": "On 10.05.24 14:23, Alvaro Herrera wrote:\n> On 2024-May-10, Alvaro Herrera wrote:\n> \n>> Not sure what's going on here, or why it fails for me while the\n>> buildfarm is all happy.\n> \n> Ah, I ran 'git clean -dfx' and now it works correctly. I must have had\n> an incomplete rebuild.\n\nI saw the same thing. The problem is that there is incomplete \ndependency information in the makefiles (not meson) between src/common/ \nand what is using it. So whenever anything changes in src/common/, you \npretty much have to do a forced rebuild of everything.\n\n\n\n", "msg_date": "Tue, 14 May 2024 10:39:36 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" }, { "msg_contents": "On Tue, May 14, 2024 at 10:39:36AM +0200, Peter Eisentraut wrote:\n> I saw the same thing. The problem is that there is incomplete dependency\n> information in the makefiles (not meson) between src/common/ and what is\n> using it. So whenever anything changes in src/common/, you pretty much have\n> to do a forced rebuild of everything.\n\nIs that a recent regression? I have some blurry memories from\nworking on these areas that changing src/common/ reflected on the\ncompiled pieces effectively at some point.\n--\nMichael", "msg_date": "Wed, 15 May 2024 09:00:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" }, { "msg_contents": "On 15.05.24 02:00, Michael Paquier wrote:\n> On Tue, May 14, 2024 at 10:39:36AM +0200, Peter Eisentraut wrote:\n>> I saw the same thing. The problem is that there is incomplete dependency\n>> information in the makefiles (not meson) between src/common/ and what is\n>> using it. So whenever anything changes in src/common/, you pretty much have\n>> to do a forced rebuild of everything.\n> \n> Is that a recent regression? I have some blurry memories from\n> working on these areas that changing src/common/ reflected on the\n> compiled pieces effectively at some point.\n\nOne instance of this problem that I can reproduce at least back to PG12 is\n\n1. touch src/common/exec.c\n2. 
make -C src/bin/pg_dump\n\nThis will rebuild libpgcommon, but it will not relink pg_dump.\n\n\n", "msg_date": "Wed, 15 May 2024 08:15:37 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" }, { "msg_contents": "On Tue, May 14, 2024 at 11:15 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 15.05.24 02:00, Michael Paquier wrote:\n> > Is that a recent regression? I have some blurry memories from\n> > working on these areas that changing src/common/ reflected on the\n> > compiled pieces effectively at some point.\n>\n> One instance of this problem that I can reproduce at least back to PG12 is\n>\n> 1. touch src/common/exec.c\n> 2. make -C src/bin/pg_dump\n>\n> This will rebuild libpgcommon, but it will not relink pg_dump.\n\nI remember src/common/unicode changes having similar trouble, as well [1].\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CAFBsxsGZTwzDnTs=TVM38CCTPP3Y0D3=h+UiWt8M83D5THHf9A@mail.gmail.com\n\n\n", "msg_date": "Wed, 15 May 2024 11:22:58 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix overread in JSON parsing errors for incomplete byte\n sequence" } ]
[ { "msg_contents": "Hello hackers,\n\nLooking at a recent failure on the buildfarm:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2024-04-30%2020%3A48%3A34\n\n# poll_query_until timed out executing this query:\n# SELECT archived_count FROM pg_stat_archiver\n# expecting this output:\n# 1\n# last actual query output:\n# 0\n# with stderr:\n# Looks like your test exited with 29 just after 4.\n[23:01:41] t/020_archive_status.pl ..............\nDubious, test returned 29 (wstat 7424, 0x1d00)\nFailed 12/16 subtests\n\nwith the following error in the log:\n2024-04-30 22:57:27.931 CEST [83115:1] LOG:  archive command failed with exit code 1\n2024-04-30 22:57:27.931 CEST [83115:2] DETAIL:  The failed archive command was: cp \n\"pg_wal/000000010000000000000001_does_not_exist\" \"000000010000000000000001_does_not_exist\"\n...\n2024-04-30 22:57:28.070 CEST [47962:2] [unknown] LOG:  connection authorized: user=pgbf database=postgres \napplication_name=020_archive_status.pl\n2024-04-30 22:57:28.072 CEST [47962:3] 020_archive_status.pl LOG: statement: SELECT archived_count FROM pg_stat_archiver\n2024-04-30 22:57:28.073 CEST [83115:3] LOG:  could not send to statistics collector: Resource temporarily unavailable\n\nand the corresponding code (on REL_13_STABLE):\nstatic void\npgstat_send(void *msg, int len)\n{\n     int         rc;\n\n     if (pgStatSock == PGINVALID_SOCKET)\n         return;\n\n     ((PgStat_MsgHdr *) msg)->m_size = len;\n\n     /* We'll retry after EINTR, but ignore all other failures */\n     do\n     {\n         rc = send(pgStatSock, msg, len, 0);\n     } while (rc < 0 && errno == EINTR);\n\n#ifdef USE_ASSERT_CHECKING\n     /* In debug builds, log send failures ... */\n     if (rc < 0)\n         elog(LOG, \"could not send to statistics collector: %m\");\n#endif\n}\n\nI wonder, whether this retry should be performed after EAGAIN (Resource\ntemporarily unavailable), EWOULDBLOCK as well.\n\nWith a simple send() wrapper (PFA) activated with LD_PRELOAD, I could\nreproduce this failure easily when running\n`make -s check -C src/test/recovery/ PROVE_TESTS=\"t/020*\"` on\nREL_13_STABLE:\nt/020_archive_status.pl .. 1/16 # poll_query_until timed out executing this query:\n# SELECT archived_count FROM pg_stat_archiver\n# expecting this output:\n# 1\n# last actual query output:\n# 0\n# with stderr:\n# Looks like your test exited with 29 just after 4.\nt/020_archive_status.pl .. Dubious, test returned 29 (wstat 7424, 0x1d00)\nFailed 12/16 subtests\n\nI also reproduced another failure (that lacks useful diagnostics, unfortunately):\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2022-11-10%2015%3A30%3A16\n...\nt/020_archive_status.pl .. 8/16 # poll_query_until timed out executing this query:\n# SELECT last_archived_wal FROM pg_stat_archiver\n# expecting this output:\n# 000000010000000000000002\n# last actual query output:\n# 000000010000000000000001\n# with stderr:\n# Looks like your test exited with 29 just after 13.\nt/020_archive_status.pl .. Dubious, test returned 29 (wstat 7424, 0x1d00)\nFailed 3/16 subtests\n...\n\nThe \"n == 64\" condition in the cranky send() is needed to aim exactly\nthese failures. 
Without this restriction the test (and also `make check`)\njust hangs because of:\n             if (errno == EINTR)\n                 continue;       /* Ok if we were interrupted */\n\n             /*\n              * Ok if no data writable without blocking, and the socket is in\n              * non-blocking mode.\n              */\n             if (errno == EAGAIN ||\n                 errno == EWOULDBLOCK)\n             {\n                 return 0;\n             }\nin internal_flush_buffer().\n\nOn the other hand, even with:\nint\nsend(int s, const void *buf, size_t n, int flags)\n{\n     if (rand() % 10000 == 0)\n     {\n         errno = EINTR;\n         return -1;\n     }\n     return real_send(s, buf, n, flags);\n}\n\n`make check` fails with many miscellaneous errors...\n\nBest regards,\nAlexander", "msg_date": "Thu, 9 May 2024 07:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Non-systematic handling of EINTR/EAGAIN/EWOULDBLOCK" } ]
[ { "msg_contents": "I have committed the first draft of the PG 17 release notes; you can\nsee the results here:\n\n\thttps://momjian.us/pgsql_docs/release-17.html\n\nIt will be improved until the final release. The item count is 188,\nwhich is similar to recent releases:\n\n\trelease-10: 189\n\trelease-11: 170\n\trelease-12: 180\n\trelease-13: 178\n\trelease-14: 220\n\trelease-15: 184\n\trelease-16: 206\n\trelease-17: 188\n\nI welcome feedback. For some reason it was an easier job than usual.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 00:03:50 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 9 May 2024 at 16:04, Bruce Momjian <[email protected]> wrote:\n> I welcome feedback. For some reason it was an easier job than usual.\n\nThanks for working on that.\n\n> +2023-11-02 [cac169d68] Increase DEFAULT_FDW_TUPLE_COST from 0.01 to 0.2\n\n> +Double the default foreign data wrapper tuple cost (David Rowley, Umair Shahid)\n\nThat's 20x rather than 2x.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 16:44:47 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi Bruce,\n\nA minor formatting issue in the start below. Bullet is not required here.\n\nE.1.1. Overview\n<https://momjian.us/pgsql_docs/release-17.html#RELEASE-17-HIGHLIGHTS>\n\nPostgreSQL 17 contains many new features and enhancements, including:\n\n -\n\n\nThe above items and other new features of PostgreSQL 17 are explained in\nmore detail in the sections below.\nRegards,\nIkram\n\n\n\nOn Thu, May 9, 2024 at 9:45 AM David Rowley <[email protected]> wrote:\n\n> On Thu, 9 May 2024 at 16:04, Bruce Momjian <[email protected]> wrote:\n> > I welcome feedback. For some reason it was an easier job than usual.\n>\n> Thanks for working on that.\n>\n> > +2023-11-02 [cac169d68] Increase DEFAULT_FDW_TUPLE_COST from 0.01 to 0.2\n>\n> > +Double the default foreign data wrapper tuple cost (David Rowley, Umair\n> Shahid)\n>\n> That's 20x rather than 2x.\n>\n> David\n>\n>\n>\n\n-- \nMuhammad Ikram\n\nHi Bruce,A minor formatting issue in the start below. Bullet is not required here.E.1.1. Overview PostgreSQL 17 contains many new features and enhancements, including:The above items and other new features of PostgreSQL 17 are explained in more detail in the sections below.Regards,IkramOn Thu, May 9, 2024 at 9:45 AM David Rowley <[email protected]> wrote:On Thu, 9 May 2024 at 16:04, Bruce Momjian <[email protected]> wrote:\n> I welcome feedback.  For some reason it was an easier job than usual.\n\nThanks for working on that.\n\n> +2023-11-02 [cac169d68] Increase DEFAULT_FDW_TUPLE_COST from 0.01 to 0.2\n\n> +Double the default foreign data wrapper tuple cost (David Rowley, Umair Shahid)\n\nThat's 20x rather than 2x.\n\nDavid\n\n\n-- Muhammad Ikram", "msg_date": "Thu, 9 May 2024 09:47:34 +0500", "msg_from": "Muhammad Ikram <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 9 May 2024 at 16:47, Muhammad Ikram <[email protected]> wrote:\n> A minor formatting issue in the start below. 
Bullet is not required here.\n\nThis is a placeholder for the highlight features of v17 will go.\nBruce tends not to decide what those are all by himself.\n\nDavid\n\n\n", "msg_date": "Thu, 9 May 2024 16:52:14 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn Thu, May 09, 2024 at 12:03:50AM -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n> \n> \thttps://momjian.us/pgsql_docs/release-17.html\n\nThanks for working on that!\n \n> I welcome feedback.\n\n> Add system view pg_wait_events that reports wait event types (Michael Paquier) \n\nMichael is the committer for 1e68e43d3f, the author is me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 May 2024 04:53:38 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn Thu, May 9, 2024 at 1:03 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n\nThank you for working on that!\n\nI'd like to mention some of my works. I think we can add the vacuum\nperformance improvements by the following commits:\n\n- Add template for adaptive radix tree (ee1b30f1)\n- Add TIDStore, to store sets of TIDs (ItemPointerData) efficiently (30e144287)\n- Use TidStore for dead tuple TIDs storage during lazy vacuum (667e65aac)\n\nAlso, please consider the following item:\n\n- Improve eviction algorithm in ReorderBuffer using max-heap for many\nsubtransactions (5bec1d6bc)\n\nFinally, should we mention the following commit in the release note?\nIt's not a user-visible change but added a new regression test module.\n\n- Add tests for XID wraparound (e255b646a)\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 May 2024 14:17:12 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n\n\nThanks for working on that.\n\nFor this item:\n\n\n> Allow the optimizer to improve CTE plans by using the sort order of\n> columns referenced in earlier CTE clauses (Jian Guo)\n\n\nI think you mean a65724dfa. The author should be 'Richard Guo'.\n\nAnd I'm wondering if it is more accurate to state it as \"Allow the\noptimizer to improve plans for the outer query by leveraging the sort\norder of a CTE's output.\"\n\nI think maybe a similar revision can be applied to the item just above\nthis one.\n\nThanks\nRichard\n\nOn Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:I have committed the first draft of the PG 17 release notes;  you can\nsee the results here:\n\n        https://momjian.us/pgsql_docs/release-17.htmlThanks for working on that.For this item: Allow the optimizer to improve CTE plans by using the sort order ofcolumns referenced in earlier CTE clauses (Jian Guo)I think you mean a65724dfa.  
The author should be 'Richard Guo'.And I'm wondering if it is more accurate to state it as \"Allow theoptimizer to improve plans for the outer query by leveraging the sortorder of a CTE's output.\"I think maybe a similar revision can be applied to the item just abovethis one.ThanksRichard", "msg_date": "Thu, 9 May 2024 14:37:57 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n\nThanks for working on this.\n\nI believe the part of the 64-bit XIDs patchset that was delivered in\nPG17 is worth highlighting in \"E.1.3.10. Source Code\" section:\n\n4ed8f0913bfd\n2cdf131c46e6\n5a1dfde8334b\na60b8a58f435\n\nAll this can probably be summarized as one bullet \"Index SLRUs by\n64-bit integers rather than by 32-bit ones\" where the authors are:\nMaxim Orlov, Aleksander Alekseev, Alexander Korotkov, Teodor Sigaev,\nNikita Glukhov, Pavel Borisov, Yura Sokolov.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 9 May 2024 12:18:44 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n\nanother potential incompatibilities issue:\nALTER TABLE DROP PRIMARY KEY\n\nsee:\nhttps://www.postgresql.org/message-id/202404181849.6frtmajobe27%40alvherre.pgsql\n\n\n", "msg_date": "Thu, 9 May 2024 18:00:24 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> \thttps://momjian.us/pgsql_docs/release-17.html\n\nMy name is listed twice in the \"Improve psql tab completion\" item.\n\n- ilmari\n\n\n", "msg_date": "Thu, 09 May 2024 11:22:06 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Dagfinn Ilmari Mannsåker <[email protected]> writes:\n\n> Bruce Momjian <[email protected]> writes:\n>\n>> I have committed the first draft of the PG 17 release notes; you can\n>> see the results here:\n>>\n>> \thttps://momjian.us/pgsql_docs/release-17.html\n>\n> My name is listed twice in the \"Improve psql tab completion\" item.\n\nYou can move one of them to \"Track DEALLOCATE in pg_stat_statements\",\nwhich Michael and I co-authored.\n\n- ilmari\n\n\n", "msg_date": "Thu, 09 May 2024 11:31:17 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n\n* Add function pg_buffercache_evict() to allow 
shared buffer eviction\n(Palak Chaturvedi, Thomas Munro)\n* This is useful for testing.\n\nthis should put it on the section\n< E.1.3.11. Additional Modules\n?\n\nThen I found out official release notes don't have <section> attributes,\nso it doesn't matter?\n\n\n\n<<\nAllow ALTER OPERATOR to set more optimization attributes (Tommy Pavlicek)\nThis is useful for extensions.\n<<\nI think this commit title \"Add hash support functions and hash opclass\nfor contrib/ltree.\"\n from [1] is more descriptive.\ni am not 100% sure of the meaning of \"This is useful for extensions.\"\n\n\n\n[1] https://git.postgresql.org/cgit/postgresql.git/commit/?id=485f0aa85995340fb62113448c992ee48dc6fff1\n\n\n", "msg_date": "Thu, 9 May 2024 18:53:30 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "> <<\n> Allow ALTER OPERATOR to set more optimization attributes (Tommy Pavlicek)\n> This is useful for extensions.\n> <<\n\nsorry, I mean\n<<\nAllow the creation of hash indexes on ltree columns (Tommy Pavlicek)\nThis also enables hash join and hash aggregation on ltree columns.\n<<\n\nbetter description would be:\n<<\nAdd hash support functions and hash opclass for contrib/ltree (Tommy Pavlicek)\nThis also enables hash join and hash aggregation on ltree columns.\n<<\n\n\n", "msg_date": "Thu, 9 May 2024 18:57:01 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 6:53 PM jian he <[email protected]> wrote:\n>\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n\n< Add columns to pg_stats to report range histogram information (Egor\nRogov, Soumyadeep Chakraborty)\nI think this applies to range type and multi range type, \"range\nhistogram information\" seems not very clear to me.\nSo maybe:\n< Add columns to pg_stats to report range-type histogram information\n(Egor Rogov, Soumyadeep Chakraborty)\n\n\n\nDisplay length and bounds histograms in pg_stats\n< Add new COPY option \"ON_ERROR ignore\" to discard error rows (Damir\nBelyalov, Atsushi Torikoshi, Alex Shulgin, Jian He, Jian He, Yugo\nNagata)\nduplicate name.\n\n\n", "msg_date": "Thu, 9 May 2024 19:49:55 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 04:44:47PM +1200, David Rowley wrote:\n> On Thu, 9 May 2024 at 16:04, Bruce Momjian <[email protected]> wrote:\n> > I welcome feedback. 
For some reason it was an easier job than usual.\n> \n> Thanks for working on that.\n> \n> > +2023-11-02 [cac169d68] Increase DEFAULT_FDW_TUPLE_COST from 0.01 to 0.2\n> \n> > +Double the default foreign data wrapper tuple cost (David Rowley, Umair Shahid)\n> \n> That's 20x rather than 2x.\n\nOops, changed to:\n\n\tIncrease the default foreign data wrapper tuple cost (David\n\tRowley, Umair Shahid)\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 09:08:52 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 09:47:34AM +0500, Muhammad Ikram wrote:\n> Hi Bruce,\n> \n> A minor formatting issue in the start below. Bullet is not required here.\n> \n> \n> E.1.1. Overview  \n> \n> PostgreSQL 17 contains many new features and enhancements, including:\n> \n> • \n> \n> The above items and other new features of PostgreSQL 17 are explained in more\n> detail in the sections below.\n\nThat is just a place-holder. I changed the bullet text to be:\n\n\tTO BE COMPLETED LATER\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 09:10:16 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 04:52:14PM +1200, David Rowley wrote:\n> On Thu, 9 May 2024 at 16:47, Muhammad Ikram <[email protected]> wrote:\n> > A minor formatting issue in the start below. Bullet is not required here.\n> \n> This is a placeholder for the highlight features of v17 will go.\n> Bruce tends not to decide what those are all by himself.\n\nYes, I already have so much of my opinion in the release notes that I\nprefer others to make that list, and to make the Acknowledgments list\nat the bottom.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 09:11:10 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 04:53:38AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Thu, May 09, 2024 at 12:03:50AM -0400, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-17.html\n> \n> Thanks for working on that!\n> \n> > I welcome feedback.\n> \n> > Add system view pg_wait_events that reports wait event types (Michael Paquier) \n> \n> Michael is the committer for 1e68e43d3f, the author is me.\n\nWow, thank you for finding that. The commit message is very clear so I\ndon't know how I made that mistake. 
Fixed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 09:29:21 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 02:17:12PM +0900, Masahiko Sawada wrote:\n> Hi,\n> \n> On Thu, May 9, 2024 at 1:03 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> \n> Thank you for working on that!\n> \n> I'd like to mention some of my works. I think we can add the vacuum\n> performance improvements by the following commits:\n> \n> - Add template for adaptive radix tree (ee1b30f1)\n> - Add TIDStore, to store sets of TIDs (ItemPointerData) efficiently (30e144287)\n> - Use TidStore for dead tuple TIDs storage during lazy vacuum (667e65aac)\n\nOkay, I reworded the item, added authors, and added the commits:\n\n\t<!--\n\tAuthor: John Naylor <[email protected]>\n\t2024-03-07 [ee1b30f12] Add template for adaptive radix tree\n\tAuthor: Masahiko Sawada <[email protected]>\n\t2024-03-21 [30e144287] Add TIDStore, to store sets of TIDs (ItemPointerData) ef\n\tAuthor: Masahiko Sawada <[email protected]>\n\t2024-04-02 [667e65aac] Use TidStore for dead tuple TIDs storage during lazy vac\n\tAuthor: Heikki Linnakangas <[email protected]>\n\t2024-04-03 [6dbb49026] Combine freezing and pruning steps in VACUUM\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow vacuum to more efficiently remove and freeze tuples (John Naylor, Masahiko Sawada, Melanie Plageman)\n\t</para>\n\t</listitem>\n\n> Also, please consider the following item:\n> \n> - Improve eviction algorithm in ReorderBuffer using max-heap for many\n> subtransactions (5bec1d6bc)\n\nI looked at that item and I don't have a generic \"make logical\nreplication apply faster\" item to merge it into, and many\nsubtransactions seemed like enough of an edge-case that I didn't think\nmentioning it make sense. Can you see a good place to add it?\n\n> Finally, should we mention the following commit in the release note?\n> It's not a user-visible change but added a new regression test module.\n> \n> - Add tests for XID wraparound (e255b646a)\n\nI don't normally add testing infrastructure changes unless they are\nmajor.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 09:48:41 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 02:37:57PM +0800, Richard Guo wrote:\n> \n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> \n> I have committed the first draft of the PG 17 release notes;  you can\n> see the results here:\n> \n>         https://momjian.us/pgsql_docs/release-17.html\n> \n> \n> Thanks for working on that.\n> \n> For this item:\n>  \n> \n> Allow the optimizer to improve CTE plans by using the sort order of\n> columns referenced in earlier CTE clauses (Jian Guo)\n> \n> \n> I think you mean a65724dfa.  The author should be 'Richard Guo'.\n\nWow the CTE item above it was done by Jian Guo. 
I probably copied the\ntext from the line above it, modified the description, but thought the\nauthor's name was the same, but it was not. Fixed.\n\n> And I'm wondering if it is more accurate to state it as \"Allow the\n> optimizer to improve plans for the outer query by leveraging the sort\n> order of a CTE's output.\"\n>\n> I think maybe a similar revision can be applied to the item just above\n> this one.\n\nOkay, I went with this text:\n\n\t<!--\n\tAuthor: Tom Lane <[email protected]>\n\t2023-11-17 [f7816aec2] Extract column statistics from CTE references, if possib\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow the optimizer to improve CTE plans by considering the statistics of columns referenced in earlier row output clauses (Jian Guo, Tom Lane)\n\t</para>\n\t</listitem>\n\t\n\t<!--\n\tAuthor: Tom Lane <[email protected]>\n\t2024-03-26 [a65724dfa] Propagate pathkeys from CTEs up to the outer query.\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow the optimizer to improve CTE plans by considering the sort order of columns referenced in earlier row output clauses (Richard Guo)\n\t</para>\n\t</listitem>\n\nI did not use \"leveraging\" because I am concerned non-native English\nspeakers might find the term confusing.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 10:17:03 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:18:44PM +0300, Aleksander Alekseev wrote:\n> Hi,\n> \n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n> > It will be improved until the final release. The item count is 188,\n> > which is similar to recent releases:\n> \n> Thanks for working on this.\n> \n> I believe the part of the 64-bit XIDs patchset that was delivered in\n> PG17 is worth highlighting in \"E.1.3.10. Source Code\" section:\n> \n> 4ed8f0913bfd\n> 2cdf131c46e6\n> 5a1dfde8334b\n> a60b8a58f435\n> \n> All this can probably be summarized as one bullet \"Index SLRUs by\n> 64-bit integers rather than by 32-bit ones\" where the authors are:\n> Maxim Orlov, Aleksander Alekseev, Alexander Korotkov, Teodor Sigaev,\n> Nikita Glukhov, Pavel Borisov, Yura Sokolov.\n\nWow, I try to only list source code items that have some user-facing\nimpact, and I don't think these do. I do realize how important they are\nthough. This gets into the balance of mentioning items _users_ need to\nknow about, vs. 
important improvements that _we_ know about.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 10:20:23 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 06:00:24PM +0800, jian he wrote:\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n> \n> another potential incompatibilities issue:\n> ALTER TABLE DROP PRIMARY KEY\n> \n> see:\n> https://www.postgresql.org/message-id/202404181849.6frtmajobe27%40alvherre.pgsql\n\nI see it now, and I see Alvaro Herrera saying:\n\n\thttps://www.postgresql.org/message-id/202404181849.6frtmajobe27%40alvherre.pgsql\n\n\t> I wonder is there any incompatibility issue, or do we need to say something\n\t> about the new behavior when dropping a key column?\n\t\n-->\tUmm, yeah, maybe we should document it in ALTER TABLE DROP PRIMARY KEY\n-->\tand in the release notes to note the different behavior.\n\nHowever, I don't see it mentioned as a release note item in the commit\nmessage or mentioned in our docs. I suppose the release note text would\nbe:\n\n\tRemoving a PRIMARY KEY will remove the NOT NULL column specification\n\n\tPreviously the NOT NULL specification would be retained.\n\nDo we have agreement that we want this release note item?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 10:49:37 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 11:22:06AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Bruce Momjian <[email protected]> writes:\n> \n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > \thttps://momjian.us/pgsql_docs/release-17.html\n> \n> My name is listed twice in the \"Improve psql tab completion\" item.\n\nYou did such a great job I wanted to list you twice. :-) Actually, the\nauthor list was so long I just didn't notice, fixed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 10:50:45 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 11:31:17AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Dagfinn Ilmari Mannsåker <[email protected]> writes:\n> \n> > Bruce Momjian <[email protected]> writes:\n> >\n> >> I have committed the first draft of the PG 17 release notes; you can\n> >> see the results here:\n> >>\n> >> \thttps://momjian.us/pgsql_docs/release-17.html\n> >\n> > My name is listed twice in the \"Improve psql tab completion\" item.\n> \n> You can move one of them to \"Track DEALLOCATE in pg_stat_statements\",\n> which Michael and I co-authored.\n\nYep, also my mistake, fixed. 
My apologies.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 10:51:38 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 06:53:30PM +0800, jian he wrote:\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n> \n> * Add function pg_buffercache_evict() to allow shared buffer eviction\n> (Palak Chaturvedi, Thomas Munro)\n> * This is useful for testing.\n> \n> this should put it on the section\n> < E.1.3.11. Additional Modules\n> ?\n\nOh, it is in the pg_buffercache module --- I should have realized that\nfrom the name, fixed.\n\n> Then I found out official release notes don't have <section> attributes,\n> so it doesn't matter?\n\nUh, what are sections? Did previous release notes have it?\n\n> I think this commit title \"Add hash support functions and hash opclass\n> for contrib/ltree.\"\n> from [1] is more descriptive.\n\nUh, I don't think people know what hash support functions are, but they\nknow what hash indexes are, and maybe hash joins and hash aggregates. \nWhy do you consider the commit text better?\n\n> i am not 100% sure of the meaning of \"This is useful for extensions.\"\n\nThe commit says:\n\n\tcommit 2b5154beab7\n\tAuthor: Tom Lane <[email protected]>\n\tDate: Fri Oct 20 12:28:38 2023 -0400\n\t\n\t Extend ALTER OPERATOR to allow setting more optimization attributes.\n\t\n\t Allow the COMMUTATOR, NEGATOR, MERGES, and HASHES attributes to be set\n\t by ALTER OPERATOR. However, we don't allow COMMUTATOR/NEGATOR to be\n\t changed once set, nor allow the MERGES/HASHES flags to be unset once\n\t set. Changes like that might invalidate plans already made, and\n\t dealing with the consequences seems like more trouble than it's worth.\n-->\t The main use-case we foresee for this is to allow addition of missed\n-->\t properties in extension update scripts, such as extending an existing\n-->\t operator to support hashing. 
So only transitions from not-set to set\n\t states seem very useful.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 11:08:34 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 06:57:01PM +0800, jian he wrote:\n> > <<\n> > Allow ALTER OPERATOR to set more optimization attributes (Tommy Pavlicek)\n> > This is useful for extensions.\n> > <<\n> \n> sorry, I mean\n> <<\n> Allow the creation of hash indexes on ltree columns (Tommy Pavlicek)\n> This also enables hash join and hash aggregation on ltree columns.\n> <<\n> \n> better description would be:\n> <<\n> Add hash support functions and hash opclass for contrib/ltree (Tommy Pavlicek)\n> This also enables hash join and hash aggregation on ltree columns.\n> <<\n\nYes, please see my previous email where I am asking why being more\nspecific is worse.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 11:09:26 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 07:49:55PM +0800, jian he wrote:\n> On Thu, May 9, 2024 at 6:53 PM jian he <[email protected]> wrote:\n> >\n> > On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> > > I have committed the first draft of the PG 17 release notes; you can\n> > > see the results here:\n> > >\n> > > https://momjian.us/pgsql_docs/release-17.html\n> \n> < Add columns to pg_stats to report range histogram information (Egor\n> Rogov, Soumyadeep Chakraborty)\n> I think this applies to range type and multi range type, \"range\n> histogram information\" seems not very clear to me.\n> So maybe:\n> < Add columns to pg_stats to report range-type histogram information\n> (Egor Rogov, Soumyadeep Chakraborty)\n\nYes, good point, done.\n\n> Display length and bounds histograms in pg_stats\n\nUh, isn't that assumed? Is this a detail worth mentioning?\n\n> < Add new COPY option \"ON_ERROR ignore\" to discard error rows (Damir\n> Belyalov, Atsushi Torikoshi, Alex Shulgin, Jian He, Jian He, Yugo\n> Nagata)\n> duplicate name.\n\nFixed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 11:12:14 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 11:12 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, May 9, 2024 at 07:49:55PM +0800, jian he wrote:\n> > On Thu, May 9, 2024 at 6:53 PM jian he <[email protected]> wrote:\n> > >\n> > > On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> > > > I have committed the first draft of the PG 17 release notes; you can\n> > > > see the results here:\n> > > >\n> > > > https://momjian.us/pgsql_docs/release-17.html\n> >\n\nE.1.3.1.5. 
Privileges\nAdd per-table GRANT permission MAINTAIN to control maintenance\noperations (Nathan Bossart)\n\nThe operations are VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZE VIEW,\nCLUSTER, and LOCK TABLE.\n\nAdd user-grantable role pg_maintain to control maintenance operations\n(Nathan Bossart)\n\nThe operations are VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZE VIEW,\nCLUSTER, and LOCK TABLE.\n\nAllow roles with pg_monitor privileges to execute pg_current_logfile()\n(Pavlo Golub, Nathan Bossart)\n---------------\nshould be \"REFRESH MATERIALIZED VIEW\"?\n\nalso\n\"Allow roles with pg_monitor privileges to execute\npg_current_logfile() (Pavlo Golub, Nathan Bossart)\"\n\"pg_monitor\" is a predefined role, so technically, \"with pg_monitor\nprivileges\" is not correct?\n--------------------------------------------------------------------------\nAdd function XMLText() to convert text to a single XML text node (Jim Jones)\n\nXMLText()\nshould be\nxmltext()\n--------------------------------------------------------------------------\nAdd function to_regtypemod() to return the typemod of a string (David\nWheeler, Erik Wienhold)\nI think this description does not mean the same thing as the doc[1]\n\n[1] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-CATALOG\n--------------------------------------------------------------------------\n\nAllow GROUP BY columns to be internally ordered to match ORDER BY\n(Andrei Lepikhov, Teodor Sigaev)\nThis can be disabled using server variable enable_group_by_reordering.\n\nProbably\n`This can be disabled by setting the server variable\nenable_group_by_reordering to false`.\n\n\n", "msg_date": "Thu, 9 May 2024 23:26:44 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 11:26:44PM +0800, jian he wrote:\n> On Thu, May 9, 2024 at 11:12 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > On Thu, May 9, 2024 at 07:49:55PM +0800, jian he wrote:\n> > > On Thu, May 9, 2024 at 6:53 PM jian he <[email protected]> wrote:\n> > > >\n> > > > On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> > > > > I have committed the first draft of the PG 17 release notes; you can\n> > > > > see the results here:\n> > > > >\n> > > > > https://momjian.us/pgsql_docs/release-17.html\n> > >\n> \n> E.1.3.1.5. 
Privileges\n> Add per-table GRANT permission MAINTAIN to control maintenance\n> operations (Nathan Bossart)\n> \n> The operations are VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZE VIEW,\n> CLUSTER, and LOCK TABLE.\n> \n> Add user-grantable role pg_maintain to control maintenance operations\n> (Nathan Bossart)\n> \n> The operations are VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZE VIEW,\n> CLUSTER, and LOCK TABLE.\n> \n> Allow roles with pg_monitor privileges to execute pg_current_logfile()\n> (Pavlo Golub, Nathan Bossart)\n> ---------------\n> should be \"REFRESH MATERIALIZED VIEW\"?\n\nYes, fixed.\n\n> also\n> \"Allow roles with pg_monitor privileges to execute\n> pg_current_logfile() (Pavlo Golub, Nathan Bossart)\"\n> \"pg_monitor\" is a predefined role, so technically, \"with pg_monitor\n> privileges\" is not correct?\n\nGood point, new text:\n\n\tAllow roles with pg_monitor membership to execute pg_current_logfile() (Pavlo Golub, Nathan Bossart)\n\n> --------------------------------------------------------------------------\n> Add function XMLText() to convert text to a single XML text node (Jim Jones)\n> \n> XMLText()\n> should be\n> xmltext()\n\nRight, fixed.\n\n> --------------------------------------------------------------------------\n> Add function to_regtypemod() to return the typemod of a string (David\n> Wheeler, Erik Wienhold)\n> I think this description does not mean the same thing as the doc[1]\n\nYes, I see your point. I changed the text to:\n\n\tAdd function to_regtypemod() to return the type modifier of a\n\ttype specification (David Wheeler, Erik Wienhold)\n\n\n> [1] https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-CATALOG\n> --------------------------------------------------------------------------\n> \n> Allow GROUP BY columns to be internally ordered to match ORDER BY\n> (Andrei Lepikhov, Teodor Sigaev)\n> This can be disabled using server variable enable_group_by_reordering.\n> \n> Probably\n> `This can be disabled by setting the server variable\n> enable_group_by_reordering to false`.\n\nUh, I usually don't go into that detail. There will be a link to the\nvariable in about a month so users can look up its behavior.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 11:41:57 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-05-09 Th 00:03, Bruce Momjian wrote:\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> \thttps://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n>\n> \trelease-10: 189\n> \trelease-11: 170\n> \trelease-12: 180\n> \trelease-13: 178\n> \trelease-14: 220\n> \trelease-15: 184\n> \trelease-16: 206\n> \trelease-17: 188\n>\n> I welcome feedback. For some reason it was an easier job than usual.\n\n\n *\n\n Remove the ability to build Postgres with Visual Studio (Michael\n Paquier)\n\n Meson is now the only available Windows build method.\n\n\nThis is a category mistake. What was removed was the special code we had \nfor building with VS, but not the ability to build with VS. 
You can \nbuild with VS using meson (see for example drongo on the buildfarm)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 12:10:11 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:10:11PM -0400, Andrew Dunstan wrote:\n> \n> On 2024-05-09 Th 00:03, Bruce Momjian wrote:\n> \n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n> \n> https://momjian.us/pgsql_docs/release-17.html\n> \n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n> \n> release-10: 189\n> release-11: 170\n> release-12: 180\n> release-13: 178\n> release-14: 220\n> release-15: 184\n> release-16: 206\n> release-17: 188\n> \n> I welcome feedback. For some reason it was an easier job than usual.\n> \n> \n> • Remove the ability to build Postgres with Visual Studio (Michael Paquier)\n> \n> Meson is now the only available Windows build method.\n> \n> \n> This is a category mistake. What was removed was the special code we had for\n> building with VS, but not the ability to build with VS. You can build with VS\n> using meson (see for example drongo on the buildfarm)\n\nWow, okay, I am not surprised I was confused. 
New text is:\n\n\t<!--\n\tAuthor: Michael Paquier <[email protected]>\n\t2023-12-20 [1301c80b2] Remove MSVC scripts\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tRemove the Microsoft Visual Studio Studio-specific Postgres build option (Michael Paquier)\n\t</para>\n\t\n\t<para>\n\tMeson is now the only method for Visual Studio builds.\n\t</para>\n\t</listitem>\n\t<!--\n\tAuthor: Michael Paquier <[email protected]>\n\t2023-12-20 [1301c80b2] Remove MSVC scripts\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tRemove the Microsoft Visual Studio Studio-specific Postgres build option (Michael Paquier)\n\t</para>\n\t\n\t<para>\n\tMeson is now the only method for Visual Studio builds.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 12:29:38 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-May-09, Bruce Momjian wrote:\n\n> However, I don't see it mentioned as a release note item in the commit\n> message or mentioned in our docs. I suppose the release note text would\n> be:\n> \n> \tRemoving a PRIMARY KEY will remove the NOT NULL column specification\n> \n> \tPreviously the NOT NULL specification would be retained.\n> \n> Do we have agreement that we want this release note item?\n\nYes. Maybe we want some others too (especially regarding inheritance,\nbut also regarding the way we handle the constraints internally), and\nmaybe in this one we want different wording. How about something like\nthis:\n\n Removing a primary key constraint may change the nullability\n characteristic of the columns that the primary key covered.\n\n If explicit not-null constraints exist on the same column, then they\n continue to be /known not nullable/; otherwise they become /possibly\n nullable/.\n\nThis is largely based on the SQL standard's language of a column\ndescriptor having a \"nullability characteristic\", which for columns with\nnot-null or primary key constraints is \"known not null\". I don't think\nwe use those terms anywhere. I hope this isn't too confusing.\n\nThe standard's text on this, in section \"4.13 Columns, fields, and\nattributes\", is\n\n Every column has a nullability characteristic that indicates whether\n the value from that column can be the null value. A nullability\n characteristic is either known not nullable or possibly nullable.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 9 May 2024 20:40:00 +0200", "msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 4:04 PM Bruce Momjian <[email protected]> wrote:\n> I welcome feedback. For some reason it was an easier job than usual.\n\n> 2024-01-25 [820b5af73] jit: Require at least LLVM 10.\n\n> Require LLVM version 10 or later (Peter Eisentraut)\n\nPeter reviewed, I authored, and I think you intend to list authors in\nparentheses.\n\n\n", "msg_date": "Fri, 10 May 2024 08:05:43 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 10, 2024 at 08:05:43AM +1200, Thomas Munro wrote:\n> On Thu, May 9, 2024 at 4:04 PM Bruce Momjian <[email protected]> wrote:\n> > I welcome feedback. 
For some reason it was an easier job than usual.\n> \n> > 2024-01-25 [820b5af73] jit: Require at least LLVM 10.\n> \n> > Require LLVM version 10 or later (Peter Eisentraut)\n> \n> Peter reviewed, I authored, and I think you intend to list authors in\n> parentheses.\n\nYes, my mistake, fixed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 16:35:54 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 08:40:00PM +0200, Álvaro Herrera wrote:\n> On 2024-May-09, Bruce Momjian wrote:\n> \n> > However, I don't see it mentioned as a release note item in the commit\n> > message or mentioned in our docs. I suppose the release note text would\n> > be:\n> > \n> > \tRemoving a PRIMARY KEY will remove the NOT NULL column specification\n> > \n> > \tPreviously the NOT NULL specification would be retained.\n> > \n> > Do we have agreement that we want this release note item?\n> \n> Yes. Maybe we want some others too (especially regarding inheritance,\n> but also regarding the way we handle the constraints internally), and\n> maybe in this one we want different wording. How about something like\n> this:\n> \n> Removing a primary key constraint may change the nullability\n> characteristic of the columns that the primary key covered.\n> \n> If explicit not-null constraints exist on the same column, then they\n> continue to be /known not nullable/; otherwise they become /possibly\n> nullable/.\n> \n> This is largely based on the SQL standard's language of a column\n> descriptor having a \"nullability characteristic\", which for columns with\n> not-null or primary key constraints is \"known not null\". I don't think\n> we use those terms anywhere. I hope this isn't too confusing.\n\nYes, it was confusing, partly because it is using wording we don't use,\nand partly because it is talking about what can go into the column,\nrather than the visible column restriction NOT NULL. I also think \"may\"\nis too imprecise.\n\nHow about:\n\n\tRemoving a primary key will remove a column's NOT NULL constraint\n\tif the constraint was added by the primary key\n\t\n\tPreviously such NOT NULL constraints would remain after a primary\n\tkey was removed. 
A column-level NOT NULL constraint would not be\n\tremoved.\n\nHere is the PG 16 output:\n\n\tCREATE TABLE test ( x INT CONSTRAINT test_pkey PRIMARY KEY );\n\t Table \"public.test\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n\t x | integer | | not null |\n\tIndexes:\n\t \"test_pkey\" PRIMARY KEY, btree (x)\n\t\n\tCREATE TABLE test_with_not_null (x INT NOT NULL CONSTRAINT test_pkey_with_not_null PRIMARY KEY);\n\t Table \"public.test_with_not_null\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n\t x | integer | | not null |\n\tIndexes:\n\t \"test_pkey_with_not_null\" PRIMARY KEY, btree (x)\n\t\n\tALTER TABLE test DROP CONSTRAINT test_pkey;\n\t Table \"public.test\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n-->\t x | integer | | not null |\n\t\n\tALTER TABLE test_with_not_null DROP CONSTRAINT test_pkey_with_not_null;\n\t Table \"public.test_with_not_null\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n-->\t x | integer | | not null |\n\nHere is the output in PG 17:\n\n\tCREATE TABLE test ( x INT CONSTRAINT test_pkey PRIMARY KEY );\n\t Table \"public.test\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n\t x | integer | | not null |\n\tIndexes:\n\t \"test_pkey\" PRIMARY KEY, btree (x)\n\t\n\tCREATE TABLE test_with_not_null (x INT NOT NULL CONSTRAINT test_pkey_with_not_null PRIMARY KEY);\n\t Table \"public.test_with_not_null\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n\t x | integer | | not null |\n\tIndexes:\n\t \"test_pkey_with_not_null\" PRIMARY KEY, btree (x)\n\t\n\tALTER TABLE test DROP CONSTRAINT test_pkey;\n\t Table \"public.test\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n-->\t x | integer | | |\n\t\n\tALTER TABLE test_with_not_null DROP CONSTRAINT test_pkey_with_not_null;\n\t Table \"public.test_with_not_null\"\n\t Column | Type | Collation | Nullable | Default\n\t--------+---------+-----------+----------+---------\n-->\t x | integer | | not null |\n\nNotice that the table without a _column_ NOT NULL removes the NOT NULL\ndesignation after removing the primary key only in PG 17.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 19:54:22 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 9:34 AM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n>\n> release-10: 189\n> release-11: 170\n> release-12: 180\n> release-13: 178\n> release-14: 220\n> release-15: 184\n> release-16: 206\n> release-17: 188\n>\n> I welcome feedback. For some reason it was an easier job than usual.\n\nThanks a lot for this work Bruce! 
It looks like commit\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=91f2cae7a4e664e9c0472b364c7db29d755ab151\nis missing from draft release notes. Just curious to know if it's\nintentional or a miss out.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 10 May 2024 13:54:30 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 10, 2024 at 01:54:30PM +0530, Bharath Rupireddy wrote:\n> On Thu, May 9, 2024 at 9:34 AM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n> > It will be improved until the final release. The item count is 188,\n> > which is similar to recent releases:\n> >\n> > release-10: 189\n> > release-11: 170\n> > release-12: 180\n> > release-13: 178\n> > release-14: 220\n> > release-15: 184\n> > release-16: 206\n> > release-17: 188\n> >\n> > I welcome feedback. For some reason it was an easier job than usual.\n> \n> Thanks a lot for this work Bruce! It looks like commit\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=91f2cae7a4e664e9c0472b364c7db29d755ab151\n> is missing from draft release notes. Just curious to know if it's\n> intentional or a miss out.\n\nI did not mention it because the commit didn't mention any performance\nbenefit and it seemed more like an internal change than something people\nneeded to know about. I could reword and merge it into this item, if\nyou think I should:\n\n\t Improve performance of heavily-contended WAL writes (Bharath Rupireddy) \n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 10 May 2024 09:50:01 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "\tBruce Momjian wrote:\n\n> have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n> \n> https://momjian.us/pgsql_docs/release-17.html\n\nIn the psql items, I'd suggest mentioning\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=90f5178\n\nFor the short description, maybe something like that:\n\n- Improve FETCH_COUNT to work with all queries (Daniel Vérité)\nPreviously, results would be fetched in chunks only for queries\nthat start with the SELECT keyword.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 10 May 2024 18:29:11 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 9 May 2024 at 06:04, Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n\nGreat work!\n\nThere are two commits that I think would benefit from being listed\n(but maybe they are already listed and I somehow missed them, or they\nare left out on purpose for some reason):\n\n- c4ab7da60617f020e8d75b1584d0754005d71830\n- cafe1056558fe07cdc52b95205588fcd80870362\n\n\n", "msg_date": "Fri, 10 May 2024 18:50:54 
+0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 10, 2024 at 06:29:11PM +0200, Daniel Verite wrote:\n> \tBruce Momjian wrote:\n> \n> > have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> > \n> > https://momjian.us/pgsql_docs/release-17.html\n> \n> In the psql items, I'd suggest mentioning\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=90f5178\n> \n> For the short description, maybe something like that:\n> \n> - Improve FETCH_COUNT to work with all queries (Daniel Vérité)\n> Previously, results would be fetched in chunks only for queries\n> that start with the SELECT keyword.\n\nAgreed, patch attached and applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 10 May 2024 15:47:04 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Mhd\n\nEnviado desde Outlook para Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Bruce Momjian <[email protected]>\nSent: Friday, May 10, 2024 4:47:04 PM\nTo: Daniel Verite <[email protected]>\nCc: PostgreSQL-development <[email protected]>\nSubject: Re: First draft of PG 17 release notes\n\nOn Fri, May 10, 2024 at 06:29:11PM +0200, Daniel Verite wrote:\n> Bruce Momjian wrote:\n>\n> > have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n>\n> In the psql items, I'd suggest mentioning\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=90f5178\n>\n> For the short description, maybe something like that:\n>\n> - Improve FETCH_COUNT to work with all queries (Daniel Vérité)\n> Previously, results would be fetched in chunks only for queries\n> that start with the SELECT keyword.\n\nAgreed, patch attached and applied.\n\n--\n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n\n\n\n\nMhd \n\n\nEnviado desde \nOutlook para Android\n\nFrom: Bruce Momjian <[email protected]>\nSent: Friday, May 10, 2024 4:47:04 PM\nTo: Daniel Verite <[email protected]>\nCc: PostgreSQL-development <[email protected]>\nSubject: Re: First draft of PG 17 release notes\n \n\n\nOn Fri, May 10, 2024 at 06:29:11PM +0200, Daniel Verite wrote:\n>        Bruce Momjian wrote:\n> \n> >  have committed the first draft of the PG 17 release notes;  you can\n> > see the results here:\n> > \n> >         https://momjian.us/pgsql_docs/release-17.html\n> \n> In the psql items, I'd suggest mentioning\n> \n> \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=90f5178\n> \n> For the short description, maybe something like that:\n> \n> - Improve FETCH_COUNT to work with all queries (Daniel Vérité)\n> Previously, results would be fetched in chunks only for queries\n> that start with the SELECT keyword.\n\nAgreed, patch attached and applied.\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Only you can decide what is important to you.", "msg_date": "Fri, 10 May 2024 20:58:02 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First 
draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 10, 2024 at 06:50:54PM +0200, Jelte Fennema-Nio wrote:\n> On Thu, 9 May 2024 at 06:04, Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> \n> Great work!\n> \n> There are two commits that I think would benefit from being listed\n> (but maybe they are already listed and I somehow missed them, or they\n> are left out on purpose for some reason):\n\nI looked at both of these. In both cases I didn't see why the user\nwould need to know these changes were made:\n\n---------------------------------------------------------------------------\n\n> - c4ab7da60617f020e8d75b1584d0754005d71830\n\n\tcommit c4ab7da6061\n\tAuthor: David Rowley <[email protected]>\n\tDate: Sun Apr 7 21:20:18 2024 +1200\n\t\n\t Avoid needless large memcpys in libpq socket writing\n\t\n\t Until now, when calling pq_putmessage to write new data to a libpq\n\t socket, all writes are copied into a buffer and that buffer gets flushed\n\t when full to avoid having to perform small writes to the socket.\n\t\n\t There are cases where we must write large amounts of data to the socket,\n\t sometimes larger than the size of the buffer. In this case, it's\n\t wasteful to memcpy this data into the buffer and flush it out, instead,\n\t we can send it directly from the memory location that the data is already\n\t stored in.\n\t\n\t Here we adjust internal_putbytes() so that after having just flushed the\n\t buffer to the socket, if the remaining bytes to send is as big or bigger\n\t than the buffer size, we just send directly rather than needlessly\n\t copying into the PqSendBuffer buffer first.\n\t\n\t Examples of operations that write large amounts of data in one message\n\t are; outputting large tuples with SELECT or COPY TO STDOUT and\n\t pg_basebackup.\n\t\n\t Author: Melih Mutlu\n\t Reviewed-by: Heikki Linnakangas\n\t Reviewed-by: Jelte Fennema-Nio\n\t Reviewed-by: David Rowley\n\t Reviewed-by: Ranier Vilela\n\t Reviewed-by: Andres Freund\n\t Discussion: https://postgr.es/m/CAGPVpCR15nosj0f6xe-c2h477zFR88q12e6WjEoEZc8ZYkTh3Q@mail.gmail.com\n\n> - cafe1056558fe07cdc52b95205588fcd80870362\n\n\tcommit cafe1056558\n\tAuthor: Robert Haas <[email protected]>\n\tDate: Tue Apr 2 10:26:10 2024 -0400\n\t\n\t Allow SIGINT to cancel psql database reconnections.\n\t\n\t After installing the SIGINT handler in psql, SIGINT can no longer cancel\n\t database reconnections. 
For instance, if the user starts a reconnection\n\t and then needs to do some form of interaction (ie psql is polling),\n\t there is no way to cancel the reconnection process currently.\n\t\n\t Use PQconnectStartParams() in order to insert a cancel_pressed check\n\t into the polling loop.\n\t\n\t Tristan Partin, reviewed by Gurjeet Singh, Heikki Linnakangas, Jelte\n\t Fennema-Nio, and me.\n\t\n\t Discussion: http://postgr.es/m/[email protected]\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 10 May 2024 17:21:01 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Fri, May 10, 2024 at 06:50:54PM +0200, Jelte Fennema-Nio wrote:\n>> There are two commits that I think would benefit from being listed\n>> (but maybe they are already listed and I somehow missed them, or they\n>> are left out on purpose for some reason):\n\n> I looked at both of these. In both cases I didn't see why the user\n> would need to know these changes were made:\n\nI agree that the buffering change is not likely interesting, but\nthe fact that you can now control-C out of a psql \"\\c\" command\nis user-visible. People might have internalized the fact that\nit didn't work, or created complicated workarounds.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 May 2024 17:31:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 10, 2024 at 05:31:33PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Fri, May 10, 2024 at 06:50:54PM +0200, Jelte Fennema-Nio wrote:\n> >> There are two commits that I think would benefit from being listed\n> >> (but maybe they are already listed and I somehow missed them, or they\n> >> are left out on purpose for some reason):\n> \n> > I looked at both of these. In both cases I didn't see why the user\n> > would need to know these changes were made:\n> \n> I agree that the buffering change is not likely interesting, but\n> the fact that you can now control-C out of a psql \"\\c\" command\n> is user-visible. People might have internalized the fact that\n> it didn't work, or created complicated workarounds.\n\nIt was not clear to me what the user-visible behavior was with the\nSIGINT control. Yes, based on your details, it should be mentioned.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 10 May 2024 17:37:25 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "\nHello Bruce,\n\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> \thttps://momjian.us/pgsql_docs/release-17.html\n\nThank you for working on this!\n\n> I welcome feedback. For some reason it was an easier job than usual.\n\nDo you think we need to add the following 2 items?\n\n- 9f133763961e280d8ba692bcad0b061b861e9138 this is an optimizer\n transform improvement.\n\n- a8a968a8212ee3ef7f22795c834b33d871fac262 this is an optimizer costing\n improvement.\n\nBoth of them can generate a better plan on some cases. 
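\n\nFor concreteness, a minimal sketch of the query shapes these two commits target (hypothetical tables t1, t2, big_table_1 and big_table_2, purely for illustration):\n\n-- 9f13376396: a correlated IN sub-query of this shape can now be pulled up\n-- into a semi-join, so more join methods and join orders become available.\nSELECT * FROM t1\nWHERE t1.a IN (SELECT t2.a FROM t2 WHERE t2.b = t1.b);\n\n-- a8a968a821: a LIMIT above a flattened UNION ALL; the small row-count target\n-- can now favour a cheap-startup plan in each branch (for example a\n-- nested loop instead of a hash join).\n(SELECT * FROM big_table_1)\nUNION ALL\n(SELECT * FROM big_table_2)\nLIMIT 10;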
\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sat, 11 May 2024 13:27:25 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, 11 May 2024 at 17:32, Andy Fan <[email protected]> wrote:\n> Do you think we need to add the following 2 items?\n>\n> - 9f133763961e280d8ba692bcad0b061b861e9138 this is an optimizer\n> transform improvement.\n\nI think this should be in the release notes.\n\nSuggest:\n\n* Allow correlated IN subqueries to be transformed into joins (Andy\nFan, Tom Lane)\n\n> - a8a968a8212ee3ef7f22795c834b33d871fac262 this is an optimizer costing\n> improvement.\n>\n> Both of them can generate a better plan on some cases.\n\nI think this should be present too.\n\nSuggest:\n\n* Improve optimizer's ability to use cheap startup plans when querying\npartitioned tables, inheritance parents and for UNION ALL (Andy Fan,\nDavid Rowley)\n\nBoth under \"E.1.3.1.1. Optimizer\"\n\nDavid\n\n\n", "msg_date": "Sat, 11 May 2024 17:57:31 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, 10 May 2024 at 23:31, Tom Lane <[email protected]> wrote:\n>\n> Bruce Momjian <[email protected]> writes:\n> > I looked at both of these. In both cases I didn't see why the user\n> > would need to know these changes were made:\n>\n> I agree that the buffering change is not likely interesting, but\n> the fact that you can now control-C out of a psql \"\\c\" command\n> is user-visible. People might have internalized the fact that\n> it didn't work, or created complicated workarounds.\n\nThe buffering change improved performance up to ~40% in some of the\nbenchmarks. The case it improves mostly is COPY of large rows and\nstreaming a base backup. That sounds user-visible enough to me to\nwarrant an entry imho.\n\n\n", "msg_date": "Sat, 11 May 2024 15:57:49 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 5/11/24 09:57, Jelte Fennema-Nio wrote:\n> On Fri, 10 May 2024 at 23:31, Tom Lane <[email protected]> wrote:\n>>\n>> Bruce Momjian <[email protected]> writes:\n>> > I looked at both of these. In both cases I didn't see why the user\n>> > would need to know these changes were made:\n>>\n>> I agree that the buffering change is not likely interesting, but\n>> the fact that you can now control-C out of a psql \"\\c\" command\n>> is user-visible. People might have internalized the fact that\n>> it didn't work, or created complicated workarounds.\n> \n> The buffering change improved performance up to ~40% in some of the\n> benchmarks. The case it improves mostly is COPY of large rows and\n> streaming a base backup. That sounds user-visible enough to me to\n> warrant an entry imho.\n\n+1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Sat, 11 May 2024 10:24:39 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "\nOn 2024-05-09 Th 00:03, Bruce Momjian wrote:\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> \thttps://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. 
The item count is 188,\n> which is similar to recent releases:\n>\n> \trelease-10: 189\n> \trelease-11: 170\n> \trelease-12: 180\n> \trelease-13: 178\n> \trelease-14: 220\n> \trelease-15: 184\n> \trelease-16: 206\n> \trelease-17: 188\n>\n> I welcome feedback. For some reason it was an easier job than usual.\n\n\nI don't like blowing my own horn but I feel commit 3311ea86ed \"Introduce \na non-recursive JSON parser\" should be in the release notes. This isn't \nsomething that's purely internal, but it could be used by an extension \nor a client program to parse JSON documents that are too large to handle \nwith the existing API.\n\nMaybe \"Introduce an incremental JSON parser\" would have been a better \nheadline.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 11 May 2024 15:32:55 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 10, 2024 at 7:20 PM Bruce Momjian <[email protected]> wrote:\n>\n> > Thanks a lot for this work Bruce! It looks like commit\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=91f2cae7a4e664e9c0472b364c7db29d755ab151\n> > is missing from daft release notes. Just curious to know if it's\n> > intentional or a miss out.\n>\n> I did not mention it because the commit didn't mention any performance\n> benefit and it seemed more like an internal change than something people\n> needed to know about.\n\nYes, it's an internal feature for someone not using Direct IO for WAL\nand helps achieve things mentioned at\nhttps://www.postgresql.org/message-id/flat/20230125211540.zylu74dj2uuh3k7w%40awork3.anarazel.de#0cac0a0d219129e32329831adea05db5\n(I'm hoping to target them for PG18). It starts to show visible\nbenefits if someone enables direct IO for WAL (for whatever reasons)\nhttps://www.postgresql.org/message-id/CALj2ACV6rS%2B7iZx5%2BoAvyXJaN4AG-djAQeM1mrM%3DYSDkVrUs7g%40mail.gmail.com\nand https://www.postgresql.org/message-id/20230127061745.46yu4ksitzociwkt%40awork3.anarazel.de.\n\nI'm okay if 91f2cae7 is left out for the reason that Direct IO for WAL\nisn't something used in production and debug_io_direct is a developer\noption.\n\n> I could reword and merge it into this item, if\n> you think I should:\n>\n> Improve performance of heavily-contended WAL writes (Bharath Rupireddy)\n\nI think both the commits are for different purposes - one is for WAL\nwrties, another is for WAL reads.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 13 May 2024 12:46:38 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 10, 2024 at 05:31:33PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Fri, May 10, 2024 at 06:50:54PM +0200, Jelte Fennema-Nio wrote:\n> >> There are two commits that I think would benefit from being listed\n> >> (but maybe they are already listed and I somehow missed them, or they\n> >> are left out on purpose for some reason):\n> \n> > I looked at both of these. In both cases I didn't see why the user\n> > would need to know these changes were made:\n> \n> I agree that the buffering change is not likely interesting, but\n> the fact that you can now control-C out of a psql \"\\c\" command\n> is user-visible. 
People might have internalized the fact that\n> it didn't work, or created complicated workarounds.\n\nAgreed, attached patch applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Mon, 13 May 2024 20:30:59 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, May 11, 2024 at 10:24:39AM -0400, Joe Conway wrote:\n> On 5/11/24 09:57, Jelte Fennema-Nio wrote:\n> > On Fri, 10 May 2024 at 23:31, Tom Lane <[email protected]> wrote:\n> > > \n> > > Bruce Momjian <[email protected]> writes:\n> > > > I looked at both of these. In both cases I didn't see why the user\n> > > > would need to know these changes were made:\n> > > \n> > > I agree that the buffering change is not likely interesting, but\n> > > the fact that you can now control-C out of a psql \"\\c\" command\n> > > is user-visible. People might have internalized the fact that\n> > > it didn't work, or created complicated workarounds.\n> > \n> > The buffering change improved performance up to ~40% in some of the\n> > benchmarks. The case it improves mostly is COPY of large rows and\n> > streaming a base backup. That sounds user-visible enough to me to\n> > warrant an entry imho.\n> \n> +1\n\nAttached patch applied.\n\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Mon, 13 May 2024 20:56:26 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, May 11, 2024 at 01:27:25PM +0800, Andy Fan wrote:\n> \n> Hello Bruce,\n> \n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > \thttps://momjian.us/pgsql_docs/release-17.html\n> \n> Thank you for working on this!\n> \n> > I welcome feedback. For some reason it was an easier job than usual.\n> \n> Do you think we need to add the following 2 items?\n> \n> - 9f133763961e280d8ba692bcad0b061b861e9138 this is an optimizer\n> transform improvement.\n\nIt was unclear from the commit message exactly what user-visible\noptimization this allowed. 
Do you have details?\n\n> - a8a968a8212ee3ef7f22795c834b33d871fac262 this is an optimizer costing\n> improvement.\n\nDoes this allow faster UNION ALL with LIMIT, perhaps?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 13 May 2024 20:59:42 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "jian he <[email protected]> 于2024年5月9日周四 18:00写道:\n\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n>\n> another potential incompatibilities issue:\n> ALTER TABLE DROP PRIMARY KEY\n>\n> see:\n>\n> https://www.postgresql.org/message-id/202404181849.6frtmajobe27%40alvherre.pgsql\n>\n>\nSince Alvaro has reverted all changes to not-null constraints, so will not\nhave potential incompatibilities issue.\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\njian he <[email protected]> 于2024年5月9日周四 18:00写道:On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes;  you can\n> see the results here:\n>\n>         https://momjian.us/pgsql_docs/release-17.html\n>\n\nanother potential incompatibilities issue:\nALTER TABLE DROP PRIMARY KEY\n\nsee:\nhttps://www.postgresql.org/message-id/202404181849.6frtmajobe27%40alvherre.pgsql\n\nSince Alvaro has reverted all changes to not-null constraints, so will not have potential incompatibilities issue.-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Tue, 14 May 2024 10:22:35 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "\nBruce Momjian <[email protected]> writes:\n\n> On Sat, May 11, 2024 at 01:27:25PM +0800, Andy Fan wrote:\n>> \n>> Hello Bruce,\n>> \n>> > I have committed the first draft of the PG 17 release notes; you can\n>> > see the results here:\n>> >\n>> > \thttps://momjian.us/pgsql_docs/release-17.html\n>> \n>> Thank you for working on this!\n>> \n>> > I welcome feedback. For some reason it was an easier job than usual.\n>> \n>> Do you think we need to add the following 2 items?\n>> \n>> - 9f133763961e280d8ba692bcad0b061b861e9138 this is an optimizer\n>> transform improvement.\n>\n> It was unclear from the commit message exactly what user-visible\n> optimization this allowed. Do you have details?\n\nYes, It allows the query like \"SELECT * FROM t1 WHERE t1.a in (SELECT a\nFROM t2 WHERE t2.b = t1.b)\" be pulled up a semi join, hence more join\nmethods / join orders are possible.\n\n>\n>> - a8a968a8212ee3ef7f22795c834b33d871fac262 this is an optimizer costing\n>> improvement.\n>\n> Does this allow faster UNION ALL with LIMIT, perhaps?\n\nYes, for example: (subquery-1) UNION ALL (subquery-2) LIMIT n;\n\nWhen planning the subquery-1 or subquery-2, limit N should be\nconsidered. As a consequence, maybe hash join should be replaced with\nNested Loop. 
Before this commits, it is ignored if it is flatten into \nappendrel, and the \"flatten\" happens very often.\n\nDavid provided a summary for the both commits in [1].\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvqAQgq27LgYmJ85VVGTR0%3DhRW6HHq2oZgK0ZiYC_a%2BEww%40mail.gmail.com \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 14 May 2024 10:32:14 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi everybody,\n\n\nBeing a technical writer, I attached a small patch that fixes minor \nlanguage stuff.\n\nThank you.\n\n\nElena Indrupskaya\n\nPostgres Professional Company\n\nMoscow, Russia\n\n\nOn 09.05.2024 07:03, Bruce Momjian wrote:\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> \thttps://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n>\n> \trelease-10: 189\n> \trelease-11: 170\n> \trelease-12: 180\n> \trelease-13: 178\n> \trelease-14: 220\n> \trelease-15: 184\n> \trelease-16: 206\n> \trelease-17: 188\n>\n> I welcome feedback. For some reason it was an easier job than usual.\n>", "msg_date": "Tue, 14 May 2024 13:34:56 +0300", "msg_from": "Elena Indrupskaya <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, 14 May 2024 at 02:56, Bruce Momjian <[email protected]> wrote:\n>\n> On Sat, May 11, 2024 at 10:24:39AM -0400, Joe Conway wrote:\n> > On 5/11/24 09:57, Jelte Fennema-Nio wrote:\n> > > The buffering change improved performance up to ~40% in some of the\n> > > benchmarks. The case it improves mostly is COPY of large rows and\n> > > streaming a base backup. That sounds user-visible enough to me to\n> > > warrant an entry imho.\n> >\n> > +1\n>\n> Attached patch applied.\n\nI think we shouldn't list this under the libpq changes and shouldn't\nmention libpq in the description, since this patch changes\nsrc/backend/libpq files instead of src/interfaces/libpq files. I think\nit should be in the \"General performance\" section and describe the\nchange as something like the below:\n\nImprove performance when transferring large blocks of data to a client\n\nPS. I completely understand that this was not clear from the commit message.\n\n\n", "msg_date": "Tue, 14 May 2024 14:20:24 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 5:03 AM Bruce Momjian <[email protected]> wrote\n>\n>\n> I welcome feedback. 
For some reason it was an easier job than usual.\n\nThis looks better if \"more case\" -> \"more cases\" :\n> Allow query nodes to be run in parallel in more case (Tom Lane)\n\n\n", "msg_date": "Tue, 14 May 2024 14:58:41 +0100", "msg_from": "Pantelis Theodosiou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:04 AM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n\nI had two comments:\n\n--------\nI think the read stream item:\n\n\"Allow the grouping of file system reads with the new system variable\nio_combine_limit\"\n\nMight be better if it mentions the effect, like:\n\n\"Reduce system calls by automatically merging reads up to io_combine_limit\"\n-------\nFor the vacuum feature:\n\n\"Allow vacuum to more efficiently remove and freeze tuples\"\n\nI think that we need to more clearly point out the implications of the\nfeature added by Sawada-san (and reviewed by John) in 667e65aac35497.\nVacuum no longer uses a fixed amount of memory for dead tuple TID\nstorage and it is not preallocated. This affects users as they may\nwant to change their configuration (and expectations).\n\nIf you make that item more specific to their work, you should also\nremove my name, as the work I did on vacuum this release was unrelated\nto their work on dead tuple TID storage.\n\nThe work Heikki and I did which culminated in 6dbb490261 mainly has\nthe impact of improving vacuum's performance (vacuum emits less WAL\nand is more efficient). So you could argue for removing it from the\nrelease notes if you are using the requirement that performance\nimprovements don't go in the release notes.\n\nHowever, one of the preliminary commits for this f83d70976 does change\nWAL format. There are three WAL records which no longer exist as\nseparate records. Do users care about this?\n\n- Melanie\n\n\n", "msg_date": "Tue, 14 May 2024 15:39:26 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 14, 2024 at 10:32:14AM +0800, Andy Fan wrote:\n> Bruce Momjian <[email protected]> writes:\n> > It was unclear from the commit message exactly what user-visible\n> > optimization this allowed. Do you have details?\n> \n> Yes, It allows the query like \"SELECT * FROM t1 WHERE t1.a in (SELECT a\n> FROM t2 WHERE t2.b = t1.b)\" be pulled up a semi join, hence more join\n> methods / join orders are possible.\n> \n> \n> Yes, for example: (subquery-1) UNION ALL (subquery-2) LIMIT n;\n> \n> When planning the subquery-1 or subquery-2, limit N should be\n> considered. As a consequence, maybe hash join should be replaced with\n> Nested Loop. 
Before this commits, it is ignored if it is flatten into \n> appendrel, and the \"flatten\" happens very often.\n> \n> David provided a summary for the both commits in [1].\n\nOkay, attached patch applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Tue, 14 May 2024 20:37:19 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, May 11, 2024 at 03:32:55PM -0400, Andrew Dunstan wrote:\n> \n> On 2024-05-09 Th 00:03, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> > \n> > \thttps://momjian.us/pgsql_docs/release-17.html\n> > \n> > It will be improved until the final release. The item count is 188,\n> > which is similar to recent releases:\n> > \n> > \trelease-10: 189\n> > \trelease-11: 170\n> > \trelease-12: 180\n> > \trelease-13: 178\n> > \trelease-14: 220\n> > \trelease-15: 184\n> > \trelease-16: 206\n> > \trelease-17: 188\n> > \n> > I welcome feedback. For some reason it was an easier job than usual.\n> \n> \n> I don't like blowing my own horn but I feel commit 3311ea86ed \"Introduce a\n> non-recursive JSON parser\" should be in the release notes. This isn't\n> something that's purely internal, but it could be used by an extension or a\n> client program to parse JSON documents that are too large to handle with the\n> existing API.\n> \n> Maybe \"Introduce an incremental JSON parser\" would have been a better\n> headline.\n\nWell, this gets into a level of detail that is beyond the average\nreader. I think at that level people will need to read the git logs or\nreview the code. Do we use it for anything yet?\n\nIt could be put in the source code section but I try to only have\nuser-visible stuff there.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 14 May 2024 20:39:29 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 14, 2024 at 01:34:56PM +0300, Elena Indrupskaya wrote:\n> Being a technical writer, I attached a small patch that fixes minor language\n> stuff.\n\nYou are absolutely correct. Patch applied, thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 14 May 2024 20:43:33 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 14, 2024 at 02:20:24PM +0200, Jelte Fennema-Nio wrote:\n> On Tue, 14 May 2024 at 02:56, Bruce Momjian <[email protected]> wrote:\n> >\n> > On Sat, May 11, 2024 at 10:24:39AM -0400, Joe Conway wrote:\n> > > On 5/11/24 09:57, Jelte Fennema-Nio wrote:\n> > > > The buffering change improved performance up to ~40% in some of the\n> > > > benchmarks. The case it improves mostly is COPY of large rows and\n> > > > streaming a base backup. 
That sounds user-visible enough to me to\n> > > > warrant an entry imho.\n> > >\n> > > +1\n> >\n> > Attached patch applied.\n> \n> I think we shouldn't list this under the libpq changes and shouldn't\n> mention libpq in the description, since this patch changes\n> src/backend/libpq files instead of src/interfaces/libpq files. I think\n> it should be in the \"General performance\" section and describe the\n> change as something like the below:\n> \n> Improve performance when transferring large blocks of data to a client\n> \n> PS. I completely understand that this was not clear from the commit message.\n\nOkay, I went with your wording. Attached patch applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Tue, 14 May 2024 20:47:10 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 14, 2024 at 02:58:41PM +0100, Pantelis Theodosiou wrote:\n> On Thu, May 9, 2024 at 5:03 AM Bruce Momjian <[email protected]> wrote\n> >\n> >\n> > I welcome feedback. For some reason it was an easier job than usual.\n> \n> This looks better if \"more case\" -> \"more cases\" :\n> > Allow query nodes to be run in parallel in more case (Tom Lane)\n\nYes, you are correct, fixed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 14 May 2024 20:48:21 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 14, 2024 at 03:39:26PM -0400, Melanie Plageman wrote:\n> On Thu, May 9, 2024 at 12:04 AM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> \n> I had two comments:\n> \n> --------\n> I think the read stream item:\n> \n> \"Allow the grouping of file system reads with the new system variable\n> io_combine_limit\"\n> \n> Might be better if it mentions the effect, like:\n> \n> \"Reduce system calls by automatically merging reads up to io_combine_limit\"\n\nUh, as I understand it, the reduced number of system calls is not the\nvalue of the feature, but rather the ability to request a larger block\nfrom the I/O subsystem. Without it, you have to make a request and wait\nfor each request to finish. I am open to new wording, but I am not sure\nyour new wording is accurate.\n\n> -------\n> For the vacuum feature:\n> \n> \"Allow vacuum to more efficiently remove and freeze tuples\"\n> \n> I think that we need to more clearly point out the implications of the\n> feature added by Sawada-san (and reviewed by John) in 667e65aac35497.\n> Vacuum no longer uses a fixed amount of memory for dead tuple TID\n> storage and it is not preallocated. This affects users as they may\n> want to change their configuration (and expectations).\n> \n> If you make that item more specific to their work, you should also\n> remove my name, as the work I did on vacuum this release was unrelated\n> to their work on dead tuple TID storage.\n> \n> The work Heikki and I did which culminated in 6dbb490261 mainly has\n> the impact of improving vacuum's performance (vacuum emits less WAL\n> and is more efficient). 
So you could argue for removing it from the\n> release notes if you are using the requirement that performance\n> improvements don't go in the release notes.\n> \n> However, one of the preliminary commits for this f83d70976 does change\n> WAL format. There are three WAL records which no longer exist as\n> separate records. Do users care about this?\n\nI don't think users really care about these details, just that it is\nfaster so they will not be surprised if there is a change. I was\npurposely vague to group multiple commits into the single item. By\ngrouping them together, I got enough impact to warrant listing it. If\nyou split it apart, it is harder to justify mentioning them.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 14 May 2024 21:00:50 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 10:48 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, May 9, 2024 at 02:17:12PM +0900, Masahiko Sawada wrote:\n> > Hi,\n> >\n>\n> > Also, please consider the following item:\n> >\n> > - Improve eviction algorithm in ReorderBuffer using max-heap for many\n> > subtransactions (5bec1d6bc)\n>\n> I looked at that item and I don't have a generic \"make logical\n> replication apply faster\" item to merge it into, and many\n> subtransactions seemed like enough of an edge-case that I didn't think\n> mentioning it make sense. Can you see a good place to add it?\n\nI think that since many subtransactions cases are no longer becoming\nedge-cases these days, we needed to improve that and it might be\nhelpful for users to mention it. 
How about the following item for\nexample?\n\nImprove logical decoding performance in cases where there are many\nsubtransactions.\n\n>\n> > Finally, should we mention the following commit in the release note?\n> > It's not a user-visible change but added a new regression test module.\n> >\n> > - Add tests for XID wraparound (e255b646a)\n>\n> I don't normally add testing infrastructure changes unless they are\n> major.\n\nI've seen we had such item, for example in PG14 release note:\n\nAdd a test module for the regular expression package (Tom Lane)\n\nBut if our policy has already changed, I'm okay with not mentioning\nthe xid_wraparound test in the PG17 release note.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 10:10:28 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 14, 2024 at 10:22:35AM +0800, Tender Wang wrote:\n> \n> \n> jian he <[email protected]> 于2024年5月9日周四 18:00写道:\n> \n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes;  you can\n> > see the results here:\n> >\n> >         https://momjian.us/pgsql_docs/release-17.html\n> >\n> \n> another potential incompatibilities issue:\n> ALTER TABLE DROP PRIMARY KEY\n> \n> see:\n> https://www.postgresql.org/message-id/\n> 202404181849.6frtmajobe27%40alvherre.pgsql\n> \n> \n> \n> Since Alvaro has reverted all changes to not-null constraints, so will not have\n> potential incompatibilities issue.\n\nAgreed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 14 May 2024 22:02:11 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, 15 May 2024 at 13:00, Bruce Momjian <[email protected]> wrote:\n>\n> On Tue, May 14, 2024 at 03:39:26PM -0400, Melanie Plageman wrote:\n> > \"Reduce system calls by automatically merging reads up to io_combine_limit\"\n>\n> Uh, as I understand it, the reduced number of system calls is not the\n> value of the feature, but rather the ability to request a larger block\n> from the I/O subsystem. Without it, you have to make a request and wait\n> for each request to finish. I am open to new wording, but I am not sure\n> your new wording is accurate.\n\nI think you have the cause and effect backwards. There's no advantage\nto reading 128KB if you only need 8KB. It's the fact that doing\n*larger* reads allows *fewer* reads that allows it to be more\nefficient. There are also the efficiency gains from fadvise\nPOSIX_FADV_WILLNEED. 
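(As a purely illustrative sketch of what this looks like from SQL -- big_table is only a\nstand-in name, not something from this thread:\n\nSHOW io_combine_limit;           -- defaults to 128kB in v17\nSET io_combine_limit = '256kB';  -- allow larger combined reads in this session\nEXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;  -- seq scans go through the read stream\n\nso users do have a knob they may want to experiment with.)\n\n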
I'm unsure how to jam that into a short sentence.\nMaybe; \"Optimize reading of tables by allowing pages to be prefetched\nand read in chunks up to io_combine_limit\", or a bit more buzzy;\n\"Optimize reading of tables by allowing pages to be prefetched and\nperforming vectored reads in chunks up to io_combine_limit\".\n\nDavid\n\n\n", "msg_date": "Wed, 15 May 2024 14:03:32 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 02:03:32PM +1200, David Rowley wrote:\n> On Wed, 15 May 2024 at 13:00, Bruce Momjian <[email protected]> wrote:\n> >\n> > On Tue, May 14, 2024 at 03:39:26PM -0400, Melanie Plageman wrote:\n> > > \"Reduce system calls by automatically merging reads up to io_combine_limit\"\n> >\n> > Uh, as I understand it, the reduced number of system calls is not the\n> > value of the feature, but rather the ability to request a larger block\n> > from the I/O subsystem. Without it, you have to make a request and wait\n> > for each request to finish. I am open to new wording, but I am not sure\n> > your new wording is accurate.\n> \n> I think you have the cause and effect backwards. There's no advantage\n> to reading 128KB if you only need 8KB. It's the fact that doing\n> *larger* reads allows *fewer* reads that allows it to be more\n> efficient. There are also the efficiency gains from fadvise\n> POSIX_FADV_WILLNEED. I'm unsure how to jam that into a short sentence.\n> Maybe; \"Optimize reading of tables by allowing pages to be prefetched\n> and read in chunks up to io_combine_limit\", or a bit more buzzy;\n> \"Optimize reading of tables by allowing pages to be prefetched and\n> performing vectored reads in chunks up to io_combine_limit\".\n\nYes, my point is that it is not the number of system calls or system\ncall overhead that is the advantage of this patch, but the ability to\nrequest more of the I/O system in one call, which is not the same as\nsystem calls.\n\nI can use your wording, but how much prefetching to we enable now?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 14 May 2024 22:06:17 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 10:10:28AM +0900, Masahiko Sawada wrote:\n> > I looked at that item and I don't have a generic \"make logical\n> > replication apply faster\" item to merge it into, and many\n> > subtransactions seemed like enough of an edge-case that I didn't think\n> > mentioning it make sense. Can you see a good place to add it?\n> \n> I think that since many subtransactions cases are no longer becoming\n> edge-cases these days, we needed to improve that and it might be\n> helpful for users to mention it. 
How about the following item for\n> example?\n> \n> Improve logical decoding performance in cases where there are many\n> subtransactions.\n\nOkay, item added in the attached applied patch.\n\n> > > Finally, should we mention the following commit in the release note?\n> > > It's not a user-visible change but added a new regression test module.\n> > >\n> > > - Add tests for XID wraparound (e255b646a)\n> >\n> > I don't normally add testing infrastructure changes unless they are\n> > major.\n> \n> I've seen we had such item, for example in PG14 release note:\n> \n> Add a test module for the regular expression package (Tom Lane)\n> \n> But if our policy has already changed, I'm okay with not mentioning\n> the xid_wraparound test in the PG17 release note.\n\nUh, that PG 14 test suite was huge and flushed out a lot of bugs, not\nonly in our regex code but I think in the TCL/Henry Spencer regex\nlibrary we inherited.\n\nWe add 10-40 tests every year, and how many do I mention in the release\nnotes? You had to go back to PG 14 to find one. We have not changed\nour release note \"test item\" criteria --- I only mention tests that are\nsignificant to our userbase. I think that test suite was significant to\nanyone using the TCL/Henry Spencer regex library.\n\nIf you want your test mentioned, you have to explain why it is useful\nfor users to know about it, or the value it brings them.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Tue, 14 May 2024 22:20:11 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, 15 May 2024 at 14:06, Bruce Momjian <[email protected]> wrote:\n> I can use your wording, but how much prefetching to we enable now?\n\nI believe the read stream API is used for Seq Scan, ANALYZE and\npg_prewarm(). fadvise() is used when the next buffer that's required\nis not in shared buffers on any build that has defined\nHAVE_DECL_POSIX_FADVISE.\n\nDavid\n\n\n", "msg_date": "Wed, 15 May 2024 14:24:15 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-May-14, Bruce Momjian wrote:\n\n> On Tue, May 14, 2024 at 03:39:26PM -0400, Melanie Plageman wrote:\n\n> > I think that we need to more clearly point out the implications of the\n> > feature added by Sawada-san (and reviewed by John) in 667e65aac35497.\n> > Vacuum no longer uses a fixed amount of memory for dead tuple TID\n> > storage and it is not preallocated. This affects users as they may\n> > want to change their configuration (and expectations).\n> > \n> > If you make that item more specific to their work, you should also\n> > remove my name, as the work I did on vacuum this release was unrelated\n> > to their work on dead tuple TID storage.\n> > \n> > The work Heikki and I did which culminated in 6dbb490261 mainly has\n> > the impact of improving vacuum's performance (vacuum emits less WAL\n> > and is more efficient). So you could argue for removing it from the\n> > release notes if you are using the requirement that performance\n> > improvements don't go in the release notes.\n> \n> I don't think users really care about these details, just that it is\n> faster so they will not be surprised if there is a change. I was\n> purposely vague to group multiple commits into the single item. 
By\n> grouping them together, I got enough impact to warrant listing it. If\n> you split it apart, it is harder to justify mentioning them.\n\nI disagree with this. IMO the impact of the Sawada/Naylor change is\nlikely to be enormous for people with large tables and large numbers of\ntuples to clean up (I know we've had a number of customers in this\nsituation, I can't imagine any Postgres service provider that doesn't).\nThe fact that maintenance_work_mem is no longer capped at 1GB is very\nimportant and I think we should mention that explicitly in the release\nnotes, as setting it higher could make a big difference in vacuum run\ntimes.\n\nI don't know what's the impact of the Plageman/Linnakangas work, but\nsince there are no user-visible consequences other than it being faster,\nI agree it could be put more succintly, perhaps together as a sub-para\nof the same item.\n\nWhat about something like this?\n\n<para>\n Lift the 1 GB allocation limit for vacuum memory usage for dead\n tuples, and make storage more compact and performant.\n</para>\n<para>\n This can reduce the number of index passes that vacuum has to perform\n for tables with many dead tuples, shortening vacuum times.\n</para>\n<para>\n Also, the WAL traffic caused by vacuum has been made more compact.\n</para>\n \n\n> > However, one of the preliminary commits for this f83d70976 does\n> > change WAL format. There are three WAL records which no longer exist\n> > as separate records. Do users care about this?\n\nI don't think so.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"You don't solve a bad join with SELECT DISTINCT\" #CupsOfFail\nhttps://twitter.com/connor_mc_d/status/1431240081726115845\n\n\n", "msg_date": "Wed, 15 May 2024 10:38:20 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, 15 May 2024 at 20:38, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-May-14, Bruce Momjian wrote:\n> > I don't think users really care about these details, just that it is\n> > faster so they will not be surprised if there is a change. I was\n> > purposely vague to group multiple commits into the single item. By\n> > grouping them together, I got enough impact to warrant listing it. If\n> > you split it apart, it is harder to justify mentioning them.\n>\n> I disagree with this. IMO the impact of the Sawada/Naylor change is\n> likely to be enormous for people with large tables and large numbers of\n> tuples to clean up (I know we've had a number of customers in this\n> situation, I can't imagine any Postgres service provider that doesn't).\n> The fact that maintenance_work_mem is no longer capped at 1GB is very\n> important and I think we should mention that explicitly in the release\n> notes, as setting it higher could make a big difference in vacuum run\n> times.\n\nI very much agree with Alvaro here. IMO, this should be on the\nhighlight feature list for v17. Prior to this, having to do multiple\nindex scans because of filling maintenance_work_mem was a performance\ntragedy. If there were enough dead tuples to have filled\nmaintenance_work_mem, then the indexes are large. Having to scan\nmultiple large indexes multiple times isn't good use of I/O and CPU.\nAs far as I understand it, this work means it'll be unlikely that a\nwell-configured server will ever have to do multiple index passes. 
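(Purely as an illustrative sketch, with a made-up table name and value:\n\nSET maintenance_work_mem = '8GB';  -- v17 vacuum can genuinely use this much for dead-tuple storage\nVACUUM (VERBOSE) big_table;        -- now much less likely to need more than one index pass\n\nwhereas before v17 vacuum simply could not use more than 1GB of that\nsetting for dead tuples, no matter how high it was set.)\n\n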
I\ndon't think \"enormous impact\" is an exaggeration here.\n\nDavid\n\n\n", "msg_date": "Wed, 15 May 2024 23:17:50 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 4:38 AM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-May-14, Bruce Momjian wrote:\n>\n> > On Tue, May 14, 2024 at 03:39:26PM -0400, Melanie Plageman wrote:\n>\n> > > I think that we need to more clearly point out the implications of the\n> > > feature added by Sawada-san (and reviewed by John) in 667e65aac35497.\n> > > Vacuum no longer uses a fixed amount of memory for dead tuple TID\n> > > storage and it is not preallocated. This affects users as they may\n> > > want to change their configuration (and expectations).\n> > >\n> > > If you make that item more specific to their work, you should also\n> > > remove my name, as the work I did on vacuum this release was unrelated\n> > > to their work on dead tuple TID storage.\n> > >\n> > > The work Heikki and I did which culminated in 6dbb490261 mainly has\n> > > the impact of improving vacuum's performance (vacuum emits less WAL\n> > > and is more efficient). So you could argue for removing it from the\n> > > release notes if you are using the requirement that performance\n> > > improvements don't go in the release notes.\n> >\n> > I don't think users really care about these details, just that it is\n> > faster so they will not be surprised if there is a change. I was\n> > purposely vague to group multiple commits into the single item. By\n> > grouping them together, I got enough impact to warrant listing it. If\n> > you split it apart, it is harder to justify mentioning them.\n>\n> I disagree with this. IMO the impact of the Sawada/Naylor change is\n> likely to be enormous for people with large tables and large numbers of\n> tuples to clean up (I know we've had a number of customers in this\n> situation, I can't imagine any Postgres service provider that doesn't).\n> The fact that maintenance_work_mem is no longer capped at 1GB is very\n> important and I think we should mention that explicitly in the release\n> notes, as setting it higher could make a big difference in vacuum run\n> times.\n>\n> I don't know what's the impact of the Plageman/Linnakangas work, but\n> since there are no user-visible consequences other than it being faster,\n> I agree it could be put more succintly, perhaps together as a sub-para\n> of the same item.\n>\n> What about something like this?\n>\n> <para>\n> Lift the 1 GB allocation limit for vacuum memory usage for dead\n> tuples, and make storage more compact and performant.\n> </para>\n> <para>\n> This can reduce the number of index passes that vacuum has to perform\n> for tables with many dead tuples, shortening vacuum times.\n> </para>\n> <para>\n> Also, the WAL traffic caused by vacuum has been made more compact.\n> </para>\n\nI think this wording and organization makes sense. 
I hadn't thought of\nusing \"traffic\" to describe this, but I like it.\n\nAlso +1 on the Sawada/Naylor change being on the highlight section of\nthe release (as David suggested upthread).\n\n- Melanie\n\n\n", "msg_date": "Wed, 15 May 2024 09:13:14 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 7:36 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Wed, May 15, 2024 at 02:03:32PM +1200, David Rowley wrote:\n> > On Wed, 15 May 2024 at 13:00, Bruce Momjian <[email protected]> wrote:\n> > >\n> > > On Tue, May 14, 2024 at 03:39:26PM -0400, Melanie Plageman wrote:\n> > > > \"Reduce system calls by automatically merging reads up to io_combine_limit\"\n> > >\n> > > Uh, as I understand it, the reduced number of system calls is not the\n> > > value of the feature, but rather the ability to request a larger block\n> > > from the I/O subsystem. Without it, you have to make a request and wait\n> > > for each request to finish. I am open to new wording, but I am not sure\n> > > your new wording is accurate.\n> >\n> > I think you have the cause and effect backwards. There's no advantage\n> > to reading 128KB if you only need 8KB. It's the fact that doing\n> > *larger* reads allows *fewer* reads that allows it to be more\n> > efficient. There are also the efficiency gains from fadvise\n> > POSIX_FADV_WILLNEED. I'm unsure how to jam that into a short sentence.\n> > Maybe; \"Optimize reading of tables by allowing pages to be prefetched\n> > and read in chunks up to io_combine_limit\", or a bit more buzzy;\n> > \"Optimize reading of tables by allowing pages to be prefetched and\n> > performing vectored reads in chunks up to io_combine_limit\".\n>\n> Yes, my point is that it is not the number of system calls or system\n> call overhead that is the advantage of this patch, but the ability to\n> request more of the I/O system in one call, which is not the same as\n> system calls.\n>\n> I can use your wording, but how much prefetching to we enable now?\n>\n\nShouldn't we need to include commit\nb5a9b18cd0bc6f0124664999b31a00a264d16913 with this item?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 May 2024 18:43:51 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n\n>> Add local I/O block read/write timing statistics columns of pg_stat_statement (Nazir Bilal Yavuz)\n>> The new columns are \"local_blk_read_time\" and \"local_blk_write_time\".\nhere, \"pg_stat_statement\" should be \"pg_stat_statements\"?\n\n\n>> Add optional fourth parameter to pg_stat_statements_reset() to allow for the resetting of only min/max statistics (Andrei Zubkov)\n>> This parameter defaults to \"false\".\nhere, \"parameter\", should be \"argument\"?\n\nmaybe\n>> Add optional fourth boolean argument (minmax_only) to pg_stat_statements_reset() to allow for the resetting of only min/max statistics (Andrei Zubkov)\n>> This argument defaults to \"false\".\n----------------------------------------------------------------\nin section: E.1.2. 
Migration to Version 17\n\n>> Rename I/O block read/write timing statistics columns of pg_stat_statement (Nazir Bilal Yavuz)\n>> This renames \"blk_read_time\" to \"shared_blk_read_time\", and \"blk_write_time\" to \"shared_blk_write_time\".\n\n\"pg_stat_statement\" should be \"pg_stat_statements\"?\n\nalso, we only mentioned, pg_stat_statements some columns name changed\nin \"E.1.2. Migration to Version 17\"\nbut if you look at the release note pg_stat_statements section,\nwe added a bunch of columns, which are far more incompatibile than\njust colunm name changes.\n\nnot sure we need add these in section \"E.1.2. Migration to Version 17\"\n\n\n", "msg_date": "Thu, 16 May 2024 10:39:18 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 09:13:14AM -0400, Melanie Plageman wrote:\n> I think this wording and organization makes sense. I hadn't thought of\n> using \"traffic\" to describe this, but I like it.\n> \n> Also +1 on the Sawada/Naylor change being on the highlight section of\n> the release (as David suggested upthread).\n\nAgreed, I went with the attached applied patch.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 15 May 2024 22:48:27 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wednesday, May 15, 2024, jian he <[email protected]> wrote:\n\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n>\n> in section: E.1.2. Migration to Version 17\n>\n> >> Rename I/O block read/write timing statistics columns of\n> pg_stat_statement (Nazir Bilal Yavuz)\n> >> This renames \"blk_read_time\" to \"shared_blk_read_time\", and\n> \"blk_write_time\" to \"shared_blk_write_time\".\n>\n> we only mentioned, pg_stat_statements some columns name changed\n> in \"E.1.2. Migration to Version 17\"\n> but if you look at the release note pg_stat_statements section,\n> we added a bunch of columns, which are far more incompatibile than\n> just colunm name changes.\n>\n> not sure we need add these in section \"E.1.2. Migration to Version 17\"\n>\n>\nNew columns are not a migration issue since nothing being migrated forward\never referenced them. Its the ones that existing code knows about that\nwe’ve removed (including renames) that matter from a migration perspective.\n\nDavid J.\n\nOn Wednesday, May 15, 2024, jian he <[email protected]> wrote:On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes;  you can\n> see the results here:\n>\n>         https://momjian.us/pgsql_docs/release-17.html\n>\nin section: E.1.2. Migration to Version 17\n\n>> Rename I/O block read/write timing statistics columns of pg_stat_statement (Nazir Bilal Yavuz)\n>> This renames \"blk_read_time\" to \"shared_blk_read_time\", and \"blk_write_time\" to \"shared_blk_write_time\".\nwe only mentioned, pg_stat_statements some columns name changed\nin \"E.1.2. 
Migration to Version 17\"\nbut if you look at the release note pg_stat_statements section,\nwe added a bunch of columns, which are far more incompatibile than\njust colunm name changes.\n\nnot sure we need add these in section \"E.1.2. Migration to Version 17\"\nNew columns are not a migration issue since nothing being migrated forward ever referenced them.  Its the ones that existing code knows about that we’ve removed (including renames) that matter from a migration perspective.David J.", "msg_date": "Wed, 15 May 2024 19:53:48 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 16, 2024 at 10:39:18AM +0800, jian he wrote:\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n> \n> >> Add local I/O block read/write timing statistics columns of pg_stat_statement (Nazir Bilal Yavuz)\n> >> The new columns are \"local_blk_read_time\" and \"local_blk_write_time\".\n> here, \"pg_stat_statement\" should be \"pg_stat_statements\"?\n\nAgreed.\n\n> >> Add optional fourth parameter to pg_stat_statements_reset() to allow for the resetting of only min/max statistics (Andrei Zubkov)\n> >> This parameter defaults to \"false\".\n> here, \"parameter\", should be \"argument\"?\n> \n> maybe\n> >> Add optional fourth boolean argument (minmax_only) to pg_stat_statements_reset() to allow for the resetting of only min/max statistics (Andrei Zubkov)\n> >> This argument defaults to \"false\".\n\nSure.\n\n> ----------------------------------------------------------------\n> in section: E.1.2. Migration to Version 17\n> \n> >> Rename I/O block read/write timing statistics columns of pg_stat_statement (Nazir Bilal Yavuz)\n> >> This renames \"blk_read_time\" to \"shared_blk_read_time\", and \"blk_write_time\" to \"shared_blk_write_time\".\n> \n> \"pg_stat_statement\" should be \"pg_stat_statements\"?\n\nYes, fixed.\n\n> also, we only mentioned, pg_stat_statements some columns name changed\n> in \"E.1.2. Migration to Version 17\"\n> but if you look at the release note pg_stat_statements section,\n> we added a bunch of columns, which are far more incompatibile than\n> just colunm name changes.\n> \n> not sure we need add these in section \"E.1.2. Migration to Version 17\"\n\nWell, new columns don't cause breakage like renamed columns, which is\nwhy I only put renames/removed columns in the migration section.\n\nAlso, thanks everyone for the release notes feedback. 
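(For the minmax_only item above, a quick illustrative sketch of the new\ncall shape, with the defaults spelled out -- nothing here is meant as\nfinal release-note wording:\n\nSELECT pg_stat_statements_reset(0, 0, 0, true);        -- reset only min/max statistics\nSELECT pg_stat_statements_reset(minmax_only => true);  -- same, using a named argument\n\nboth of which leave the cumulative counters alone.)\n\n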
In some cases I\nmade a mistake, and in some cases I misjudged the item.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 15 May 2024 22:55:47 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 16 May 2024 at 14:48, Bruce Momjian <[email protected]> wrote:\n>\n> On Wed, May 15, 2024 at 09:13:14AM -0400, Melanie Plageman wrote:\n> > Also +1 on the Sawada/Naylor change being on the highlight section of\n> > the release (as David suggested upthread).\n>\n> Agreed, I went with the attached applied patch.\n\n+Allow vacuum to more efficiently store tuple references and remove\nits memory limit (Masahiko Sawada, John Naylor)\n+</para>\n\nI don't want it to seem like I'm splitting hairs, but I'd drop the \"\nand remove its memory limit\"\n\n+<para>\n+Specifically, maintenance_work_mem and autovacuum_work_mem can now be\nconfigured to use more than one gigabyte of memory. WAL traffic\ncaused by vacuum is also more compact.\n\nI'd say the first sentence above should be written as:\n\n\"Additionally, vacuum no longer silently imposes a 1GB tuple reference\nlimit even when maintenance_work_mem or autovacuum_work_mem are set to\nhigher values\"\n\nIt's not \"Specifically\" as the \"more efficiently store tuple\nreferences\" isn't the same thing as removing the 1GB cap. Also, there\nwas never a restriction in configuring maintenance_work_mem or\nautovacuum_work_mem to values higher than 1GB. The restriction was\nthat vacuum was unable to utilize anything more than that.\n\nDavid\n\n\n", "msg_date": "Thu, 16 May 2024 15:35:17 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn 2024-05-15 10:38:20 +0200, Alvaro Herrera wrote:\n> I disagree with this. IMO the impact of the Sawada/Naylor change is\n> likely to be enormous for people with large tables and large numbers of\n> tuples to clean up (I know we've had a number of customers in this\n> situation, I can't imagine any Postgres service provider that doesn't).\n> The fact that maintenance_work_mem is no longer capped at 1GB is very\n> important and I think we should mention that explicitly in the release\n> notes, as setting it higher could make a big difference in vacuum run\n> times.\n\n+many.\n\nWe're having this debate every release. I think the ongoing reticence to note\nperformance improvements in the release notes is hurting Postgres.\n\nFor one, performance improvements are one of the prime reason users\nupgrade. Without them being noted anywhere more dense than the commit log,\nit's very hard to figure out what improved for users. A halfway widely\napplicable performance improvement is far more impactful than many of the\nfeature changes we do list in the release notes.\n\nFor another, it's also very frustrating for developers that focus on\nperformance. 
The reticence to note their work, while noting other, far\nsmaller, things in the release notes, pretty much tells us that our work isn't\nvalued.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 May 2024 20:48:02 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n\n>> Add jsonpath methods to convert JSON values to different data types (Jeevan Chalke)\n>> The jsonpath methods are .bigint(), .boolean(), .date(), .decimal([precision [, scale]]), .integer(), .number(), .string(), .time(), .time_tz(), .timestamp(), and .timestamp_tz().\n\nI think it's slightly incorrect.\n\nfor example:\nselect jsonb_path_query('\"2023-08-15\"', '$.date()');\nI think it does is trying to validate json value \"2023-08-15\" can be a\ndate value, if so, print json string out, else error out.\n\n\n\"convert JSON values to different data types\"\nmeaning that we are converting json values to another data type, date?\n\n\n", "msg_date": "Thu, 16 May 2024 16:29:38 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 16 May 2024 at 05:48, Andres Freund <[email protected]> wrote:\n> We're having this debate every release. I think the ongoing reticence to note\n> performance improvements in the release notes is hurting Postgres.\n>\n> For one, performance improvements are one of the prime reason users\n> upgrade. Without them being noted anywhere more dense than the commit log,\n> it's very hard to figure out what improved for users. A halfway widely\n> applicable performance improvement is far more impactful than many of the\n> feature changes we do list in the release notes.\n>\n> For another, it's also very frustrating for developers that focus on\n> performance. The reticence to note their work, while noting other, far\n> smaller, things in the release notes, pretty much tells us that our work isn't\n> valued.\n\n+1 to the general gist of listing every perf improvement **and memory\nusage reduction** in the release notes. Most of them are already\ngrouped together in a dedicated \"General performance\" section anyway,\nhaving that section be big would only be good imho to show that we're\ncommitted to improving perf.\n\nI think one thing would make this a lot easier though is if commits\nthat knowlingy impact perf would clearly say so in the commit message,\nbecause now it's sometimes hard to spot as someone not deeply involved\nwith the specific patch. e.g. c4ab7da606 doesn't mention performance\nat all, so I'm not surprised it wasn't listed initially. And while\n667e65aac3 states that multiple rounds of heap scanning is now\nextremely rare, it doesn't explicitly state what the kind of perf\nimpact can be expected because of that.\n\nMaybe something like introducing a common \"Perf-Improvement: true\"\nmarker in the commit message and when doing so add a clear paragraph\nexplaining the expected perf impact perf impact. Another option could\nbe to add a \"User Impact\" section to the commit message, where an\nauthor could add their suggestion for a release note entry. 
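(A hypothetical sketch of the shape that could take; everything below is\nmade up and only meant to illustrate the idea:\n\nSpeed up widget pruning by batching WAL records\n\n<implementation details>\n\nPerf-Improvement: true\nUser-Impact: bulk deletes on large tables emit noticeably less WAL; no\nconfiguration change is needed.\n\nwhich would be easy to grep for when the release notes are drafted.)\n\n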
So\nbasically this suggestion boils down to more clearly mentioning user\nimpact in commit messages, instead of mostly/only including\ntechnical/implementation details.\n\n\n", "msg_date": "Thu, 16 May 2024 11:49:24 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 5/15/24 23:48, Andres Freund wrote:\n> On 2024-05-15 10:38:20 +0200, Alvaro Herrera wrote:\n>> I disagree with this. IMO the impact of the Sawada/Naylor change is\n>> likely to be enormous for people with large tables and large numbers of\n>> tuples to clean up (I know we've had a number of customers in this\n>> situation, I can't imagine any Postgres service provider that doesn't).\n>> The fact that maintenance_work_mem is no longer capped at 1GB is very\n>> important and I think we should mention that explicitly in the release\n>> notes, as setting it higher could make a big difference in vacuum run\n>> times.\n> \n> +many.\n> \n> We're having this debate every release. I think the ongoing reticence to note\n> performance improvements in the release notes is hurting Postgres.\n> \n> For one, performance improvements are one of the prime reason users\n> upgrade. Without them being noted anywhere more dense than the commit log,\n> it's very hard to figure out what improved for users. A halfway widely\n> applicable performance improvement is far more impactful than many of the\n> feature changes we do list in the release notes.\n\nmany++\n\n> For another, it's also very frustrating for developers that focus on\n> performance. The reticence to note their work, while noting other, far\n> smaller, things in the release notes, pretty much tells us that our work isn't\n> valued.\n\nagreed\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 08:09:56 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 11:48 PM Andres Freund <[email protected]> wrote:\n>\n> On 2024-05-15 10:38:20 +0200, Alvaro Herrera wrote:\n> > I disagree with this. IMO the impact of the Sawada/Naylor change is\n> > likely to be enormous for people with large tables and large numbers of\n> > tuples to clean up (I know we've had a number of customers in this\n> > situation, I can't imagine any Postgres service provider that doesn't).\n> > The fact that maintenance_work_mem is no longer capped at 1GB is very\n> > important and I think we should mention that explicitly in the release\n> > notes, as setting it higher could make a big difference in vacuum run\n> > times.\n>\n> +many.\n>\n> We're having this debate every release. I think the ongoing reticence to note\n> performance improvements in the release notes is hurting Postgres.\n>\n> For one, performance improvements are one of the prime reason users\n> upgrade. Without them being noted anywhere more dense than the commit log,\n> it's very hard to figure out what improved for users. 
A halfway widely\n> applicable performance improvement is far more impactful than many of the\n> feature changes we do list in the release notes.\n\nThe practical reason this matters to users is that they want to change\ntheir configuration or expectations in response to performance\nimprovements.\n\nAnd also, as Jelte mentions upthread, describing performance\nimprovements made each release in Postgres makes it clear that we are\nconsistently improving it.\n\n> For another, it's also very frustrating for developers that focus on\n> performance. The reticence to note their work, while noting other, far\n> smaller, things in the release notes, pretty much tells us that our work isn't\n> valued.\n\n+many\n\n- Melanie\n\n\n", "msg_date": "Thu, 16 May 2024 09:09:11 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 11:48 PM Andres Freund <[email protected]> wrote:\n> On 2024-05-15 10:38:20 +0200, Alvaro Herrera wrote:\n> > I disagree with this. IMO the impact of the Sawada/Naylor change is\n> > likely to be enormous for people with large tables and large numbers of\n> > tuples to clean up (I know we've had a number of customers in this\n> > situation, I can't imagine any Postgres service provider that doesn't).\n> > The fact that maintenance_work_mem is no longer capped at 1GB is very\n> > important and I think we should mention that explicitly in the release\n> > notes, as setting it higher could make a big difference in vacuum run\n> > times.\n>\n> +many.\n\nTIDStore/the lifting of the maintenance_work_mem cap is likely to make\nthe performance of VACUUM a lot more predictable, overall. While most\nVACUUM operations don't hit the limit, the limit is disproportionately\ninvolved in cases where (for whatever reason) vacuuming becomes a long\nand painful process. Even if you as a user never run into such a\nproblem, you still spend time worrying about it, and/or taking\nmeasures to make sure it doesn't affect you.\n\nThe justification for not including mention of these items is that\nthey're not very relevant to users. I find that hard to square with\nwhat does get included. For example, the \"Source Code\" section is full\nof highly niche items. Items that are low impact, even for users\nthat'll benefit the most. Also, \"Monitoring\" often mentions monitoring\nimprovements that expose low-level implementation details (e.g. SLRU\nstatistics), even though there's a good chance that Bruce wouldn't\ninclude an item for some improvement to the SLRU subsystem itself.\n\nIf somebody puts in an enormous amount of effort to get a big\nperformance improvement over the line, then ISTM that that effort is a\nuseful signal when the time comes to write the release notes (at least\nup to a point). For example, Masahiko and John spent about 2 years on\nthe TIDStore thing, on and off. These things do not happen in a vacuum\n(no pun intended). Common sense tells me that they went to those\nlengths precisely because they understood that it very much was\nrelevant to users. That belief would have been reinforced by both\nexperience, and by discussion on the list during the development of\nthe feature.\n\nTo be fair to Bruce, it probably really is true that most individual\nusers won't care about (say) TIDStore. 
But it's probably also true\nthat most individual users don't care about the release notes, or at\nmost skim the major items.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 16 May 2024 10:55:25 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "\nOn 2024-05-14 Tu 20:39, Bruce Momjian wrote:\n> On Sat, May 11, 2024 at 03:32:55PM -0400, Andrew Dunstan wrote:\n>> On 2024-05-09 Th 00:03, Bruce Momjian wrote:\n>>> I have committed the first draft of the PG 17 release notes; you can\n>>> see the results here:\n>>>\n>>> \thttps://momjian.us/pgsql_docs/release-17.html\n>>>\n>>> It will be improved until the final release. The item count is 188,\n>>> which is similar to recent releases:\n>>>\n>>> \trelease-10: 189\n>>> \trelease-11: 170\n>>> \trelease-12: 180\n>>> \trelease-13: 178\n>>> \trelease-14: 220\n>>> \trelease-15: 184\n>>> \trelease-16: 206\n>>> \trelease-17: 188\n>>>\n>>> I welcome feedback. For some reason it was an easier job than usual.\n>>\n>> I don't like blowing my own horn but I feel commit 3311ea86ed \"Introduce a\n>> non-recursive JSON parser\" should be in the release notes. This isn't\n>> something that's purely internal, but it could be used by an extension or a\n>> client program to parse JSON documents that are too large to handle with the\n>> existing API.\n>>\n>> Maybe \"Introduce an incremental JSON parser\" would have been a better\n>> headline.\n> Well, this gets into a level of detail that is beyond the average\n> reader. I think at that level people will need to read the git logs or\n> review the code. Do we use it for anything yet?\n\n\nYes, certainly, it's used in handling backup manifests. Without it we \ncan't handle huge manifests. See commits ea7b4e9a2a and 222e11a10a.\n\nOther uses are in the works.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 11:50:20 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n>\n\nThis thread mentioned performance.\nactually this[1] refactored some interval aggregation related functions,\nwhich will make these two aggregate function: avg(interval), sum(interval)\nrun faster, especially avg(interval).\nsee [2].\nwell, I guess, this is a kind of niche performance improvement to be\nmentioned separately.\n\n\nthese 3 items need to be removed, because of\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=8aee330af55d8a759b2b73f5a771d9d34a7b887f\n\n>> Add stratnum GiST support function (Paul A. Jungwirth)\n\n>> Allow foreign keys to reference WITHOUT OVERLAPS primary keys (Paul A. Jungwirth)\n>> The keyword PERIOD is used for this purpose.\n\n>> Allow PRIMARY KEY and UNIQUE constraints to use WITHOUT OVERLAPS for non-overlapping exclusion constraints (Paul A. 
Jungwirth)\n\n\n[1] https://git.postgresql.org/cgit/postgresql.git/commit/?id=519fc1bd9e9d7b408903e44f55f83f6db30742b7\n[2] https://www.postgresql.org/message-id/CAEZATCUJ0xjyQUL7SHKxJ5a%2BDm5pjoq-WO3NtkDLi6c76rh58w%40mail.gmail.com\n\n\n", "msg_date": "Fri, 17 May 2024 21:22:59 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "\tBruce Momjian wrote:\n\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n> \n> \thttps://momjian.us/pgsql_docs/release-17.html\n\nAbout the changes in collations:\n\n<quote>\n Create a \"builtin\" collation provider similar to libc's C locale\n (Jeff Davis)\n\n It uses a \"C\" locale which is identical but independent of\n libc, but it allows the use of non-\"C\" collations like \"en_US\"\n and \"C.UTF-8\" with the \"C\" locale, which libc does not. MORE?\n</quote>\n\nThe new builtin provider has two collations:\n* ucs_basic which is 100% identical to \"C\". It was introduced\nseveral versions ago and the v17 novelty is simply to change\nits pg_collation.collprovider from 'c' to 'b'.\n\n* pg_c_utf8 which sorts like \"C\" but is Unicode-aware for\nthe rest, which makes it quite different from \"C\".\nIt's also different from the other UTF-8 collations that could\nbe used up to v17 in that it does not depend on an external\nlibrary, making it free from the collation OS-upgrade risks.\n\nThe part that is concretely of interest to users is the introduction\nof pg_c_utf8. As described in [1]:\n\n<quote>\npg_c_utf8\n\n This collation sorts by Unicode code point values rather than\n natural language order. For the functions lower, initcap, and\n upper, it uses Unicode simple case mapping. For pattern\n matching (including regular expressions), it uses the POSIX\n Compatible variant of Unicode Compatibility Properties. Behavior\n is efficient and stable within a Postgres major version. This\n collation is only available for encoding UTF8.\n</quote>\n\nI'd suggest that the relnote entry should be more like a condensed\nversion of that description, without mentioning en_US or C.UTF-8,\nwhose existence and semantics are OS-dependent, contrary to pg_c_utf8.\n\n\n[1] https://www.postgresql.org/docs/devel/collation.html\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 17 May 2024 15:42:44 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 2024-05-09 at 00:03 -0400, Bruce Momjian wrote:\n> I have committed the first draft of the PG 17 release notes;  you can\n> see the results here:\n> \n>         https://momjian.us/pgsql_docs/release-17.html\n\nFor this item:\n\n Create a \"builtin\" collation provider similar to libc's C\n locale (Jeff Davis)\n\n It uses a \"C\" locale which is identical but independent of\n libc, but it allows the use of non-\"C\" collations like \"en_US\"\n and \"C.UTF-8\" with the \"C\" locale, which libc does not. MORE? \n\nI suggest something more like:\n\n New, platform-independent \"builtin\" collation\n provider. (Jeff Davis)\n\n Currently, it offers the \"C\" and \"C.UTF-8\" locales. 
The\n \"C.UTF-8\" locale combines stable and fast code point order\n collation with Unicode character semantics.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 17 May 2024 13:30:03 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 16, 2024 at 03:35:17PM +1200, David Rowley wrote:\n> On Thu, 16 May 2024 at 14:48, Bruce Momjian <[email protected]> wrote:\n> >\n> > On Wed, May 15, 2024 at 09:13:14AM -0400, Melanie Plageman wrote:\n> > > Also +1 on the Sawada/Naylor change being on the highlight section of\n> > > the release (as David suggested upthread).\n> >\n> > Agreed, I went with the attached applied patch.\n> \n> +Allow vacuum to more efficiently store tuple references and remove\n> its memory limit (Masahiko Sawada, John Naylor)\n> +</para>\n> \n> I don't want it to seem like I'm splitting hairs, but I'd drop the \"\n> and remove its memory limit\"\n> \n> +<para>\n> +Specifically, maintenance_work_mem and autovacuum_work_mem can now be\n> configured to use more than one gigabyte of memory. WAL traffic\n> caused by vacuum is also more compact.\n> \n> I'd say the first sentence above should be written as:\n> \n> \"Additionally, vacuum no longer silently imposes a 1GB tuple reference\n> limit even when maintenance_work_mem or autovacuum_work_mem are set to\n> higher values\"\n> \n> It's not \"Specifically\" as the \"more efficiently store tuple\n> references\" isn't the same thing as removing the 1GB cap. Also, there\n> was never a restriction in configuring maintenance_work_mem or\n> autovacuum_work_mem to values higher than 1GB. The restriction was\n> that vacuum was unable to utilize anything more than that.\n\nSlightly adjusted wording patch attached and applied.\n\nMy deep apologies for the delay in addressing this. I should have done\nit sooner.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Sat, 18 May 2024 10:40:50 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 15, 2024 at 08:48:02PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-05-15 10:38:20 +0200, Alvaro Herrera wrote:\n> > I disagree with this. IMO the impact of the Sawada/Naylor change is\n> > likely to be enormous for people with large tables and large numbers of\n> > tuples to clean up (I know we've had a number of customers in this\n> > situation, I can't imagine any Postgres service provider that doesn't).\n> > The fact that maintenance_work_mem is no longer capped at 1GB is very\n> > important and I think we should mention that explicitly in the release\n> > notes, as setting it higher could make a big difference in vacuum run\n> > times.\n> \n> +many.\n> \n> We're having this debate every release. I think the ongoing reticence to note\n> performance improvements in the release notes is hurting Postgres.\n> \n> For one, performance improvements are one of the prime reason users\n> upgrade. Without them being noted anywhere more dense than the commit log,\n> it's very hard to figure out what improved for users. A halfway widely\n> applicable performance improvement is far more impactful than many of the\n> feature changes we do list in the release notes.\n\nI agree the impact of performance improvements are often greater than\nthe average release note item. 
However, if people expect Postgres to be\nfaster, is it important for them to know _why_ it is faster?\n\nIf we add a new flag to a command, people will want to know about it so\nthey can make use of it, or if there is a performance improvement that\nallows new workloads, they will want to know about that too so they can\nconsider those workloads.\n\nOn the flip side, a performance improvement that makes everything 10%\nfaster has little behavioral change for users, and in fact I think we\nget ~6% faster in every major release.\n\n> For another, it's also very frustrating for developers that focus on\n> performance. The reticence to note their work, while noting other, far\n> smaller, things in the release notes, pretty much tells us that our work isn't\n> valued.\n\nYes, but are we willing to add text that every user will have to read\njust for this purpose?\n\nOne think we _could_ do is to create a generic performance release note\nitem saying performance has been improved in the following areas, with\nno more details, but we can list the authors on the item.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 18 May 2024 10:59:47 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 16, 2024 at 09:09:11AM -0400, Melanie Plageman wrote:\n> On Wed, May 15, 2024 at 11:48 PM Andres Freund <[email protected]> wrote:\n> >\n> > On 2024-05-15 10:38:20 +0200, Alvaro Herrera wrote:\n> > > I disagree with this. IMO the impact of the Sawada/Naylor change is\n> > > likely to be enormous for people with large tables and large numbers of\n> > > tuples to clean up (I know we've had a number of customers in this\n> > > situation, I can't imagine any Postgres service provider that doesn't).\n> > > The fact that maintenance_work_mem is no longer capped at 1GB is very\n> > > important and I think we should mention that explicitly in the release\n> > > notes, as setting it higher could make a big difference in vacuum run\n> > > times.\n> >\n> > +many.\n> >\n> > We're having this debate every release. I think the ongoing reticence to note\n> > performance improvements in the release notes is hurting Postgres.\n> >\n> > For one, performance improvements are one of the prime reason users\n> > upgrade. Without them being noted anywhere more dense than the commit log,\n> > it's very hard to figure out what improved for users. A halfway widely\n> > applicable performance improvement is far more impactful than many of the\n> > feature changes we do list in the release notes.\n> \n> The practical reason this matters to users is that they want to change\n> their configuration or expectations in response to performance\n> improvements.\n> \n> And also, as Jelte mentions upthread, describing performance\n> improvements made each release in Postgres makes it clear that we are\n> consistently improving it.\n> \n> > For another, it's also very frustrating for developers that focus on\n> > performance. The reticence to note their work, while noting other, far\n> > smaller, things in the release notes, pretty much tells us that our work isn't\n> > valued.\n> \n> +many\n\nPlease see the email I just posted. There are three goals we have to\nadjust for:\n\n1. short release notes so they are readable\n2. giving people credit for performance improvements\n3. 
showing people Postgres cares about performance\n\nI would like to achieve 2 & 3 without harming #1. My experience is if I\nam reading a long document, and I get to a section where I start to\nwonder, \"Why should I care about this?\", I start to skim the rest of\nthe document. I am particularly critical if I start to wonder, \"Why\ndoes the author _think_ I should care about this?\" becasue it feels like\nthe author is writing for him/herself and not the audience.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 18 May 2024 11:13:54 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 16, 2024 at 04:29:38PM +0800, jian he wrote:\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n> \n> >> Add jsonpath methods to convert JSON values to different data types (Jeevan Chalke)\n> >> The jsonpath methods are .bigint(), .boolean(), .date(), .decimal([precision [, scale]]), .integer(), .number(), .string(), .time(), .time_tz(), .timestamp(), and .timestamp_tz().\n> \n> I think it's slightly incorrect.\n> \n> for example:\n> select jsonb_path_query('\"2023-08-15\"', '$.date()');\n> I think it does is trying to validate json value \"2023-08-15\" can be a\n> date value, if so, print json string out, else error out.\n> \n> \n> \"convert JSON values to different data types\"\n> meaning that we are converting json values to another data type, date?\n\nI see your point. I have reworded it to be:\n\n\tAdd jsonpath methods to convert JSON values to other JSON data\n\ttypes (Jeevan Chalke)\n\nDoes that help? I think your example is causing confusion because once\nJSON values enter the SQL data type space, they are strings.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 18 May 2024 12:11:38 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 16, 2024 at 11:50:20AM -0400, Andrew Dunstan wrote:\n> > Maybe \"Introduce an incremental JSON parser\" would have been a better\n> > > headline.\n> > Well, this gets into a level of detail that is beyond the average\n> > reader. I think at that level people will need to read the git logs or\n> > review the code. Do we use it for anything yet?\n> \n> \n> Yes, certainly, it's used in handling backup manifests. Without it we can't\n> handle huge manifests. 
See commits ea7b4e9a2a and 222e11a10a.\n> \n> Other uses are in the works.\n\nOkay, added in the attached applied patch.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Sat, 18 May 2024 12:50:53 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "\nOn 2024-05-18 Sa 12:50, Bruce Momjian wrote:\n> On Thu, May 16, 2024 at 11:50:20AM -0400, Andrew Dunstan wrote:\n>>> Maybe \"Introduce an incremental JSON parser\" would have been a better\n>>>> headline.\n>>> Well, this gets into a level of detail that is beyond the average\n>>> reader. I think at that level people will need to read the git logs or\n>>> review the code. Do we use it for anything yet?\n>>\n>> Yes, certainly, it's used in handling backup manifests. Without it we can't\n>> handle huge manifests. See commits ea7b4e9a2a and 222e11a10a.\n>>\n>> Other uses are in the works.\n> Okay, added in the attached applied patch.\n>\n\nThanks\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 18 May 2024 16:37:54 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 17, 2024 at 09:22:59PM +0800, jian he wrote:\n> On Thu, May 9, 2024 at 12:04 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> >\n> > It will be improved until the final release. The item count is 188,\n> > which is similar to recent releases:\n> >\n> \n> This thread mentioned performance.\n> actually this[1] refactored some interval aggregation related functions,\n> which will make these two aggregate function: avg(interval), sum(interval)\n> run faster, especially avg(interval).\n> see [2].\n> well, I guess, this is a kind of niche performance improvement to be\n> mentioned separately.\n> \n> \n> these 3 items need to be removed, because of\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=8aee330af55d8a759b2b73f5a771d9d34a7b887f\n> \n> >> Add stratnum GiST support function (Paul A. Jungwirth)\n> \n> >> Allow foreign keys to reference WITHOUT OVERLAPS primary keys (Paul A. Jungwirth)\n> >> The keyword PERIOD is used for this purpose.\n> \n> >> Allow PRIMARY KEY and UNIQUE constraints to use WITHOUT OVERLAPS for non-overlapping exclusion constraints (Paul A. 
Jungwirth)\n> \n> \n> [1] https://git.postgresql.org/cgit/postgresql.git/commit/?id=519fc1bd9e9d7b408903e44f55f83f6db30742b7\n> [2] https://www.postgresql.org/message-id/CAEZATCUJ0xjyQUL7SHKxJ5a%2BDm5pjoq-WO3NtkDLi6c76rh58w%40mail.gmail.com\n\nAgreed, I have applied the attached patch to make the release notes\ncurrent.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Sat, 18 May 2024 17:37:29 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 17, 2024 at 01:30:03PM -0700, Jeff Davis wrote:\n> On Thu, 2024-05-09 at 00:03 -0400, Bruce Momjian wrote:\n> > I have committed the first draft of the PG 17 release notes;  you can\n> > see the results here:\n> > \n> >         https://momjian.us/pgsql_docs/release-17.html\n> \n> For this item:\n> \n> Create a \"builtin\" collation provider similar to libc's C\n> locale (Jeff Davis)\n> \n> It uses a \"C\" locale which is identical but independent of\n> libc, but it allows the use of non-\"C\" collations like \"en_US\"\n> and \"C.UTF-8\" with the \"C\" locale, which libc does not. MORE? \n> \n> I suggest something more like:\n> \n> New, platform-independent \"builtin\" collation\n> provider. (Jeff Davis)\n> \n> Currently, it offers the \"C\" and \"C.UTF-8\" locales. The\n> \"C.UTF-8\" locale combines stable and fast code point order\n> collation with Unicode character semantics.\n\nOkay, I went with the attached applied patch. Adjustments?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Sat, 18 May 2024 17:51:56 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sun, 19 May 2024 at 02:40, Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, May 16, 2024 at 03:35:17PM +1200, David Rowley wrote:\n> > \"Additionally, vacuum no longer silently imposes a 1GB tuple reference\n> > limit even when maintenance_work_mem or autovacuum_work_mem are set to\n> > higher values\"\n\n> Slightly adjusted wording patch attached and applied.\n\nThanks for adjusting.\n\nIt's a minor detail, but I'll mention it because you went to the\neffort to adjust it away from what I'd written...\n\nI didn't make a random choice to use \"or\" between the two GUCs.\nChanging it to \"and\", IMO, isn't an improvement. Using \"and\" implies\nthat the silent limited was only imposed when both of these GUCs were\nset >= 1GB. That's not true. 
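To spell that out with a quick sketch (the table name and the values here are made up, purely for illustration):\n\n  -- autovacuum workers use autovacuum_work_mem whenever it is not -1\n  ALTER SYSTEM SET autovacuum_work_mem = '2GB';   -- needs a config reload to take effect\n\n  -- a manual VACUUM only ever consults maintenance_work_mem\n  SET maintenance_work_mem = '4GB';\n  VACUUM (VERBOSE) my_table;\n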
For the case we're talking about here, if\nautovacuum_work_mem is set to anything apart from -1 then the value of\nmaintenance_work_mem does not matter.\n\nDavid\n\n\n", "msg_date": "Sun, 19 May 2024 15:53:38 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sun, May 19, 2024 at 03:53:38PM +1200, David Rowley wrote:\n> On Sun, 19 May 2024 at 02:40, Bruce Momjian <[email protected]> wrote:\n> >\n> > On Thu, May 16, 2024 at 03:35:17PM +1200, David Rowley wrote:\n> > > \"Additionally, vacuum no longer silently imposes a 1GB tuple reference\n> > > limit even when maintenance_work_mem or autovacuum_work_mem are set to\n> > > higher values\"\n> \n> > Slightly adjusted wording patch attached and applied.\n> \n> Thanks for adjusting.\n> \n> It's a minor detail, but I'll mention it because you went to the\n> effort to adjust it away from what I'd written...\n\nTrue.\n\n> I didn't make a random choice to use \"or\" between the two GUCs.\n> Changing it to \"and\", IMO, isn't an improvement. Using \"and\" implies\n> that the silent limited was only imposed when both of these GUCs were\n> set >= 1GB. That's not true. For the case we're talking about here, if\n> autovacuum_work_mem is set to anything apart from -1 then the value of\n> maintenance_work_mem does not matter.\n\nOkay, changed to \"or\".\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 19 May 2024 20:12:31 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi Bruce, thanks for doing this again!\n\nI'm a bit late to this discussion -- there's been a bit of churn in\nthe vacuum items, and some streams got crossed along the way. I've\nattached an attempt to rectify this.", "msg_date": "Mon, 20 May 2024 13:23:02 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Mon, May 20, 2024 at 01:23:02PM +0700, John Naylor wrote:\n> Hi Bruce, thanks for doing this again!\n> \n> I'm a bit late to this discussion -- there's been a bit of churn in\n> the vacuum items, and some streams got crossed along the way. I've\n> attached an attempt to rectify this.\n\nAgreed, patch applied, thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 20 May 2024 09:37:15 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, May 18, 2024 at 11:13 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, May 16, 2024 at 09:09:11AM -0400, Melanie Plageman wrote:\n> > On Wed, May 15, 2024 at 11:48 PM Andres Freund <[email protected]> wrote:\n> > >\n> > > On 2024-05-15 10:38:20 +0200, Alvaro Herrera wrote:\n> > > > I disagree with this. 
IMO the impact of the Sawada/Naylor change is\n> > > > likely to be enormous for people with large tables and large numbers of\n> > > > tuples to clean up (I know we've had a number of customers in this\n> > > > situation, I can't imagine any Postgres service provider that doesn't).\n> > > > The fact that maintenance_work_mem is no longer capped at 1GB is very\n> > > > important and I think we should mention that explicitly in the release\n> > > > notes, as setting it higher could make a big difference in vacuum run\n> > > > times.\n> > >\n> > > +many.\n> > >\n> > > We're having this debate every release. I think the ongoing reticence to note\n> > > performance improvements in the release notes is hurting Postgres.\n> > >\n> > > For one, performance improvements are one of the prime reason users\n> > > upgrade. Without them being noted anywhere more dense than the commit log,\n> > > it's very hard to figure out what improved for users. A halfway widely\n> > > applicable performance improvement is far more impactful than many of the\n> > > feature changes we do list in the release notes.\n> >\n> > The practical reason this matters to users is that they want to change\n> > their configuration or expectations in response to performance\n> > improvements.\n> >\n> > And also, as Jelte mentions upthread, describing performance\n> > improvements made each release in Postgres makes it clear that we are\n> > consistently improving it.\n> >\n> > > For another, it's also very frustrating for developers that focus on\n> > > performance. The reticence to note their work, while noting other, far\n> > > smaller, things in the release notes, pretty much tells us that our work isn't\n> > > valued.\n> >\n> > +many\n>\n> Please see the email I just posted. There are three goals we have to\n> adjust for:\n>\n> 1. short release notes so they are readable\n> 2. giving people credit for performance improvements\n> 3. showing people Postgres cares about performance\n\nI agree with all three of these goals. I would even add to 3 \"show\nusers Postgres is addressing their performance complaints\". That in\nparticular makes me less excited about having a \"generic performance\nrelease note item saying performance has been improved in the\nfollowing areas\" (from your other email). I think that describing the\nspecific performance improvements is required to 1) allow users to\nchange expectations and configurations to take advantage of the\nperformance enhancements 2) ensure users know that their performance\nconcerns are being addressed.\n\n> I would like to achieve 2 & 3 without harming #1. My experience is if I\n> am reading a long document, and I get to a section where I start to\n> wonder, \"Why should I care about this?\", I start to skim the rest of\n> the document. I am particularly critical if I start to wonder, \"Why\n> does the author _think_ I should care about this?\" becasue it feels like\n> the author is writing for him/herself and not the audience.\n\nI see what you are saying. We don't want to just end up with the whole\ngit log in the release notes. However, we know that not all users will\ncare about the same features. 
As someone said somewhere else in this\nthread, presumably hackers spend time on features because some users\nwant them.\n\n- Melanie\n\n\n", "msg_date": "Mon, 20 May 2024 14:35:37 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Mon, May 20, 2024 at 02:35:37PM -0400, Melanie Plageman wrote:\n> On Sat, May 18, 2024 at 11:13 AM Bruce Momjian <[email protected]> wrote:\n> > Please see the email I just posted. There are three goals we have to\n> > adjust for:\n> >\n> > 1. short release notes so they are readable\n> > 2. giving people credit for performance improvements\n> > 3. showing people Postgres cares about performance\n> \n> I agree with all three of these goals. I would even add to 3 \"show\n> users Postgres is addressing their performance complaints\". That in\n> particular makes me less excited about having a \"generic performance\n> release note item saying performance has been improved in the\n> following areas\" (from your other email). I think that describing the\n> specific performance improvements is required to 1) allow users to\n> change expectations and configurations to take advantage of the\n> performance enhancements 2) ensure users know that their performance\n> concerns are being addressed.\n\nWell, as you can see, doing #2 & #3 works against accomplishing #1.\n\n> > I would like to achieve 2 & 3 without harming #1. My experience is if I\n> > am reading a long document, and I get to a section where I start to\n> > wonder, \"Why should I care about this?\", I start to skim the rest of\n> > the document. I am particularly critical if I start to wonder, \"Why\n> > does the author _think_ I should care about this?\" becasue it feels like\n> > the author is writing for him/herself and not the audience.\n> \n> I see what you are saying. We don't want to just end up with the whole\n> git log in the release notes. However, we know that not all users will\n> care about the same features. As someone said somewhere else in this\n> thread, presumably hackers spend time on features because some users\n> want them.\n\nI think we need as a separate section about performance improvements\nthat don't affect specific workloads. Peter Eisentraut created an\nAcknowledgements section at the bottom of the release notes, similar to\n#2 above, so maybe someone else can add a performance internals section\ntoo.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 20 May 2024 14:40:42 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Mon, May 20, 2024 at 9:37 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Mon, May 20, 2024 at 01:23:02PM +0700, John Naylor wrote:\n> > Hi Bruce, thanks for doing this again!\n> >\n> > I'm a bit late to this discussion -- there's been a bit of churn in\n> > the vacuum items, and some streams got crossed along the way. I've\n> > attached an attempt to rectify this.\n>\n> Agreed, patch applied, thanks.\n\nIf \"Allow vacuum to more efficiently remove and freeze tuples\" stays\nin the release notes, I would add Heikki's name. He wasn't listed as a\nco-author on all of the commits that were part of this feature, but he\nmade a substantial investment in it and should be listed, IMO. 
Patch\nattached.\n\n- Melanie", "msg_date": "Mon, 20 May 2024 14:47:28 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, 2024-05-18 at 17:51 -0400, Bruce Momjian wrote:\n> Okay, I went with the attached applied patch.  Adjustments?\n\nI think it should have more emphasis on the actual new feature: a\nplatform-independent builtin collation provider and the C.UTF-8 locale.\n\nThe user can look at the documentation for comparison with libc.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 20 May 2024 11:48:09 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 1:04 PM Bruce Momjian <[email protected]> wrote:\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n>\n> release-10: 189\n> release-11: 170\n> release-12: 180\n> release-13: 178\n> release-14: 220\n> release-15: 184\n> release-16: 206\n> release-17: 188\n>\n> I welcome feedback. For some reason it was an easier job than usual.\n\nThanks Bruce for working on this as always.\n\nFailed to notice when I read the notes before:\n\n<listitem>\n<para>\nAdd SQL/JSON constructor functions JSON(), JSON_SCALAR(), and\nJSON_SERIALIZE() (Amit Langote)\n</para>\n</listitem>\n\nShould be:\n\n<listitem>\n<para>\nAdd SQL/JSON constructor functions JSON(), JSON_SCALAR(), and\nJSON_SERIALIZE() (Nikita Glukhov, Teodor Sigaev, Oleg Bartunov,\nAlexander Korotkov, Andrew Dunstan, Amit Langote)\n</para>\n</listitem>\n\nPatch attached.\n\n-- \nThanks, Amit Langote", "msg_date": "Tue, 21 May 2024 11:20:02 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn 2024-05-18 10:59:47 -0400, Bruce Momjian wrote:\n> On Wed, May 15, 2024 at 08:48:02PM -0700, Andres Freund wrote:\n> > +many.\n> >\n> > We're having this debate every release. I think the ongoing reticence to note\n> > performance improvements in the release notes is hurting Postgres.\n> >\n> > For one, performance improvements are one of the prime reason users\n> > upgrade. Without them being noted anywhere more dense than the commit log,\n> > it's very hard to figure out what improved for users. A halfway widely\n> > applicable performance improvement is far more impactful than many of the\n> > feature changes we do list in the release notes.\n>\n> I agree the impact of performance improvements are often greater than\n> the average release note item. However, if people expect Postgres to be\n> faster, is it important for them to know _why_ it is faster?\n\nYes, it very often is. Performance improvements typically aren't \"make\neverything 3% faster\", they're more \"make this special thing 20%\nfaster\". 
Without knowing what got faster, users don't know if\na) the upgrade will improve their production situation\nb) they need to change something to take advantage of the improvement\n\n\n> On the flip side, a performance improvement that makes everything 10%\n> faster has little behavioral change for users, and in fact I think we\n> get ~6% faster in every major release.\n\nI cannot recall many \"make everything faster\" improvements, if any.\n\nAnd even if it's \"make everything faster\" - that's useful for users to know,\nit might solve their production problem! It's also good for PR.\n\n\nGiven how expensive postgres upgrades still are, we can't expect production\nworkloads to upgrade to every major version. The set of performance\nimprovements and feature additions between major versions can help users make\nan informed decision.\n\n\nAlso, the release notes are also not just important to users. I often go back\nand look in the release notes to see when some important change was made,\nbecause sometimes it's harder to find it in the git log, due to sheer\nvolume. And even just keeping up with all the changes between two releases is\nhard, it's useful to be able to read the release notes and see what happened.\n\n\n> > For another, it's also very frustrating for developers that focus on\n> > performance. The reticence to note their work, while noting other, far\n> > smaller, things in the release notes, pretty much tells us that our work isn't\n> > valued.\n>\n> Yes, but are we willing to add text that every user will have to read\n> just for this purpose?\n\nOf course it's a tradeoff. We shouldn't verbosely note down every small\nchange just because of the recognition, that'd make the release notes\nunreadable. And it'd just duplicate the commit log. But that's not the same as\ndefaulting to not noting performance improvements, even if the performance\nimprovement is more impactful than many other features that are noted.\n\n\n> One think we _could_ do is to create a generic performance release note\n> item saying performance has been improved in the following areas, with\n> no more details, but we can list the authors on the item.\n\nTo me that's the \"General Performance\" section. If somebody reading the\nrelease notes doesn't care about performance, they can just skip that section\n([1]). I don't see why we wouldn't want to include the same level of detail\nas for other changes.\n\nGreetings,\n\nAndres Freund\n\n[1] I've wondered if we should have one more level of TOC on the release note\npage, so it's easier to jump to specific sections.\n\n\n", "msg_date": "Tue, 21 May 2024 09:27:20 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn 2024-05-18 11:13:54 -0400, Bruce Momjian wrote:\n> Please see the email I just posted. There are three goals we have to\n> adjust for:\n> \n> 1. short release notes so they are readable\n> 2. giving people credit for performance improvements\n> 3. showing people Postgres cares about performance\n> \n> I would like to achieve 2 & 3 without harming #1. My experience is if I\n> am reading a long document, and I get to a section where I start to\n> wonder, \"Why should I care about this?\", I start to skim the rest of\n> the document.\n\nI agree keeping things reasonably short is important. But I don't think you're
But I don't think you're\nevenly applying it as a goal.\n\nJust skimming the notes from the end, I see\n- an 8 entries long pg_stat_statements section\n- multiple entries about \"Create custom wait events for ...\"\n- three entries about adding --all to {reindexdb,vacuumdb,clusterdb}.\n- an entry about adding long options to pg_archivecleanup\n- two entries about grantable maintenance rights, once via pg_maintain, once\n per-table\n- separate entries about pg_stat_reset_slru(), pg_stat_reset_shared(\"slru\"),\n\nIf you're concerned about brevity, we can make things shorter without skipping\nover most performance imporvements.\n\n\n> I am particularly critical if I start to wonder, \"Why\n> does the author _think_ I should care about this?\" becasue it feels like\n> the author is writing for him/herself and not the audience.\n\nFWIW, I think it's a good thing for somebody other than the author to have a\nhand in writing a release notes entry for a change. The primary author(s) are\noften too deep into some issue to have a good view of the right level of\ndetail and understandability.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 May 2024 09:40:28 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn 2024-05-21 09:27:20 -0700, Andres Freund wrote:\n> Also, the release notes are also not just important to users. I often go back\n> and look in the release notes to see when some some important change was made,\n> because sometimes it's harder to find it in the git log, due to sheer\n> volume. And even just keeping up with all the changes between two releases is\n> hard, it's useful to be able to read the release notes and see what happened.\n>\n> [...]\n>\n> [1] I've wondered if we should have one more level of TOC on the release note\n> page, so it's easier to jump to specific sections.\n\nWhich reminds me: Eventually I'd like to add links to the most important\ncommits related to release note entries. We already do much of the work of\nbuilding that list of commits for each entry. That'd allow a reader to find\nmore details if interested.\n\nRight one either has to open the sgml file (which no user will know to do), or\nfind the git entries manually. The latter of which is often hard, because the\ngit commits often will use different wording etc.\n\nAdmittedly doing so within the constraints of docbook and not wanting to\noverly decrease density (both in .sgml and the resulting html) isn't a trivial\ntask.\n\n\nThat's entirely independent of my concern around noting performance\nimprovements in the release notes, of course.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 May 2024 09:51:09 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-May-21, Andres Freund wrote:\n\n> Which reminds me: Eventually I'd like to add links to the most important\n> commits related to release note entries. We already do much of the work of\n> building that list of commits for each entry. That'd allow a reader to find\n> more details if interested.\n\n+1. 
Several times I have had to open the SGML to fetch a commit ID and\nbuild a URL to provide to someone.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n", "msg_date": "Tue, 21 May 2024 18:55:01 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 12:27 PM Andres Freund <[email protected]> wrote:\n> > I agree the impact of performance improvements are often greater than\n> > the average release note item. However, if people expect Postgres to be\n> > faster, is it important for them to know _why_ it is faster?\n>\n> Yes, it very often is.\n\nIs it important for them to even read the release notes?\n\nBruce's arguments against listing performance items more often/with\ngreater prominence could just as easily be applied to other types of\nfeatures, in other areas. Performance is a feature (or a feature\ncategory) -- no better or worse than any other category of feature.\n\n> Performance improvements typically aren't \"make\n> everything 3% faster\", they're more \"make this special thing 20%\n> faster\". Without know what got faster, users don't know if\n> a) the upgrade will improve their production situation\n> b) they need to change something to take advantage of the improvement\n\nAnother important category of performance improvement is \"make the\nthing that was just unusable usable, for the first time ever\".\n\nSometimes the baseline is unreasonably slow, so an improvement\neffectively allows you as a user to do something that just wasn't\npossible on previous versions. Other times it's addressed at something\nthat was very scary, like VACUUMs that need multiple rounds of index\nvacuuming. Multiple rounds of index vacuuming are just woefully,\nhorribly inefficient, and are the single individual thing that can\nmake things far worse. Even if you didn't technically have that\nproblem before now, you did have the problem of having to worry about\nit. So the work in question has sanded-down this really nasty sharp\nedge. That's a substantial quality of life improvement for many users.\n\nIn short, many individual performance improvements are best thought of\nas qualitative improvements, rather than quantitative improvements.\n\nIt doesn't help that there is a kind of pressure to present them as\nquantitative improvements. For example, I was recently encouraged to\npresent my own Postgres 17 B-Tree work internally using some kind of\nheadline grabbing measure like \"6x faster\". That just seems silly to\nme. I can contrive a case where it's faster by an arbitrarily large\namount. Much like how a selective index scan can be arbitrarily faster\nthan a sequential scan. Again, a qualitative improvement.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 21 May 2024 13:06:43 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 12:27 PM Andres Freund <[email protected]> wrote:\n> To me that's the \"General Performance\" section. If somebody reading the\n> release notes doesn't care about performance, they can just skip that section\n> ([1]). 
I don't see why we wouldn't want to include the same level of detail\n> as for other changes.\n\nI'm relatively sure that we've had this argument in previous years and\nessentially everyone but Bruce has agreed with the idea that\nperformance changes ought to be treated the same as any other kind of\nimprovement. The difficulty is that Bruce is the one doing the release\nnotes. I think it might help if someone were willing to prepare a\npatch showing what they think specifically should be changed. Or maybe\nBruce would be willing to provide a list of all of the performance\nimprovements he doesn't think are worth release-noting or isn't sure\nhow to release-note, and someone else can then have a go at them.\n\nPersonally, I suspect that a part of the problem, other than the\ninevitable fact that the person doing the work has a perspective on\nhow the work should be done with which not everyone will agree, is\nthat a lot of performance changes have commit messages that don't\nreally explain what the user impact is. For instance, consider\n6dbb490261a6170a3fc3e326c6983ad63e795047 (\"Combine freezing and\npruning steps in VACUUM\"). It does actually say what the benefit is\n(\"That reduces the overall amount of WAL generated\") but the reader\ncould easily be left wondering whether that is really the selling\npoint. Does it also reduce CPU consumption? Is that more or less\nimportant than the WAL reduction? Was the WAL reduction the motivation\nfor the work? Is the WAL reduction significant enough that this is a\nfeature in its own right, or is this just preparatory to some other\nwork? These kinds of ambiguities can exist for any commit, not just\nperformance commits, but I bet that on average the problem is worse\nfor performance-related commits.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 May 2024 13:50:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 1:51 PM Robert Haas <[email protected]> wrote:\n>\n> On Tue, May 21, 2024 at 12:27 PM Andres Freund <[email protected]> wrote:\n> > To me that's the \"General Performance\" section. If somebody reading the\n> > release notes doesn't care about performance, they can just skip that section\n> > ([1]). I don't see why we wouldn't want to include the same level of detail\n> > as for other changes.\n>\n> I'm relatively sure that we've had this argument in previous years and\n> essentially everyone but Bruce has agreed with the idea that\n> performance changes ought to be treated the same as any other kind of\n> improvement. The difficulty is that Bruce is the one doing the release\n> notes. I think it might help if someone were willing to prepare a\n> patch showing what they think specifically should be changed. Or maybe\n> Bruce would be willing to provide a list of all of the performance\n> improvements he doesn't think are worth release-noting or isn't sure\n> how to release-note, and someone else can then have a go at them.\n>\n> Personally, I suspect that a part of the problem, other than the\n> inevitable fact that the person doing the work has a perspective on\n> how the work should be done with which not everyone will agree, is\n> that a lot of performance changes have commit messages that don't\n> really explain what the user impact is. For instance, consider\n> 6dbb490261a6170a3fc3e326c6983ad63e795047 (\"Combine freezing and\n> pruning steps in VACUUM\"). 
It does actually say what the benefit is\n> (\"That reduces the overall amount of WAL generated\") but the reader\n> could easily be left wondering whether that is really the selling\n> point. Does it also reduce CPU consumption? Is that more or less\n> important than the WAL reduction? Was the WAL reduction the motivation\n> for the work? Is the WAL reduction significant enough that this is a\n> feature in its own right, or is this just preparatory to some other\n> work? These kinds of ambiguities can exist for any commit, not just\n> performance commits, but I bet that on average the problem is worse\n> for performance-related commits.\n\nIn Postgres development, we break larger projects into smaller ones\nand then those smaller projects into multiple individual commits. Each\ncommit needs to stand alone and each subproject needs to have a\ndefensible benefit. One thing that is harder with performance-related\nwork than non-performance feature work is that there isn't always a\nfinal \"turn it on\" commit. For example, let's say you are adding a new\nview that tracks new stats of some kind. You do a bunch of refactoring\nand small subprojects to make it possible to add the view. Then the\nfinal commit that actually creates the view has obvious user value to\nwhoever is reading the log. For performance features, it doesn't\nalways work like this.\n\nFor the vacuum WAL volume reduction, there were a bunch of smaller\nprojects throughout the last development year that I worked on that\nwere committed by different people and with different individual\nbenefits. Some changes caused vacuum to do less visibility checks (so\nless CPU usage), some changed WAL format in a way that saves some\nspace, and some, like the commit you mention, make vacuum emit less\nWAL. That commit by itself doesn't contain all of the user benefits of\nthe whole project. I couldn't think of a good place to list all of the\ncommits together that were part of the same project. Perhaps you could\nargue that they were not in fact part of the same project and instead\nwere just small individual changes -- none of which are individually\nworth including in the release notes.\n\n- Melanie\n\n\n", "msg_date": "Tue, 21 May 2024 14:26:15 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 2:26 PM Melanie Plageman\n<[email protected]> wrote:\n> For the vacuum WAL volume reduction, there were a bunch of smaller\n> projects throughout the last development year that I worked on that\n> were committed by different people and with different individual\n> benefits. Some changes caused vacuum to do less visibility checks (so\n> less CPU usage), some changed WAL format in a way that saves some\n> space, and some, like the commit you mention, make vacuum emit less\n> WAL. That commit by itself doesn't contain all of the user benefits of\n> the whole project. I couldn't think of a good place to list all of the\n> commits together that were part of the same project. Perhaps you could\n> argue that they were not in fact part of the same project and instead\n> were just small individual changes -- none of which are individually\n> worth including in the release notes.\n\nYeah, I think a lot of projects have this problem in one way or\nanother, but I think it may be worse for performance-related projects.\n\nI wasn't intending to knock that particular commit, just to be clear,\nor the commit message. 
I'm just saying that sometimes summarizing the\ncommit log may not be as easy as we'd hope.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 May 2024 14:53:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn Thu, May 9, 2024 at 1:03 PM Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n>\n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n>\n\nI found a typo:\n\ns/pg_statstatement/pg_stat_statement/\n\nI've attached a patch to fix it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 22 May 2024 11:29:06 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-05-09 13:03, Bruce Momjian wrote:\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n> \n> \thttps://momjian.us/pgsql_docs/release-17.html\n> \n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n> \n> \trelease-10: 189\n> \trelease-11: 170\n> \trelease-12: 180\n> \trelease-13: 178\n> \trelease-14: 220\n> \trelease-15: 184\n> \trelease-16: 206\n> \trelease-17: 188\n> \n> I welcome feedback. For some reason it was an easier job than usual.\n\nThanks for working on this as always.\n\n<para>\nAllow pg_stat_reset_shared(\"slru\") to clear SLRU statistics (Atsushi \nTorikoshi)\n</para>\n\nConsidering someone may copy and paste this, 'slru' is better than \n\"slru\", isn't it?\nI also found an older release note[1] used single quotes for this like:\n\n Add pg_stat_reset_shared('bgwriter') to reset the cluster-wide shared \nstatistics for the background writer (Greg Smith)\n\n[1] https://www.postgresql.org/docs/release/9.0.0/\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Wed, 22 May 2024 21:25:41 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Mon, May 20, 2024 at 02:47:28PM -0400, Melanie Plageman wrote:\n> On Mon, May 20, 2024 at 9:37 AM Bruce Momjian <[email protected]> wrote:\n> >\n> > On Mon, May 20, 2024 at 01:23:02PM +0700, John Naylor wrote:\n> > > Hi Bruce, thanks for doing this again!\n> > >\n> > > I'm a bit late to this discussion -- there's been a bit of churn in\n> > > the vacuum items, and some streams got crossed along the way. I've\n> > > attached an attempt to rectify this.\n> >\n> > Agreed, patch applied, thanks.\n> \n> If \"Allow vacuum to more efficiently remove and freeze tuples\" stays\n> in the release notes, I would add Heikki's name. He wasn't listed as a\n> co-author on all of the commits that were part of this feature, but he\n> made a substantial investment in it and should be listed, IMO. 
Patch\n> attached.\n\nThanks, patch applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 17:59:20 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 09:27:20AM -0700, Andres Freund wrote:\n> On 2024-05-18 10:59:47 -0400, Bruce Momjian wrote:\n> > I agree the impact of performance improvements are often greater than\n> > the average release note item. However, if people expect Postgres to be\n> > faster, is it important for them to know _why_ it is faster?\n> \n> Yes, it very often is. Performance improvements typically aren't \"make\n> everything 3% faster\", they're more \"make this special thing 20%\n> faster\". Without know what got faster, users don't know if\n> a) the upgrade will improve their production situation\n> b) they need to change something to take advantage of the improvement\n\nYou might have seen in this thread, I do record commits that speed up\nworkloads that are user-visible, or specifically make new workloads\npossible. I assume that covers the items above, though I have to\ndetermine this from the commit message.\n\n> > On the flip side, a performance improvement that makes everything 10%\n> > faster has little behavioral change for users, and in fact I think we\n> > get ~6% faster in every major release.\n> \n> I cannot recall many \"make everything faster\" improvements, if any.\n> \n> And even if it's \"make everything faster\" - that's useful for users to know,\n> it might solve their production problem! It's also good for PR.\n\nAgain, it is down to having three goals for the release notes, and #1 is\nhaving it readable/short, and 2 & 3 are for PR and acknowledging authors.\n\n> Also, the release notes are also not just important to users. I often go back\n> and look in the release notes to see when some some important change was made,\n> because sometimes it's harder to find it in the git log, due to sheer\n> volume. And even just keeping up with all the changes between two releases is\n> hard, it's useful to be able to read the release notes and see what happened.\n\nWell, I would say we need some _other_ way to record and perhaps\nadvertise such changes.\n\n> > > For another, it's also very frustrating for developers that focus on\n> > > performance. The reticence to note their work, while noting other, far\n> > > smaller, things in the release notes, pretty much tells us that our work isn't\n> > > valued.\n> >\n> > Yes, but are we willing to add text that every user will have to read\n> > just for this purpose?\n> \n> Of course it's a tradeoff. We shouldn't verbosely note down every small\n> changes just because of the recognition, that'd make the release notes\n> unreadable. And it'd just duplicate the commit log. 
But that's not the same as\n> defaulting to not noting performance improvements, even if the performance\n> improvement is more impactful than many other features that are noted.\n\nAgain, see above, I do mention performance improvements, but they have\nto be user-visible or enable new workloads.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:04:09 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 09:51:09AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-05-21 09:27:20 -0700, Andres Freund wrote:\n> > Also, the release notes are also not just important to users. I often go back\n> > and look in the release notes to see when some some important change was made,\n> > because sometimes it's harder to find it in the git log, due to sheer\n> > volume. And even just keeping up with all the changes between two releases is\n> > hard, it's useful to be able to read the release notes and see what happened.\n> >\n> > [...]\n> >\n> > [1] I've wondered if we should have one more level of TOC on the release note\n> > page, so it's easier to jump to specific sections.\n> \n> Which reminds me: Eventually I'd like to add links to the most important\n> commits related to release note entries. We already do much of the work of\n> building that list of commits for each entry. That'd allow a reader to find\n> more details if interested.\n\nYes, it would be cool if they could mouse-over a graphic next to each\nrelease note item to get a popup to the commits.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:05:13 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 01:50:58PM -0400, Robert Haas wrote:\n> On Tue, May 21, 2024 at 12:27 PM Andres Freund <[email protected]> wrote:\n> > To me that's the \"General Performance\" section. If somebody reading the\n> > release notes doesn't care about performance, they can just skip that section\n> > ([1]). I don't see why we wouldn't want to include the same level of detail\n> > as for other changes.\n> \n> I'm relatively sure that we've had this argument in previous years and\n> essentially everyone but Bruce has agreed with the idea that\n> performance changes ought to be treated the same as any other kind of\n> improvement. The difficulty is that Bruce is the one doing the release\n> notes. I think it might help if someone were willing to prepare a\n> patch showing what they think specifically should be changed. Or maybe\n> Bruce would be willing to provide a list of all of the performance\n> improvements he doesn't think are worth release-noting or isn't sure\n> how to release-note, and someone else can then have a go at them.\n\nWell, developers do ask why their individual commits were not listed,\nand I repeat the same thing, as I have done in this thread multiple\ntimes. You can probably look over this thread to find them, or in\nprevious years.\n\nTo be clear, it is performance improvements that don't have user-visible\nchanges or enable new workloads that I skip listing. 
I will also note\nthat don't remember any user ever finding a performance boost, and then\ncoming to use and asking why we didn't list it --- this release note\nreview process seems to add all of those.\n\nMaybe adding a section called \"Internal Performance\". Here is our\nGeneral Performance contents:\n\n\tE.1.3.1.3. General Performance\n\t\n\t Allow vacuum to more efficiently remove and freeze tuples (Melanie\n\tPlageman)\n\t\n\t WAL traffic caused by vacuum is also more compact.\n\t\n\t Allow vacuum to more efficiently store tuple references (Masahiko\n\tSawada, John Naylor)\n\t\n\t Additionally, vacuum is no longer silently limited to one gigabyte\n\tof memory when maintenance_work_mem or autovacuum_work_mem are higher.\n\t\n\t Optimize vacuuming of relations with no indexes (Melanie Plageman)\n\t\n\t Increase default vacuum_buffer_usage_limit to 2MB (Thomas Munro)\n\t\n\t Improve performance when checking roles with many memberships\n\t(Nathan Bossart)\n\t\n\t Improve performance of heavily-contended WAL writes (Bharath\n\tRupireddy)\n\t\n\t Improve performance when transferring large blocks of data to a\n\tclient (Melih Mutlu)\n\t\n\t Allow the grouping of file system reads with the new system variable\n\tio_combine_limit (Thomas Munro, Andres Freund, Melanie Plageman, Nazir\n\tBilal Yavuz)\n\nDo we think more user-invisible changes should be in there?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:13:47 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 02:26:15PM -0400, Melanie Plageman wrote:\n> In Postgres development, we break larger projects into smaller ones\n> and then those smaller projects into multiple individual commits. Each\n> commit needs to stand alone and each subproject needs to have a\n> defensible benefit. One thing that is harder with performance-related\n> work than non-performance feature work is that there isn't always a\n> final \"turn it on\" commit. For example, let's say you are adding a new\n> view that tracks new stats of some kind. You do a bunch of refactoring\n> and small subprojects to make it possible to add the view. Then the\n> final commit that actually creates the view has obvious user value to\n> whoever is reading the log. For performance features, it doesn't\n> always work like this.\n> \n> For the vacuum WAL volume reduction, there were a bunch of smaller\n> projects throughout the last development year that I worked on that\n> were committed by different people and with different individual\n> benefits. Some changes caused vacuum to do less visibility checks (so\n> less CPU usage), some changed WAL format in a way that saves some\n> space, and some, like the commit you mention, make vacuum emit less\n> WAL. That commit by itself doesn't contain all of the user benefits of\n> the whole project. I couldn't think of a good place to list all of the\n> commits together that were part of the same project. Perhaps you could\n> argue that they were not in fact part of the same project and instead\n> were just small individual changes -- none of which are individually\n> worth including in the release notes.\n\nI try and group them, but I am sure imperfectly. 
It is very true that\ninfrastucture changes that enable later commits are often missed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:15:04 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 09:40:28AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-05-18 11:13:54 -0400, Bruce Momjian wrote:\n> > Please see the email I just posted. There are three goals we have to\n> > adjust for:\n> > \n> > 1. short release notes so they are readable\n> > 2. giving people credit for performance improvements\n> > 3. showing people Postgres cares about performance\n> > \n> > I would like to achieve 2 & 3 without harming #1. My experience is if I\n> > am reading a long document, and I get to a section where I start to\n> > wonder, \"Why should I care about this?\", I start to skim the rest of\n> > the document.\n> \n> I agree keeping things reasonably short is important. But I don't think you're\n> evenly applying it as a goal.\n> \n> Just skimming the notes from the end, I see\n> - an 8 entries long pg_stat_statements section\n\nWhat item did you want to remove? Those are all user-visible changes.\n\n> - multiple entries about \"Create custom wait events for ...\"\n\nWell, those are all in different sections, so how can they be merged,\nunless I create a \"wait event section\", I guess.\n\n> - three entries about adding --all to {reindexdb,vacuumdb,clusterdb}.\n\nThe problem with merging these is that the \"Specifically, --all can now\nbe used with\" is different for all three of them.\n\n> - an entry about adding long options to pg_archivecleanup\n\nWell, that is a user-visible change. Should it not be listed?\n\n> - two entries about grantable maintenance rights, once via pg_maintain, once\n> per-table\n\nWell, one is a GRANT and another is a role, so merging them seemed like\nit would be too confusing.\n\n> - separate entries about pg_stat_reset_slru(), pg_stat_reset_shared(\"slru\"),\n\nThey are different functions with different detail text.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:33:03 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Mon, May 20, 2024 at 11:48:09AM -0700, Jeff Davis wrote:\n> On Sat, 2024-05-18 at 17:51 -0400, Bruce Momjian wrote:\n> > Okay, I went with the attached applied patch.  
Adjustments?\n> \n> I think it should have more emphasis on the actual new feature: a\n> platform-independent builtin collation provider and the C.UTF-8 locale.\n> \n> The user can look at the documentation for comparison with libc.\n\nOkay, changed with the attached applied patch.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 22 May 2024 18:39:41 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 21, 2024 at 11:20:02AM +0900, Amit Langote wrote:\n> Thanks Bruce for working on this as always.\n> \n> Failed to notice when I read the notes before:\n> \n> <listitem>\n> <para>\n> Add SQL/JSON constructor functions JSON(), JSON_SCALAR(), and\n> JSON_SERIALIZE() (Amit Langote)\n> </para>\n> </listitem>\n> \n> Should be:\n> \n> <listitem>\n> <para>\n> Add SQL/JSON constructor functions JSON(), JSON_SCALAR(), and\n> JSON_SERIALIZE() (Nikita Glukhov, Teodor Sigaev, Oleg Bartunov,\n> Alexander Korotkov, Andrew Dunstan, Amit Langote)\n> </para>\n> </listitem>\n\nThanks, applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:46:53 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 22, 2024 at 11:29:06AM +0900, Masahiko Sawada wrote:\n> I found a typo:\n> \n> s/pg_statstatement/pg_stat_statement/\n> \n> I've attached a patch to fix it.\n\nAgreed, applied, thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:48:31 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 22, 2024 at 09:25:41PM +0900, torikoshia wrote:\n> Thanks for working on this as always.\n> \n> <para>\n> Allow pg_stat_reset_shared(\"slru\") to clear SLRU statistics (Atsushi\n> Torikoshi)\n> </para>\n> \n> Considering someone may copy and paste this, 'slru' is better than \"slru\",\n> isn't it?\n> I also found an older release note[1] used single quotes for this like:\n> \n> Add pg_stat_reset_shared('bgwriter') to reset the cluster-wide shared\n> statistics for the background writer (Greg Smith)\n\nAgreed, patch applied, thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 18:50:53 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 23 May 2024 at 10:04, Bruce Momjian <[email protected]> wrote:\n> You might have seen in this thread, I do record commits that speed up\n> workloads that are user-visible, or specifically make new workloads\n> possible. 
I assume that covers the items above, though I have to\n> determine this from the commit message.\n\nIt sometimes is hard to write something specific in the commit message\nabout the actual performance increase.\n\nFor example, if a commit removes an O(N log2 N) algorithm and replaces\nit with an O(1), you can't say there's an X% increase in performance\nas the performance increase depends on the value of N.\n\nJelte did call me out for not mentioning enough detail about the\nperformance in c4ab7da60, but if I claimed any % of an increase, it\nwould have been specific to some workload.\n\nWhat is the best way to communicate this stuff so it's easily\nidentifiable when you parse the commit messages?\n\nDavid\n\n\n", "msg_date": "Thu, 23 May 2024 13:34:10 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 23, 2024 at 01:34:10PM +1200, David Rowley wrote:\n> On Thu, 23 May 2024 at 10:04, Bruce Momjian <[email protected]> wrote:\n> > You might have seen in this thread, I do record commits that speed up\n> > workloads that are user-visible, or specifically make new workloads\n> > possible. I assume that covers the items above, though I have to\n> > determine this from the commit message.\n> \n> It sometimes is hard to write something specific in the commit message\n> about the actual performance increase.\n> \n> For example, if a commit removes an O(N log2 N) algorithm and replaces\n> it with an O(1), you can't say there's an X% increase in performance\n> as the performance increase depends on the value of N.\n> \n> Jelte did call me out for not mentioning enough detail about the\n> performance in c4ab7da60, but if I claimed any % of an increase, it\n> would have been specific to some workload.\n> \n> What is the best way to communicate this stuff so it's easily\n> identifiable when you parse the commit messages?\n\nThis is why I think we need an \"Internal Performance\" section where we\ncan be clear about simple scaling improvements that might have no\nuser-visible explanation. I would suggest putting it after our \"Source\ncode\" section.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 22 May 2024 22:01:51 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 23 May 2024 at 14:01, Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, May 23, 2024 at 01:34:10PM +1200, David Rowley wrote:\n> > What is the best way to communicate this stuff so it's easily\n> > identifiable when you parse the commit messages?\n>\n> This is why I think we need an \"Internal Performance\" section where we\n> can be clear about simple scaling improvements that might have no\n> user-visible explanation. I would suggest putting it after our \"Source\n> code\" section.\n\nhmm, but that does not really answer my question. There's still a\ncommunication barrier if you're parsing the commit messages and\ncommitters don't clearly state some performance improvement numbers\nfor the reason I stated.\n\nI also don't agree these should be left to \"Source code\" section. I\nfeel that section is best suited for extension authors who might care\nabout some internal API change. I'm talking of stuff that makes some\nuser queries possibly N times (!) faster. 
Surely \"Source Code\" isn't\nthe place to talk about that?\n\nDavid\n\n\n", "msg_date": "Thu, 23 May 2024 14:27:07 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hello,\n\nRegarding this item\n\n: Allow the SLRU cache sizes to be configured (Andrey Borodin, Dilip Kumar)\n: \n: The new server variables are commit_timestamp_buffers,\n: multixact_member_buffers, multixact_offset_buffers, notify_buffers,\n: serializable_buffers, subtransaction_buffers, and transaction_buffers.\n\nI hereby request to be listed as third author of this feature.\n\nAlso, I'd like to suggest to make it more verbose, as details might be\nuseful to users. Mention that scalability is improved, because\npreviously we've suggested to recompile with larger #defines, but to be\ncautious because values too high degrade performance. Also mention the\npoint that some of these grow with shared_buffers is user-visible enough\nthat it warrants an explicit mention. How about like this:\n\n: Allow the SLRU cache sizes to be configured and improve performance of\n: larger caches\n: (Andrey Borodin, Dilip Kumar, Álvaro Herrera)\n: \n: The new server variables are commit_timestamp_buffers,\n: multixact_member_buffers, multixact_offset_buffers, notify_buffers,\n: serializable_buffers, subtransaction_buffers, and transaction_buffers.\n: commit_timestamp_buffers, transaction_buffers and\n: subtransaction_buffers scale up automatically with shared_buffers.\n\n\nThese three items\n\n: Allow pg_stat_reset_shared() to reset all shared statistics (Atsushi Torikoshi)\n: \n: This is done by passing NULL.\n: \n: Allow pg_stat_reset_shared('slru') to clear SLRU statistics (Atsushi Torikoshi)\n: \n: Now pg_stat_reset_shared(NULL) also resets SLRU statistics.\n: \n: Allow pg_stat_reset_slru() to reset all SLRU statistics (Bharath Rupireddy)\n: \n: The command pg_stat_reset_slru(NULL) already did this.\n\nseem a bit repetitive. (I think the first one is also wrong, because it\nsays you have to pass NULL, but in reality you can also not give an\nargument and it works.) Can we make them a single item? Maybe\nsomething like\n\n: Improve reset routines for shared statistics (Atsushi Torikoshi, Bharath Rupireddy)\n:\n: Resetting all shared statistics can now be done with\n: pg_stat_reset_shared() or pg_stat_reset_shared(NULL), while SLRU\n: statistics can now be reset with pg_stat_reset_shared('slru'),\n: pg_stat_reset_slru() and pg_stat_reset_slru(NULL).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Find a bug in a program, and fix it, and the program will work today.\nShow the program how to find and fix a bug, and the program\nwill work forever\" (Oliver Silfridge)\n\n\n", "msg_date": "Thu, 23 May 2024 13:22:51 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, 2024-05-22 at 18:39 -0400, Bruce Momjian wrote:\n> On Mon, May 20, 2024 at 11:48:09AM -0700, Jeff Davis wrote:\n> > On Sat, 2024-05-18 at 17:51 -0400, Bruce Momjian wrote:\n> > > Okay, I went with the attached applied patch.  
Adjustments?\n> > \n> > I think it should have more emphasis on the actual new feature: a\n> > platform-independent builtin collation provider and the C.UTF-8\n> > locale.\n> > \n> > The user can look at the documentation for comparison with libc.\n> \n> Okay, changed with the attached applied patch.\n\nThank you, looks good to me.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 23 May 2024 08:30:25 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "-\n\n Rename SLRU columns in system view pg_stat_slru (Alvaro Herrera)\n\n The column names accepted by pg_stat_slru_rest() are also changed.\n\nIs pg_stat_slru_rest() correct ?", "msg_date": "Thu, 23 May 2024 16:54:28 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, May 22, 2024 at 6:50 PM Bruce Momjian <[email protected]> wrote:\n> Agreed, patch applied, thanks.\n\nThe item for my commit 5bf748b8 currently reads:\n\n\"Allow btree indexes to more efficiently find array matches\"\n\nI think that this isn't likely to mean much to most users. It seems\nlike it'd be a lot clearer if the wording was more in line with the\nbeta1 announcement, which talks about the work as an enhancement to\nindex scans that use an IN ( ) condition. Specifically referencing\nIN() (as opposed to something about arrays or array conditions) is\nlikely to make the item much more understandable.\n\nReferencing array matches makes me think of a GIN index on an array\ncolumn. While ScalarArrayOps do use an array under the cover, that's\nmostly an implementation detail. At least it is to users that\nexclusively use IN(), likely the majority (that's the SQL standard\nsyntax).\n\nFor context, the Postgres 9.2 release notes described the feature that\nmy work directly built on as follows:\n\n\"Allow indexed_col op ANY(ARRAY[...]) conditions to be used in plain\nindex scans and index-only scans\"\n\nThis was a very accurate description of this earlier work. Similar\nwording could be used now, but that doesn't seem great to me either.\nSimply because this wording also doesn't reference IN() conditions in\nindex quals.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 23 May 2024 20:19:15 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 23, 2024 at 02:27:07PM +1200, David Rowley wrote:\n> On Thu, 23 May 2024 at 14:01, Bruce Momjian <[email protected]> wrote:\n> >\n> > On Thu, May 23, 2024 at 01:34:10PM +1200, David Rowley wrote:\n> > > What is the best way to communicate this stuff so it's easily\n> > > identifiable when you parse the commit messages?\n> >\n> > This is why I think we need an \"Internal Performance\" section where we\n> > can be clear about simple scaling improvements that might have no\n> > user-visible explanation. I would suggest putting it after our \"Source\n> > code\" section.\n> \n> hmm, but that does not really answer my question. 
There's still a\n> communication barrier if you're parsing the commit messages are\n> committers don't clearly state some performance improvement numbers\n> for the reason I stated.\n\nFor a case where O(N^2) become O(N), we might not even know the\nperformance change since it is a micro-optimization. That is why I\nsuggested we call it \"Internal Performance\".\n\n> I also don't agree these should be left to \"Source code\" section. I\n> feel that section is best suited for extension authors who might care\n> about some internal API change. I'm talking of stuff that makes some\n> user queries possibly N times (!) faster. Surely \"Source Code\" isn't\n> the place to talk about that?\n\nI said a new section \"after our 'Source code' section,\" not in the\nsource code section.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 23 May 2024 23:04:43 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Thu, May 23, 2024 at 02:27:07PM +1200, David Rowley wrote:\n>> I also don't agree these should be left to \"Source code\" section. I\n>> feel that section is best suited for extension authors who might care\n>> about some internal API change. I'm talking of stuff that makes some\n>> user queries possibly N times (!) faster. Surely \"Source Code\" isn't\n>> the place to talk about that?\n\n> I said a new section \"after our 'Source code' section,\" not in the\n> source code section.\n\nSurely, if we make this a separate section, it would come before\n'Source code'?\n\nI am not sure Bruce that you realize that your disregard for\nperformance improvements is shared by nobody. Arguably,\nperformance is 90% of what we do these days, and it's also\n90% of what users care about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 May 2024 23:11:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 23, 2024 at 11:11:10PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Thu, May 23, 2024 at 02:27:07PM +1200, David Rowley wrote:\n> >> I also don't agree these should be left to \"Source code\" section. I\n> >> feel that section is best suited for extension authors who might care\n> >> about some internal API change. I'm talking of stuff that makes some\n> >> user queries possibly N times (!) faster. Surely \"Source Code\" isn't\n> >> the place to talk about that?\n> \n> > I said a new section \"after our 'Source code' section,\" not in the\n> > source code section.\n> \n> Surely, if we make this a separate section, it would come before\n> 'Source code'?\n> \n> I am not sure Bruce that you realize that your disregard for\n> performance improvements is shared by nobody. Arguably,\n> performance is 90% of what we do these days, and it's also\n> 90% of what users care about.\n\nPlease stop saying I don't document performance. I have already\nexplained enough which performance items I choose. 
Please address my\ncriteria or suggest new criteria.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 23 May 2024 23:27:04 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 23, 2024 at 11:04 PM Bruce Momjian <[email protected]> wrote:\n> For a case where O(N^2) become O(N), we might not even know the\n> performance change since it is a micro-optimization. That is why I\n> suggested we call it \"Internal Performance\".\n\nI don't get this at all. If we can't tell the difference between\nO(N^2) and O(N), then N was small enough that there wasn't any real\nneed to optimize in the first place. I think we should be assuming\nthat if somebody took the trouble to write a patch, the difference did\nmatter. Hence the change would be user-visible, and should be\ndocumented.\n\n\"Internal Performance\" doesn't make a lot of sense to me as a section\nheading. What does \"Internal\" mean here as opposed to \"General\"? I\nsuspect you mean to imply that the user won't be able to tell the\ndifference, but I doubt that very much. We make performance\nimprovements because they are user-visible. If a performance\nimprovement is so miniscule that nobody would ever notice the\ndifference, well then I don't think it needs to be release-noted at\nall, and we might have a few changes like that where people were\nmostly aiming for code cleanliness. But in general, what people do is\nlook for something that's slow (for them) and try to make it faster.\nSo the presumption should be that a performance feature has a\nmeaningful impact on users, and then in rare cases we may decide\notherwise.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 May 2024 09:54:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn 2024-05-23 23:27:04 -0400, Bruce Momjian wrote:\n> On Thu, May 23, 2024 at 11:11:10PM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > I am not sure Bruce that you realize that your disregard for\n> > performance improvements is shared by nobody. Arguably,\n> > performance is 90% of what we do these days, and it's also\n> > 90% of what users care about.\n>\n> Please stop saying I don't document performance. I have already\n> explained enough which performance items I choose. Please address my\n> criteria or suggest new criteria.\n\nBruce, just about everyone seems to disagree with your current approach. And\nnot just this year, this has been a discussion in most if not all release note\nthreads of the last few years.\n\nPeople, including me, *have* addressed your criteria, but you just waved those\nconcerns away. It's hard to continue discussing criteria when it doesn't at\nall feel like a conversation.\n\nIn the end, these are patches to the source code, I don't think you can just\nwave away widespread disagreement with your changes. 
That's not how we do\npostgres development.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 May 2024 10:50:28 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn 2024-05-22 18:33:03 -0400, Bruce Momjian wrote:\n> On Tue, May 21, 2024 at 09:40:28AM -0700, Andres Freund wrote:\n> > On 2024-05-18 11:13:54 -0400, Bruce Momjian wrote:\n> > I agree keeping things reasonably short is important. But I don't think you're\n> > evenly applying it as a goal.\n> >\n> > Just skimming the notes from the end, I see\n> > - an 8 entries long pg_stat_statements section\n>\n> What item did you want to remove? Those are all user-visible changes.\n\nMy point here was not that we necessarily need to remove those, but that their\nimpact to users is smaller than many of the performance impacts you disregard.\n\n\n> > - multiple entries about \"Create custom wait events for ...\"\n>\n> Well, those are all in different sections, so how can they be merged,\n> unless I create a \"wait event section\", I guess.\n\nThey're not, all are in \"Additional Modules\". Instead of\n\n- Create custom wait events for postgres_fdw (Masahiro Ikeda)\n- Create custom wait events for dblink (Masahiro Ikeda)\n- Allow extensions to define custom wait events (Masahiro Ikeda)\n\nI'd make it:\n\n- Allow extensions to define custom wait events and create custom wait events\n for postgres_fdw, dblink (Masahiro Ikeda)\n\n\n> > - three entries about adding --all to {reindexdb,vacuumdb,clusterdb}.\n>\n> The problem with merging these is that the \"Specifically, --all can now\n> be used with\" is different for all three of them.\n\nYou said you were worried about the length of the release notes, because it\ndiscourages users from actually reading the release notes, due to getting\nbored. Having three instance of almost the same entry, with just minor changes\nbetween them, seems to precisely endanger boring readers.\n\nI'd probably just go for\n\n- Add --all option to clusterdb, reindexdb, vacuumdb to process objects in all\n databases matching a pattern (Nathan Bossart)\n\nor such. The precise details of how the option works for the different\ncommands doesn't need to be stated in the release notes, that's more of a\nreference documentation thing. But if you want to include it, we can do\nsomething like\n\n Specifically, --all can now be used with --table (all commands), --schema\n (reindexdb, vacuumdb), and --exclude-schema (reindexdb, vacuumdb).\n\n\n> > - an entry about adding long options to pg_archivecleanup\n>\n> Well, that is a user-visible change. Should it not be listed?\n\nIf you are concerned about the length of the release notes and as a\nconsequence not including more impactful performance changes, then no, it\nshouldn't. 
It doesn't break anyones current scripts, it doesn't enable\nanything new.\n\n\n> > - two entries about grantable maintenance rights, once via pg_maintain, once\n> > per-table\n>\n> Well, one is a GRANT and another is a role, so merging them seemed like\n> it would be too confusing.\n\nI don't think it has to be.\n\nMaybe something roughly like\n\n- Allow granting the right to perform maintenance operations (Nathan Bossart)\n\n The permission can be granted on a per-table basis using the MAINTAIN\n privilege and on a system wide basis via the pg_maintain role.\n\n Operations that can be controlled are VACUUM, ANALYZE, REINDEX, REFRESH\n MATERIALIZED VIEW, CLUSTER, and LOCK TABLE.\n\n\nI'm again mostly reacting to your concern that the release notes are getting\ntoo boring to read. Repeated content, like in the current formulation, imo\ndoes endanger that. Current it is:\n\n- Add per-table GRANT permission MAINTAIN to control maintenance operations (Nathan Bossart)\n\n The operations are VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE.\n\n- Add user-grantable role pg_maintain to control maintenance operations (Nathan Bossart)\n\n The operations are VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE.\n\n\n\n> > - separate entries about pg_stat_reset_slru(), pg_stat_reset_shared(\"slru\"),\n>\n> They are different functions with different detail text.\n\nSo what? You can change their text. Making it three entries makes it harder\nfor a reader that doesn't care about resetting stats to skip over the details.\n\nMake it something like\n\n- Improve control over resetting statistics (Atsushi Torikoshi, Bharath\n Rupireddy)\n\n pg_stat_reset_shared() can now reset all shared statistics, by passing NULL;\n pg_stat_reset_shared(NULL) also resets SLRU statistics;\n pg_stat_reset_shared(\"slru\") resets SLRU statistics, which was already\n possible using pg_stat_reset_slru(NULL).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 May 2024 11:23:29 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 24, 2024 at 1:50 PM Andres Freund <[email protected]> wrote:\n> Bruce, just about everyone seems to disagree with your current approach. And\n> not just this year, this has been a discussion in most if not all release note\n> threads of the last few years.\n\n+1.\n\n> People, including me, *have* addressed your criteria, but you just waved those\n> concerns away. It's hard to continue discussing criteria when it doesn't at\n> all feel like a conversation.\n\nAt one point on this thread, Bruce said \"I am particularly critical if\nI start to wonder, \"Why does the author _think_ I should care about\nthis?\" because it feels like the author is writing for him/herself and\nnot the audience.\"\n\nWhenever this sort of thing has come up in the past, and I pushed\nback, Bruce seemed to respond along these lines: he seemed to suggest\nthat there was some kind of conflict of interests involved. This isn't\ncompletely unreasonable, of course -- my motivations aren't wholly\nirrelevant. But for the most part they're *not* very relevant, and\nwouldn't be even if Bruce's worst suspicions were actually true. In\nprinciple it shouldn't matter that I'm biased, if I happen to be\ncorrect in some relevant sense.\n\nEverybody has some kind of bias. 
Even if my bias in these matters was\na significant factor (which I tend to doubt), I don't think that it's\nfair to suggest that it's self-serving or careerist. My bias was\nprobably present before I even began work on the feature in question.\nI tend to work on things because I believe that they're important --\nit's not the other way around (at least not to a significant degree).\n\n> In the end, these are patches to the source code, I don't think you can just\n> wave away widespread disagreement with your changes. That's not how we do\n> postgres development.\n\nIn lots of cases (a large minority of cases) the problem isn't even\nreally with the emphasis of one type of item over another/the\ninclusion or non-inclusion of some individual item. It's actually a\nproblem with the information being presented in the most useful way.\n\nOften I've suggested what I believe to be a better wording on the\nmerits (usually less obscure and more accessible language), only to be\nmet with the same sort of resistance from Bruce. If I've put a huge\namount of work into the item (as is usually the case), then I think\nthat I am at least entitled to a fair hearing.\n\nI don't expect Bruce to meet me halfway, or even for him to meet me a\nquarter of the way -- somebody has to be empowered to say no here\n(even to very senior community members). I just don't think that he\nhas seriously considered my feedback in this area over the years. Not\nalways, not consistently, but often enough for it to seem like a real\nproblem.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 24 May 2024 14:26:48 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 24, 2024 at 10:50:28AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-05-23 23:27:04 -0400, Bruce Momjian wrote:\n> > On Thu, May 23, 2024 at 11:11:10PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <[email protected]> writes:\n> > > I am not sure Bruce that you realize that your disregard for\n> > > performance improvements is shared by nobody. Arguably,\n> > > performance is 90% of what we do these days, and it's also\n> > > 90% of what users care about.\n> >\n> > Please stop saying I don't document performance. I have already\n> > explained enough which performance items I choose. Please address my\n> > criteria or suggest new criteria.\n> \n> Bruce, just about everyone seems to disagree with your current approach. And\n> not just this year, this has been a discussion in most if not all release note\n> threads of the last few years.\n> \n> People, including me, *have* addressed your criteria, but you just waved those\n> concerns away. It's hard to continue discussing criteria when it doesn't at\n> all feel like a conversation.\n> \n> In the end, these are patches to the source code, I don't think you can just\n> wave away widespread disagreement with your changes. That's not how we do\n> postgres development.\n\nWell, let's start with a new section for PG 17 that lists these. Is it\n20 items, 50, or 150? I have no idea, but without the user-visible\nfilter, I am unable to determine what not-included performance features\nare worthy of the release notes.\n\nCan someone do that? There is no reason other committers can't change\nthe release notes. 
Yes, I realize we are looking for a consistent\nvoice, but the new section can probably have its own style, and I can\nmake adjustments if desired.\n\nAlso, I think this has gone unaddressed so long because if we skip a\nuser-visible change, users complain, but I don't remember anyone\ncomplaining about skipped performance changes.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 25 May 2024 22:10:36 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, May 24, 2024 at 11:23:29AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-05-22 18:33:03 -0400, Bruce Momjian wrote:\n> > On Tue, May 21, 2024 at 09:40:28AM -0700, Andres Freund wrote:\n> > > On 2024-05-18 11:13:54 -0400, Bruce Momjian wrote:\n> > > I agree keeping things reasonably short is important. But I don't think you're\n> > > evenly applying it as a goal.\n> > >\n> > > Just skimming the notes from the end, I see\n> > > - an 8 entries long pg_stat_statements section\n> >\n> > What item did you want to remove? Those are all user-visible changes.\n> \n> My point here was not that we necessarily need to remove those, but that their\n> impact to users is smaller than many of the performance impacts you disregard.\n\nI liked all your detailed suggestions so applied the attached patch. \n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Sat, 25 May 2024 23:41:48 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 23, 2024 at 01:22:51PM +0200, Álvaro Herrera wrote:\n> Hello,\n> \n> Regarding this item\n> \n> : Allow the SLRU cache sizes to be configured (Andrey Borodin, Dilip Kumar)\n> : \n> : The new server variables are commit_timestamp_buffers,\n> : multixact_member_buffers, multixact_offset_buffers, notify_buffers,\n> : serializable_buffers, subtransaction_buffers, and transaction_buffers.\n> \n> I hereby request to be listed as third author of this feature.\n> \n> Also, I'd like to suggest to make it more verbose, as details might be\n> useful to users. Mention that scalability is improved, because\n> previously we've suggested to recompile with larger #defines, but to be\n> cautious because values too high degrade performance. Also mention the\n> point that some of these grow with shared_buffers is user-visible enough\n> that it warrants an explicit mention. 
How about like this:\n> \n> : Allow the SLRU cache sizes to be configured and improve performance of\n> : larger caches\n> : (Andrey Borodin, Dilip Kumar, Álvaro Herrera)\n> : \n> : The new server variables are commit_timestamp_buffers,\n> : multixact_member_buffers, multixact_offset_buffers, notify_buffers,\n> : serializable_buffers, subtransaction_buffers, and transaction_buffers.\n> : commit_timestamp_buffers, transaction_buffers and\n> : subtransaction_buffers scale up automatically with shared_buffers.\n\nYes, I like that, patch applied.\n\n> These three items\n> \n> : Allow pg_stat_reset_shared() to reset all shared statistics (Atsushi Torikoshi)\n> : \n> : This is done by passing NULL.\n> : \n> : Allow pg_stat_reset_shared('slru') to clear SLRU statistics (Atsushi Torikoshi)\n> : \n> : Now pg_stat_reset_shared(NULL) also resets SLRU statistics.\n> : \n> : Allow pg_stat_reset_slru() to reset all SLRU statistics (Bharath Rupireddy)\n> : \n> : The command pg_stat_reset_slru(NULL) already did this.\n> \n> seem a bit repetitive. (I think the first one is also wrong, because it\n> says you have to pass NULL, but in reality you can also not give an\n> argument and it works.) Can we make them a single item? Maybe\n> something like\n> \n> : Improve reset routines for shared statistics (Atsushi Torikoshi, Bharath Rupireddy)\n> :\n> : Resetting all shared statistics can now be done with\n> : pg_stat_reset_shared() or pg_stat_reset_shared(NULL), while SLRU\n> : statistics can now be reset with pg_stat_reset_shared('slru'),\n> : pg_stat_reset_slru() and pg_stat_reset_slru(NULL).\n\nAndres already suggested improvement for this, and I posted the applied\npatch. Can you see if that is good or can be improved? Thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 25 May 2024 23:49:03 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 23, 2024 at 04:54:28PM -0300, Marcos Pegoraro wrote:\n> • Rename SLRU columns in system view pg_stat_slru (Alvaro Herrera)\n> \n> The column names accepted by pg_stat_slru_rest() are also changed.\n> \n> Is pg_stat_slru_rest() correct ? \n\nOops, typo, fixed, thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 25 May 2024 23:50:34 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 23, 2024 at 08:19:15PM -0400, Peter Geoghegan wrote:\n> On Wed, May 22, 2024 at 6:50 PM Bruce Momjian <[email protected]> wrote:\n> > Agreed, patch applied, thanks.\n> \n> The item for my commit 5bf748b8 currently reads:\n> \n> \"Allow btree indexes to more efficiently find array matches\"\n> \n> I think that this isn't likely to mean much to most users. It seems\n> like it'd be a lot clearer if the wording was more in line with the\n> beta1 announcement, which talks about the work as an enhancement to\n> index scans that use an IN ( ) condition. Specifically referencing\n> IN() (as opposed to something about arrays or array conditions) is\n> likely to make the item much more understandable.\n> \n> Referencing array matches makes me think of a GIN index on an array\n> column. 
While ScalarArrayOps do use an array under the cover, that's\n> mostly an implementation detail. At least it is to users that\n> exclusively use IN(), likely the majority (that's the SQL standard\n> syntax).\n> \n> For context, the Postgres 9.2 release notes described the feature that\n> my work directly built on as follows:\n> \n> \"Allow indexed_col op ANY(ARRAY[...]) conditions to be used in plain\n> index scans and index-only scans\"\n> \n> This was a very accurate description of this earlier work. Similar\n> wording could be used now, but that doesn't seem great to me either.\n> Simply because this wording also doesn't reference IN() conditions in\n> index quals.\n\nAgreed. I changed it to:\n\n\tAllow btree indexes to more efficiently find a set of values, such as\n\tthose supplied by IN subqueries\n\nIs that good?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 25 May 2024 23:57:01 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-May-25, Bruce Momjian wrote:\n\n> On Thu, May 23, 2024 at 01:22:51PM +0200, Álvaro Herrera wrote:\n\n> > Can we make them a single item? Maybe something like\n> > \n> > : Improve reset routines for shared statistics (Atsushi Torikoshi, Bharath Rupireddy)\n> > :\n> > : Resetting all shared statistics can now be done with\n> > : pg_stat_reset_shared() or pg_stat_reset_shared(NULL), while SLRU\n> > : statistics can now be reset with pg_stat_reset_shared('slru'),\n> > : pg_stat_reset_slru() and pg_stat_reset_slru(NULL).\n> \n> Andres already suggested improvement for this, and I posted the applied\n> patch. Can you see if that is good or can be improved? Thanks.\n\nYeah, looks good, thanks.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No es bueno caminar con un hombre muerto\"\n\n\n", "msg_date": "Sun, 26 May 2024 10:10:00 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sun, May 26, 2024 at 10:10:00AM +0200, Álvaro Herrera wrote:\n> On 2024-May-25, Bruce Momjian wrote:\n> \n> > On Thu, May 23, 2024 at 01:22:51PM +0200, Álvaro Herrera wrote:\n> \n> > > Can we make them a single item? Maybe something like\n> > > \n> > > : Improve reset routines for shared statistics (Atsushi Torikoshi, Bharath Rupireddy)\n> > > :\n> > > : Resetting all shared statistics can now be done with\n> > > : pg_stat_reset_shared() or pg_stat_reset_shared(NULL), while SLRU\n> > > : statistics can now be reset with pg_stat_reset_shared('slru'),\n> > > : pg_stat_reset_slru() and pg_stat_reset_slru(NULL).\n> > \n> > Andres already suggested improvement for this, and I posted the applied\n> > patch. Can you see if that is good or can be improved? Thanks.\n> \n> Yeah, looks good, thanks.\n\nWow, that's great. My head started to spin trying to make sense of how\nthose three entries connected. 
Glad we were able to condense them, and\nthe new result is easier to read.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 26 May 2024 08:42:25 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sun, 26 May 2024 at 15:57, Bruce Momjian <[email protected]> wrote:\n> Agreed. I changed it to:\n>\n> Allow btree indexes to more efficiently find a set of values, such as\n> those supplied by IN subqueries\n>\n> Is that good?\n\nI think this needs further adjustment. An \"IN subquery\" is an IN\nclause which contains a subquery. As far as I understand it,\n5bf748b86 does nothing to improve those. It's there to improve IN with\na set of values such as IN(1,2,3).\n\nMaybe \"IN subqueries\" can be replaced with \"an SQL IN clause\".\n\nDavid\n\n\n", "msg_date": "Tue, 28 May 2024 14:44:28 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, May 28, 2024 at 02:44:28PM +1200, David Rowley wrote:\n> On Sun, 26 May 2024 at 15:57, Bruce Momjian <[email protected]> wrote:\n> > Agreed. I changed it to:\n> >\n> > Allow btree indexes to more efficiently find a set of values, such as\n> > those supplied by IN subqueries\n> >\n> > Is that good?\n> \n> I think this needs further adjustment. An \"IN subquery\" is an IN\n> clause which contains a subquery. As far as I understand it,\n> 5bf748b86 does nothing to improve those. It's there to improve IN with\n> a set of values such as IN(1,2,3).\n> \n> Maybe \"IN subqueries\" can be replaced with \"an SQL IN clause\".\n\nOkay, I went with:\n\n\tAllow btree indexes to more efficiently find a set of values,\n\tsuch as those supplied by IN clauses using constants (Peter Geoghegan,\n\tMatthias van de Meent)\n\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 28 May 2024 00:20:21 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 9 May 2024 at 05:03, Bruce Momjian <[email protected]> wrote:\n>\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-17.html\n\nI noticed a couple more things. This item:\n\n Add functions to convert integers to hex and binary strings\n\nshould read:\n\n Add functions to convert integers to binary and octal strings\n\n\nThe \"Improve psql tab completion\" item should include this commit:\n\nAuthor: Michael Paquier <[email protected]>\n2024-05-01 [2800fbb2b] Add tab completion for EXPLAIN (MEMORY|SERIALIZE)\n\nand credit Jian He.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 5 Jun 2024 23:46:17 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Jun 5, 2024 at 11:46:17PM +0100, Dean Rasheed wrote:\n> On Thu, 9 May 2024 at 05:03, Bruce Momjian <[email protected]> wrote:\n> >\n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-17.html\n> \n> I noticed a couple more things. 
This item:\n> \n> Add functions to convert integers to hex and binary strings\n> \n> should read:\n> \n> Add functions to convert integers to binary and octal strings\n> \n> \n> The \"Improve psql tab completion\" item should include this commit:\n> \n> Author: Michael Paquier <[email protected]>\n> 2024-05-01 [2800fbb2b] Add tab completion for EXPLAIN (MEMORY|SERIALIZE)\n> \n> and credit Jian He.\n\nAgreed, attached patch applied.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 5 Jun 2024 20:53:09 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nI noticed that PG17's release note for commit cafe10565 is \"Allow psql\nconnections to be canceled with control-C (Tristan Partin)\", but this\nsummary seems wrong to me.\n\nWe already had ^C connection (query) cancellation for quite some time\nbefore this patch. What's new with that patch, is that we now also can\ncancel connection attempts with ^C while we're still connecting (i.e.,\nwe haven't yet authenticated and are trying to move the connection\nstate forward).\nI think a better wording would be \"Allow psql connection attempts to\nbe canceled with control-C (Tristan Partin)\", or \"Allow psql\nconnections to be canceled with control-C while psql is still\nconnecting (Tristan Partin)\".\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 5 Jul 2024 19:51:38 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, Jul 5, 2024 at 07:51:38PM +0200, Matthias van de Meent wrote:\n> Hi,\n> \n> I noticed that PG17's release note for commit cafe10565 is \"Allow psql\n> connections to be canceled with control-C (Tristan Partin)\", but this\n> summary seems wrong to me.\n> \n> We already had ^C connection (query) cancellation for quite some time\n> before this patch. What's new with that patch, is that we now also can\n> cancel connection attempts with ^C while we're still connecting (i.e.,\n> we haven't yet authenticated and are trying to move the connection\n> state forward).\n> I think a better wording would be \"Allow psql connection attempts to\n> be canceled with control-C (Tristan Partin)\", or \"Allow psql\n> connections to be canceled with control-C while psql is still\n> connecting (Tristan Partin)\".\n\nI see your point. 
I committed a change to use this wording:\n\n Allow psql connection attempts to be canceled with control-C\n (Tristan Partin)\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 5 Jul 2024 16:53:02 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nIn the PG17 release notes, I noticed it is mentioned as\n\"pg_attribute.stxstattarget\" which seems incorrect.\nIn my opinion, it should be \"pg_statistic_ext.stxstattarget\" because the\n\"stxstattarget\" column is part of the \"pg_statistic_ext\" catalog.\n\nRegards,\nKisoon Kwon\nBitnine Global (www.bitnine.net)\n\n2024년 7월 6일 (토) 오전 5:53, Bruce Momjian <[email protected]>님이 작성:\n\n> On Fri, Jul 5, 2024 at 07:51:38PM +0200, Matthias van de Meent wrote:\n> > Hi,\n> >\n> > I noticed that PG17's release note for commit cafe10565 is \"Allow psql\n> > connections to be canceled with control-C (Tristan Partin)\", but this\n> > summary seems wrong to me.\n> >\n> > We already had ^C connection (query) cancellation for quite some time\n> > before this patch. What's new with that patch, is that we now also can\n> > cancel connection attempts with ^C while we're still connecting (i.e.,\n> > we haven't yet authenticated and are trying to move the connection\n> > state forward).\n> > I think a better wording would be \"Allow psql connection attempts to\n> > be canceled with control-C (Tristan Partin)\", or \"Allow psql\n> > connections to be canceled with control-C while psql is still\n> > connecting (Tristan Partin)\".\n>\n> I see your point. I committed a change to use this wording:\n>\n> Allow psql connection attempts to be canceled with control-C\n> (Tristan Partin)\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Only you can decide what is important to you.\n>\n>\n>\n\nHi,In the PG17 release notes, I noticed it is mentioned as \"pg_attribute.stxstattarget\" which seems incorrect.In my opinion, it should be \"pg_statistic_ext.stxstattarget\" because the \"stxstattarget\" column is part of the \"pg_statistic_ext\" catalog.Regards,Kisoon KwonBitnine Global (www.bitnine.net)2024년 7월 6일 (토) 오전 5:53, Bruce Momjian <[email protected]>님이 작성:On Fri, Jul  5, 2024 at 07:51:38PM +0200, Matthias van de Meent wrote:\n> Hi,\n> \n> I noticed that PG17's release note for commit cafe10565 is \"Allow psql\n> connections to be canceled with control-C (Tristan Partin)\", but this\n> summary seems wrong to me.\n> \n> We already had ^C connection (query) cancellation for quite some time\n> before this patch. What's new with that patch, is that we now also can\n> cancel connection attempts with ^C while we're still connecting (i.e.,\n> we haven't yet authenticated and are trying to move the connection\n> state forward).\n> I think a better wording would be \"Allow psql connection attempts to\n> be canceled with control-C (Tristan Partin)\", or \"Allow psql\n> connections to be canceled with control-C while psql is still\n> connecting (Tristan Partin)\".\n\nI see your point.  
I committed a change to use this wording:\n\n      Allow psql connection attempts to be canceled with control-C\n      (Tristan Partin)\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Only you can decide what is important to you.", "msg_date": "Wed, 17 Jul 2024 15:32:45 +0900", "msg_from": "Kisoon Kwon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hi,\n\nOn Thu, 9 May 2024 00:03:50 -0400\nBruce Momjian <[email protected]> wrote:\n\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n> \n> \thttps://momjian.us/pgsql_docs/release-17.html\n> \n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n> \n> \trelease-10: 189\n> \trelease-11: 170\n> \trelease-12: 180\n> \trelease-13: 178\n> \trelease-14: 220\n> \trelease-15: 184\n> \trelease-16: 206\n> \trelease-17: 188\n> \n> I welcome feedback. For some reason it was an easier job than usual.\n\nI found the following in the release notes:\n\n Change file boundary handling of two WAL file name functions \n (Kyotaro Horiguchi, Andres Freund, Bruce Momjian)\n\n The functions pg_walfile_name() and pg_walfile_name_offset() used to report the previous \n LSN segment number when the LSN was on a file segment boundary; it now returns the LSN segment. \n\nIt might be trivial, but, reading the associated commit message , I think it would be more explicit\nfor users to rewrite the last statement to\n\n\"it now returns the current LSN segment.\"\n\nRegards,\nYugo Nagata\n\n\n\n> \n> -- \n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Only you can decide what is important to you.\n> \n> \n\n\n-- \nYugo Nagata <[email protected]>\n\n\n", "msg_date": "Fri, 26 Jul 2024 13:22:24 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "> Add server variable huge_page_size to report the use of huge pages by\n\nThe new variable is huge_page_status; h_p_size is several years old.\n\nBTW, I was surprised that these were included:\n\n+2024-02-28 [363eb0599] Convert README to Markdown.\n+2024-01-25 [7014c9a4b] Doc: improve documentation for jsonpath behavior.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 8 Aug 2024 08:55:53 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Jul 17, 2024 at 03:32:45PM +0900, Kisoon Kwon wrote:\n> Hi,\n> \n> In the PG17 release notes, I noticed it is mentioned as\n> \"pg_attribute.stxstattarget\" which seems incorrect.\n> In my opinion, it should be \"pg_statistic_ext.stxstattarget\" because the\n> \"stxstattarget\" column is part of the \"pg_statistic_ext\" catalog.\n\nYou are right, fixed in the attached patch. 
Sorry for the delay.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 16 Aug 2024 12:53:33 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, Jul 26, 2024 at 01:22:24PM +0900, Yugo Nagata wrote:\n> I found the following in the release notes:\n> \n> Change file boundary handling of two WAL file name functions \n> (Kyotaro Horiguchi, Andres Freund, Bruce Momjian)\n> \n> The functions pg_walfile_name() and pg_walfile_name_offset() used to report the previous \n> LSN segment number when the LSN was on a file segment boundary; it now returns the LSN segment. \n> \n> It might be trivial, but, reading the associated commit message , I think it would be more explicit\n> for users to rewrite the last statement to\n> \n> \"it now returns the current LSN segment.\"\n\nAgreed, applied patch attached. Sorry for the delay.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Fri, 16 Aug 2024 13:02:19 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, Aug 8, 2024 at 08:55:53AM -0500, Justin Pryzby wrote:\n> > Add server variable huge_page_size to report the use of huge pages by\n> \n> The new variable is huge_page_status; h_p_size is several years old.\n\nFixed. I created this mistake when I was adding links to the SGML file.\n\n> BTW, I was surprised that these were included:\n> \n> +2024-02-28 [363eb0599] Convert README to Markdown.\n> +2024-01-25 [7014c9a4b] Doc: improve documentation for jsonpath behavior.\n\nI try to mention significant doc changes, and have done so in the past.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 16 Aug 2024 13:20:28 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "While freely acknowledging that I am biased because I wrote it, I am a bit\nsurprised to see the DSM registry left out of the release notes (commit\n8b2bcf3, docs are here [0]). This feature is intended to allow modules to\nallocate shared memory after startup, i.e., without requiring the module to\nbe loaded via shared_preload_libraries. IMHO that is worth mentioning.\n\n[0] https://www.postgresql.org/docs/devel/xfunc-c.html#XFUNC-SHARED-ADDIN-AFTER-STARTUP\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 3 Sep 2024 10:44:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "hi.\n\nAllow partitions to be merged using ALTER TABLE ... MERGE PARTITIONS\n(Dmitry Koval)\nAllow partitions to be split using ALTER TABLE ... SPLIT PARTITION\n(Dmitry Koval)\n\nalso these two items got reverted? 
see\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=3890d90c1508125729ed20038d90513694fc3a7b\n\n\n", "msg_date": "Wed, 4 Sep 2024 19:18:52 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Sep 4, 2024 at 07:18:52PM +0800, jian he wrote:\n> hi.\n> \n> Allow partitions to be merged using ALTER TABLE ... MERGE PARTITIONS\n> (Dmitry Koval)\n> Allow partitions to be split using ALTER TABLE ... SPLIT PARTITION\n> (Dmitry Koval)\n> \n> also these two items got reverted? see\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=3890d90c1508125729ed20038d90513694fc3a7b\n\nI don't see them in the PG 17 release notes at:\n\n\thttps://www.postgresql.org/docs/17/release-17.html\n\nI did just remove the tab complete comment for this though.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 5 Sep 2024 21:49:48 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, Sep 3, 2024 at 10:44:01AM -0500, Nathan Bossart wrote:\n> While freely acknowledging that I am biased because I wrote it, I am a bit\n> surprised to see the DSM registry left out of the release notes (commit\n> 8b2bcf3, docs are here [0]). This feature is intended to allow modules to\n> allocate shared memory after startup, i.e., without requiring the module to\n> be loaded via shared_preload_libraries. IMHO that is worth mentioning.\n> \n> [0] https://www.postgresql.org/docs/devel/xfunc-c.html#XFUNC-SHARED-ADDIN-AFTER-STARTUP\n\nThat seems more infrastructure/extension author stuff which isn't\nnormally mentioned in the release notes. I think such people really\nneed to look at all the commit messages.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 5 Sep 2024 21:51:25 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-Sep-05, Bruce Momjian wrote:\n\n> That seems more infrastructure/extension author stuff which isn't\n> normally mentioned in the release notes. I think such people really\n> need to look at all the commit messages.\n\nAre you saying all extension authors should be reading the complete git\nlog for every single major release? That's a strange position to take.\nIsn't this a good fit for \"E.1.3.10. Source Code\"?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n", "msg_date": "Sat, 7 Sep 2024 11:55:09 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, Sep 7, 2024 at 11:55:09AM +0200, Álvaro Herrera wrote:\n> On 2024-Sep-05, Bruce Momjian wrote:\n> \n> > That seems more infrastructure/extension author stuff which isn't\n> > normally mentioned in the release notes. I think such people really\n> > need to look at all the commit messages.\n> \n> Are you saying all extension authors should be reading the complete git\n> log for every single major release? 
That's a strange position to take.\n> Isn't this a good fit for \"E.1.3.10. Source Code\"?\n\nYes. There are so many changes at the source code level it is unwise to\ntry and get them into the main release notes. If someone wants to\ncreate an addendum, like was suggested for pure performance\nimprovements, that would make sense.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 9 Sep 2024 22:46:50 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, 10 Sept 2024 at 04:47, Bruce Momjian <[email protected]> wrote:\n> Yes. There are so many changes at the source code level it is unwise to\n> try and get them into the main release notes. If someone wants to\n> create an addendum, like was suggested for pure performance\n> improvements, that would make sense.\n\nI agree that the release notes cannot fit every change. But I also\ndon't think any extension author reads the complete git commit log\nevery release, so taking the stance that they should be seems\nunhelpful. And the \"Source Code\" section does exist so at some level\nyou seem to disagree with that too. So what is the way to decide that\nsomething makes the cut for the \"Source Code\" section?\n\nI think as an extension author there are usually three types of\nchanges that are relevant:\n1. New APIs/hooks that are meant for extension authors\n2. Stuff that causes my existing code to not compile anymore\n3. Stuff that changes behaviour of existing APIs code in a\nincompatible but silent way\n\nFor 1, I think adding them to the release notes makes total sense,\nespecially if the new APIs are documented not only in source code, but\nalso on the website. Nathan his change is of this type, so I agree\nwith him it should be in the release notes.\n\nFor 2, I'll be able to easily find the PG commit that caused the\ncompilation failure by grepping git history for the old API. So having\nthese changes in the release notes seems unnecessary.\n\nFor 3, it would be very useful if it would be in the release notes,\nbut I think in many cases it's hard to know what commits do this. So\nunless it's obviously going to break a bunch of extensions silently, I\nthink we don't have to add such changes to the release notes.\n\n\n", "msg_date": "Tue, 10 Sep 2024 08:28:42 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, Sep 05, 2024 at 09:51:25PM -0400, Bruce Momjian wrote:\n> On Tue, Sep 3, 2024 at 10:44:01AM -0500, Nathan Bossart wrote:\n> > While freely acknowledging that I am biased because I wrote it, I am a bit\n> > surprised to see the DSM registry left out of the release notes (commit\n> > 8b2bcf3, docs are here [0]). This feature is intended to allow modules to\n> > allocate shared memory after startup, i.e., without requiring the module to\n> > be loaded via shared_preload_libraries. IMHO that is worth mentioning.\n> > \n> > [0] https://www.postgresql.org/docs/devel/xfunc-c.html#XFUNC-SHARED-ADDIN-AFTER-STARTUP\n> \n> That seems more infrastructure/extension author stuff which isn't\n> normally mentioned in the release notes.\n\nIf I understand the feature correctly, it allows extensions to be just\nCREATEd without having them to be added to shared_preload_libraries,\ni.e. 
saving the organization an instance restart/downtime.\n\nThat seems important enough for end-users to know, even if they will\nneed to wait for extension authors to catch up to this (but I guess a\nlot will).\n\n\nMichael\n\n\n", "msg_date": "Tue, 10 Sep 2024 09:11:19 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-Sep-10, Jelte Fennema-Nio wrote:\n\n> I think as an extension author there are usually three types of\n> changes that are relevant:\n>\n> 1. New APIs/hooks that are meant for extension authors\n\n> For 1, I think adding them to the release notes makes total sense,\n> especially if the new APIs are documented not only in source code, but\n> also on the website. Nathan his change is of this type, so I agree\n> with him it should be in the release notes.\n\nI agree. The volume of such items should be pretty small.\n\n> 3. Stuff that changes behaviour of existing APIs code in a\n> incompatible but silent way\n\n> For 3, it would be very useful if it would be in the release notes,\n> but I think in many cases it's hard to know what commits do this. So\n> unless it's obviously going to break a bunch of extensions silently, I\n> think we don't have to add such changes to the release notes.\n\nWhile we cannot be 100% vigilant (and it doesn't seem likely for\nautomated tools to detect this), we try to avoid API changes that would\nstill compile but behave incompatibly. In many review discussions you\ncan see suggestions to change some function signature so that\nthird-party authors would be aware that they need to adapt their code to\nnew behavior, turning cases of (3) into (2). I agree that these don't\nneed release notes items.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/\n\n\n", "msg_date": "Tue, 10 Sep 2024 10:51:39 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Mon, Sep 9, 2024 at 11:29 PM Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Tue, 10 Sept 2024 at 04:47, Bruce Momjian <[email protected]> wrote:\n> > Yes. There are so many changes at the source code level it is unwise to\n> > try and get them into the main release notes. If someone wants to\n> > create an addendum, like was suggested for pure performance\n> > improvements, that would make sense.\n>\n> I agree that the release notes cannot fit every change. But I also\n> don't think any extension author reads the complete git commit log\n> every release, so taking the stance that they should be seems\n> unhelpful. And the \"Source Code\" section does exist so at some level\n> you seem to disagree with that too. So what is the way to decide that\n> something makes the cut for the \"Source Code\" section?\n>\n> I think as an extension author there are usually three types of\n> changes that are relevant:\n> 1. New APIs/hooks that are meant for extension authors\n> 2. Stuff that causes my existing code to not compile anymore\n> 3. 
Stuff that changes behaviour of existing APIs code in a\n> incompatible but silent way\n>\n> For 1, I think adding them to the release notes makes total sense,\n> especially if the new APIs are documented not only in source code, but\n> also on the website. Nathan his change is of this type, so I agree\n> with him it should be in the release notes.\n\n+1. I think that the increment JSON parser that is already mentioned\nin the release note would fall in this type too; it's not a feature\naimed just for extension authors, but it's kind of source and internal\nchanges IMO. Since the DSM registry feature is described in the doc, I\nthink it would make sense to have it in the release notes and probably\nhas a link to the \"Requesting Shared Memory After Startup\" section.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 09:52:42 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, Sep 10, 2024 at 08:28:42AM +0200, Jelte Fennema-Nio wrote:\n> I think as an extension author there are usually three types of\n> changes that are relevant:\n> 1. New APIs/hooks that are meant for extension authors\n> 2. Stuff that causes my existing code to not compile anymore\n> 3. Stuff that changes behaviour of existing APIs code in a\n> incompatible but silent way\n> \n> For 1, I think adding them to the release notes makes total sense,\n> especially if the new APIs are documented not only in source code, but\n> also on the website. Nathan his change is of this type, so I agree\n> with him it should be in the release notes.\n> \n> For 2, I'll be able to easily find the PG commit that caused the\n> compilation failure by grepping git history for the old API. So having\n> these changes in the release notes seems unnecessary.\n> \n> For 3, it would be very useful if it would be in the release notes,\n> but I think in many cases it's hard to know what commits do this. So\n> unless it's obviously going to break a bunch of extensions silently, I\n> think we don't have to add such changes to the release notes.\n\nSo, we are looking at this commit:\n\n commit b5a9b18cd0b\n Author: Thomas Munro <[email protected]>\n Date: Wed Apr 3 00:17:06 2024 +1300\n\n Provide API for streaming relation data.\n\n Introduce an abstraction allowing relation data to be accessed as a\n stream of buffers, with an implementation that is more efficient than\n the equivalent sequence of ReadBuffer() calls.\n\n Client code supplies a callback that can say which block number it wants\n next, and then consumes individual buffers one at a time from the\n stream. This division puts read_stream.c in control of how far ahead it\n can see and allows it to read clusters of neighboring blocks with\n StartReadBuffers(). 
It also issues POSIX_FADV_WILLNEED advice ahead of\n time when random access is detected.\n\n Other variants of I/O stream will be proposed in future work (for\n example to support recovery, whose LsnReadQueue device in\n xlogprefetcher.c is a distant cousin of this code and should eventually\n be replaced by this), but this basic API is sufficient for many common\n executor usage patterns involving predictable access to a single fork of\n a single relation.\n\n Several patches using this API are proposed separately.\n\n This stream concept is loosely based on ideas from Andres Freund on how\n we should pave the way for later work on asynchronous I/O.\n\nYou are right that I do mention changes specifically designed for the\nuse of extensions, but there is no mention in the commit message of its\nuse for extensions. In fact, I thought this was too low-level to be of\nuse for extensions. However, if people feel it should be added, we have\nenough time to add it.\n\nI also mention changes that are _likely_ to affect extensions, but not\nall changes that could affect extensions. An interesting idea would be\nto report all function signature changes in each major release in some\nway.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Wed, 11 Sep 2024 10:10:26 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, Sep 10, 2024 at 09:52:42AM -0700, Masahiko Sawada wrote:\n> On Mon, Sep 9, 2024 at 11:29 PM Jelte Fennema-Nio <[email protected]> wrote:\n> >\n> > On Tue, 10 Sept 2024 at 04:47, Bruce Momjian <[email protected]> wrote:\n> > > Yes. There are so many changes at the source code level it is unwise to\n> > > try and get them into the main release notes. If someone wants to\n> > > create an addendum, like was suggested for pure performance\n> > > improvements, that would make sense.\n> >\n> > I agree that the release notes cannot fit every change. But I also\n> > don't think any extension author reads the complete git commit log\n> > every release, so taking the stance that they should be seems\n> > unhelpful. And the \"Source Code\" section does exist so at some level\n> > you seem to disagree with that too. So what is the way to decide that\n> > something makes the cut for the \"Source Code\" section?\n> >\n> > I think as an extension author there are usually three types of\n> > changes that are relevant:\n> > 1. New APIs/hooks that are meant for extension authors\n> > 2. Stuff that causes my existing code to not compile anymore\n> > 3. Stuff that changes behaviour of existing APIs code in a\n> > incompatible but silent way\n> >\n> > For 1, I think adding them to the release notes makes total sense,\n> > especially if the new APIs are documented not only in source code, but\n> > also on the website. Nathan his change is of this type, so I agree\n> > with him it should be in the release notes.\n> \n> +1. I think that the increment JSON parser that is already mentioned\n> in the release note would fall in this type too; it's not a feature\n> aimed just for extension authors, but it's kind of source and internal\n> changes IMO. 
Since the DSM registry feature is described in the doc, I\n> think it would make sense to have it in the release notes and probably\n> has a link to the \"Requesting Shared Memory After Startup\" section.\n\nThis commit?\n\n\tcommit 8b2bcf3f287\n\tAuthor: Nathan Bossart <[email protected]>\n\tDate: Fri Jan 19 14:24:36 2024 -0600\n\t\n\t Introduce the dynamic shared memory registry.\n\nYes, we have time to add it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Wed, 11 Sep 2024 10:12:58 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Sep 11, 2024 at 10:12:58AM -0400, Bruce Momjian wrote:\n> On Tue, Sep 10, 2024 at 09:52:42AM -0700, Masahiko Sawada wrote:\n>> On Mon, Sep 9, 2024 at 11:29 PM Jelte Fennema-Nio <[email protected]> wrote:\n>> > For 1, I think adding them to the release notes makes total sense,\n>> > especially if the new APIs are documented not only in source code, but\n>> > also on the website. Nathan his change is of this type, so I agree\n>> > with him it should be in the release notes.\n>> \n>> +1. I think that the increment JSON parser that is already mentioned\n>> in the release note would fall in this type too; it's not a feature\n>> aimed just for extension authors, but it's kind of source and internal\n>> changes IMO. Since the DSM registry feature is described in the doc, I\n>> think it would make sense to have it in the release notes and probably\n>> has a link to the \"Requesting Shared Memory After Startup\" section.\n> \n> This commit?\n> \n> \tcommit 8b2bcf3f287\n> \tAuthor: Nathan Bossart <[email protected]>\n> \tDate: Fri Jan 19 14:24:36 2024 -0600\n> \t\n> \t Introduce the dynamic shared memory registry.\n> \n> Yes, we have time to add it.\n\nYes, that's the one.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 11 Sep 2024 09:36:35 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Hello,\n\nI noticed that these two items in the current notes are separate:\n\n<!--\nAuthor: Alvaro Herrera <[email protected]>\n2024-03-25 [374c7a229] Allow specifying an access method for partitioned tables\nAuthor: Alvaro Herrera <[email protected]>\n2024-03-28 [e2395cdbe] ALTER TABLE: rework determination of access method ID\n-->\n\n <listitem>\n <para>\n Allow specification of partitioned <link linkend=\"tableam\">table\n access methods</link> (Justin Pryzby, Soumyadeep Chakraborty,\n Michael Paquier)\n </para>\n </listitem>\n\n<!--\nAuthor: Michael Paquier <[email protected]>\n2024-03-08 [d61a6cad6] Add support for DEFAULT in ALTER TABLE .. SET ACCESS MET\n-->\n\n <listitem>\n <para>\n Add <literal>DEFAULT</literal> setting for <literal>ALTER TABLE\n .. SET ACCESS METHOD</literal> (Michael Paquier)\n </para>\n </listitem>\n\nThey are very very closely related, so I suggest they should be\ntogether as a single item. Also, the first one is somewhat strangely\nworded IMO (we don't have \"partitioned table access methods\" -- rather,\nwe have table access methods for partitioned tables). Maybe something\nlike\n\n* Improve ALTER TABLE ... 
SET ACCESS METHOD\n\n This command can now also be applied to partitioned tables, so that it\n can <link to \"https://www.postgresql.org/docs/17/sql-altertable.html#SQL-ALTERTABLE-DESC-SET-ACCESS-METHOD\">\n influence partitions created later</link>. (Justin, Soumyadeep, Michaël)\n\n In addition, it now accepts the value DEFAULT to reset a previously\n set value. (Michaël)\n\n\nThere's also a bunch of items on EXPLAIN, which could perhaps be grouped\nin a single item with sub-paras for each individual change; I'd also\nmove it to the bottom of E.1.3.2.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"This is a foot just waiting to be shot\" (Andrew Dunstan)\n\n\n", "msg_date": "Wed, 11 Sep 2024 19:50:40 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Sep 11, 2024 at 09:36:35AM -0500, Nathan Bossart wrote:\n> On Wed, Sep 11, 2024 at 10:12:58AM -0400, Bruce Momjian wrote:\n> > On Tue, Sep 10, 2024 at 09:52:42AM -0700, Masahiko Sawada wrote:\n> >> On Mon, Sep 9, 2024 at 11:29 PM Jelte Fennema-Nio <[email protected]> wrote:\n> >> > For 1, I think adding them to the release notes makes total sense,\n> >> > especially if the new APIs are documented not only in source code, but\n> >> > also on the website. Nathan his change is of this type, so I agree\n> >> > with him it should be in the release notes.\n> >> \n> >> +1. I think that the increment JSON parser that is already mentioned\n> >> in the release note would fall in this type too; it's not a feature\n> >> aimed just for extension authors, but it's kind of source and internal\n> >> changes IMO. Since the DSM registry feature is described in the doc, I\n> >> think it would make sense to have it in the release notes and probably\n> >> has a link to the \"Requesting Shared Memory After Startup\" section.\n> > \n> > This commit?\n> > \n> > \tcommit 8b2bcf3f287\n> > \tAuthor: Nathan Bossart <[email protected]>\n> > \tDate: Fri Jan 19 14:24:36 2024 -0600\n> > \t\n> > \t Introduce the dynamic shared memory registry.\n> > \n> > Yes, we have time to add it.\n> \n> Yes, that's the one.\n\nAttached patch applied, with commit URL link.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"", "msg_date": "Fri, 13 Sep 2024 16:17:31 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, Sep 13, 2024 at 04:17:31PM -0400, Bruce Momjian wrote:\n> Attached patch applied, with commit URL link.\n\nLooks good, thanks.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 13 Sep 2024 16:00:07 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Sep 11, 2024 at 07:50:40PM +0200, Álvaro Herrera wrote:\n> Hello,\n> \n> I noticed that these two items in the current notes are separate:\n> \n> <!--\n> Author: Alvaro Herrera <[email protected]>\n> 2024-03-25 [374c7a229] Allow specifying an access method for partitioned tables\n> Author: Alvaro Herrera <[email protected]>\n> 2024-03-28 [e2395cdbe] ALTER TABLE: rework determination of access method ID\n> -->\n> \n> <listitem>\n> <para>\n> Allow specification of partitioned <link linkend=\"tableam\">table\n> access methods</link> (Justin 
Pryzby, Soumyadeep Chakraborty,\n> Michael Paquier)\n> </para>\n> </listitem>\n> \n> <!--\n> Author: Michael Paquier <[email protected]>\n> 2024-03-08 [d61a6cad6] Add support for DEFAULT in ALTER TABLE .. SET ACCESS MET\n> -->\n> \n> <listitem>\n> <para>\n> Add <literal>DEFAULT</literal> setting for <literal>ALTER TABLE\n> .. SET ACCESS METHOD</literal> (Michael Paquier)\n> </para>\n> </listitem>\n> \n> They are very very closely related, so I suggest they should be\n> together as a single item. Also, the first one is somewhat strangely\n> worded IMO (we don't have \"partitioned table access methods\" -- rather,\n> we have table access methods for partitioned tables). Maybe something\n> like\n\nYes, agree, the wording needs improvement, patch attached.\n\n> * Improve ALTER TABLE ... SET ACCESS METHOD\n> \n> This command can now also be applied to partitioned tables, so that it\n> can <link to \"https://www.postgresql.org/docs/17/sql-altertable.html#SQL-ALTERTABLE-DESC-SET-ACCESS-METHOD\">\n> influence partitions created later</link>. (Justin, Soumyadeep, Michaël)\n> \n> In addition, it now accepts the value DEFAULT to reset a previously\n> set value. (Michaël)\n\nI moved the two items next to each other, but I am concerned combining\nthe partition feature with the DEFAULT features is just making it too\ncomplicated to understand.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"", "msg_date": "Fri, 13 Sep 2024 18:25:41 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Sep 11, 2024 at 07:50:40PM +0200, Álvaro Herrera wrote:\n> There's also a bunch of items on EXPLAIN, which could perhaps be grouped\n> in a single item with sub-paras for each individual change; I'd also\n> move it to the bottom of E.1.3.2.\n\nOh, I hadn't noticed I have five EXPLAIN items --- that is enough to\nmake a new section, done at:\n\n\thttps://momjian.us/pgsql_docs/release-17.html#RELEASE-17-EXPLAIN\n\nI don't think I can combine the EXPLAIN items without making them too\ncomplex.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Sat, 14 Sep 2024 09:38:46 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, 11 Sept 2024 at 16:10, Bruce Momjian <[email protected]> wrote:\n> You are right that I do mention changes specifically designed for the\n> use of extensions, but there is no mention in the commit message of its\n> use for extensions. In fact, I thought this was too low-level to be of\n> use for extensions. 
However, if people feel it should be added, we have\n> enough time to add it.\n\nAnother new API that is useful for extension authors is the following\none (I'm obviously biased since I'm the author, and I don't know if\nthere's still time):\n\ncommit 14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\nAuthor: Nathan Bossart <[email protected]>\nDate: Thu Jan 4 16:09:34 2024 -0600\n\n Add macros for looping through a List without a ListCell.\n\n Many foreach loops only use the ListCell pointer to retrieve the\n content of the cell, like so:\n\n ListCell *lc;\n\n foreach(lc, mylist)\n {\n int myint = lfirst_int(lc);\n\n ...\n }\n\n This commit adds a few convenience macros that automatically\n declare the loop variable and retrieve the current cell's contents.\n This allows us to rewrite the previous loop like this:\n\n foreach_int(myint, mylist)\n {\n ...\n }\n\n> An interesting idea would be\n> to report all function signature changes in each major release in some\n> way.\n\nI think that might be useful, but it very much depends how long that\nlist gets. If it gets too long I think authors will just try to\ncompile and only look at the ones that break for them.\n\n\n", "msg_date": "Tue, 17 Sep 2024 10:01:28 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 2024-Sep-11, Bruce Momjian wrote:\n\n> An interesting idea would be to report all function signature changes\n> in each major release in some way.\n\nHmm, extension authors are going to realize this as soon as they try to\ncompile, so it doesn't seem necessary. Having useful APIs _added_ is a\ndifferent matter, because those might help them realize that they can\nremove parts (or #ifdef-out for newer PG versions) of their code, or add\nnew features; there's no Clippit saying \"it looks like you're compiling\nfor Postgres 18, would you like to ...?\".\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The important things in the world are problems with society that we don't\nunderstand at all. The machines will become more complicated but they won't\nbe more complicated than the societies that run them.\" (Freeman Dyson)\n\n\n", "msg_date": "Tue, 17 Sep 2024 10:19:29 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Tue, Sep 17, 2024 at 10:01:28AM +0200, Jelte Fennema-Nio wrote:\n> On Wed, 11 Sept 2024 at 16:10, Bruce Momjian <[email protected]> wrote:\n> > You are right that I do mention changes specifically designed for the\n> > use of extensions, but there is no mention in the commit message of its\n> > use for extensions. In fact, I thought this was too low-level to be of\n> > use for extensions. 
However, if people feel it should be added, we have\n> > enough time to add it.\n> \n> Another new API that is useful for extension authors is the following\n> one (I'm obviously biased since I'm the author, and I don't know if\n> there's still time):\n> \n> commit 14dd0f27d7cd56ffae9ecdbe324965073d01a9ff\n> Author: Nathan Bossart <[email protected]>\n> Date: Thu Jan 4 16:09:34 2024 -0600\n> \n> Add macros for looping through a List without a ListCell.\n> \n> Many foreach loops only use the ListCell pointer to retrieve the\n> content of the cell, like so:\n> \n> ListCell *lc;\n> \n> foreach(lc, mylist)\n> {\n> int myint = lfirst_int(lc);\n> \n> ...\n> }\n> \n> This commit adds a few convenience macros that automatically\n> declare the loop variable and retrieve the current cell's contents.\n> This allows us to rewrite the previous loop like this:\n> \n> foreach_int(myint, mylist)\n> {\n> ...\n> }\n\nCan someone else comment on the idea of adding this release note item? \nI don't feel confident in my ability to evaluate this. I obviously did\nnot see it as significant the first time.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Wed, 18 Sep 2024 17:33:18 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Sep 18, 2024 at 05:33:18PM -0400, Bruce Momjian wrote:\n> On Tue, Sep 17, 2024 at 10:01:28AM +0200, Jelte Fennema-Nio wrote:\n>> Another new API that is useful for extension authors is the following\n>> one (I'm obviously biased since I'm the author, and I don't know if\n>> there's still time):\n>> \n>> Add macros for looping through a List without a ListCell.\n> \n> Can someone else comment on the idea of adding this release note item? \n> I don't feel confident in my ability to evaluate this. I obviously did\n> not see it as significant the first time.\n\nI'm not sure precisely what criteria you use to choose what goes in the\nrelease notes, but this one seems like a judgement call to me. My initial\nreaction is that it shouldn't be included, but I do see some items with a\nsimilar scope, such as \"Remove some SPI macros.\"\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 18 Sep 2024 20:09:27 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Wed, Sep 18, 2024 at 9:09 PM Nathan Bossart <[email protected]> wrote:\n> On Wed, Sep 18, 2024 at 05:33:18PM -0400, Bruce Momjian wrote:\n> > On Tue, Sep 17, 2024 at 10:01:28AM +0200, Jelte Fennema-Nio wrote:\n> >> Another new API that is useful for extension authors is the following\n> >> one (I'm obviously biased since I'm the author, and I don't know if\n> >> there's still time):\n> >>\n> >> Add macros for looping through a List without a ListCell.\n> >\n> > Can someone else comment on the idea of adding this release note item?\n> > I don't feel confident in my ability to evaluate this. I obviously did\n> > not see it as significant the first time.\n>\n> I'm not sure precisely what criteria you use to choose what goes in the\n> release notes, but this one seems like a judgement call to me. 
My initial\n> reaction is that it shouldn't be included, but I do see some items with a\n> similar scope, such as \"Remove some SPI macros.\"\n\nI wouldn't mention either this or \"Remove some unused SPI macros\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Sep 2024 12:23:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, Sep 19, 2024 at 12:23:21PM -0400, Robert Haas wrote:\n> On Wed, Sep 18, 2024 at 9:09 PM Nathan Bossart <[email protected]> wrote:\n> > On Wed, Sep 18, 2024 at 05:33:18PM -0400, Bruce Momjian wrote:\n> > > On Tue, Sep 17, 2024 at 10:01:28AM +0200, Jelte Fennema-Nio wrote:\n> > >> Another new API that is useful for extension authors is the following\n> > >> one (I'm obviously biased since I'm the author, and I don't know if\n> > >> there's still time):\n> > >>\n> > >> Add macros for looping through a List without a ListCell.\n> > >\n> > > Can someone else comment on the idea of adding this release note item?\n> > > I don't feel confident in my ability to evaluate this. I obviously did\n> > > not see it as significant the first time.\n> >\n> > I'm not sure precisely what criteria you use to choose what goes in the\n> > release notes, but this one seems like a judgement call to me. My initial\n> > reaction is that it shouldn't be included, but I do see some items with a\n> > similar scope, such as \"Remove some SPI macros.\"\n> \n> I wouldn't mention either this or \"Remove some unused SPI macros\".\n\nI mentioned the SPI macros because that could lead to breakage, and\nthere might be applications, as well as extensions, that use it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Thu, 19 Sep 2024 13:04:40 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, Sep 19, 2024 at 1:04 PM Bruce Momjian <[email protected]> wrote:\n> I mentioned the SPI macros because that could lead to breakage, and\n> there might be applications, as well as extensions, that use it.\n\nSure, this is all a judgement call. I don't think it's particularly\nlikely that many people are relying on those macros, though, and if\nthey are, they will mostly likely find out that they're gone when they\ntry to compile, rather than from reading the release notes. Likewise,\nI feel that the new list iteration macros are both optional and minor,\nso there's not really a reason to tell people about them. But opinions\nwill vary, and that's fine. I just mentioned my opinion since you\nseemed to be asking. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Sep 2024 13:23:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 5/9/24 12:03 AM, Bruce Momjian wrote:\r\n> I have committed the first draft of the PG 17 release notes; you can\r\n> see the results here:\r\n> \r\n> \trelease-17: 188\r\n> \r\n> I welcome feedback. For some reason it was an easier job than usual.\r\n\r\nAttached is a proposal for the major features section. This borrows from \r\nthe release announcement draft[1] and lists out features and themes that \r\nhave broad user impact. 
This was a bit challenging for this release, \r\nbecause there are a lot of great features in PG17 that add up to a very \r\nspecial release.\r\n\r\nFeedback welcome.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://git.postgresql.org/gitweb/?p=press.git;a=blob;f=releases/17/release.en.md;hb=HEAD", "msg_date": "Fri, 20 Sep 2024 10:02:25 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, 2024-09-20 at 10:02 -0400, Jonathan S. Katz wrote:\n> Attached is a proposal for the major features section. This borrows from \n> the release announcement draft[1] and lists out features and themes that \n> have broad user impact. This was a bit challenging for this release, \n> because there are a lot of great features in PG17 that add up to a very \n> special release.\n> \n> Feedback welcome.\n\nI would have added the platform-independent binary collation provider.\nAnd perhaps \"pg_createsubscriber\": that can be a game-changer for setting\nup logical replication.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 20 Sep 2024 18:55:56 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On 9/20/24 12:55 PM, Laurenz Albe wrote:\r\n> On Fri, 2024-09-20 at 10:02 -0400, Jonathan S. Katz wrote:\r\n>> Attached is a proposal for the major features section. This borrows from\r\n>> the release announcement draft[1] and lists out features and themes that\r\n>> have broad user impact. This was a bit challenging for this release,\r\n>> because there are a lot of great features in PG17 that add up to a very\r\n>> special release.\r\n>>\r\n>> Feedback welcome.\r\n> \r\n> I would have added the platform-independent binary collation provider.\r\n> And perhaps \"pg_createsubscriber\": that can be a game-changer for setting\r\n> up logical replication.\r\n\r\nI was on the fence about that, mostly because it'd make that sentence \r\ntoo much of a mouthful, but I do agree.\r\n\r\nIIRC (didn't get to check) we did have a precedent for sublists in the \r\nmajor features, so I broke this one up. Please see attached.\r\n\r\nJonathan", "msg_date": "Fri, 20 Sep 2024 13:47:43 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, Sep 20, 2024 at 01:47:43PM -0400, Jonathan Katz wrote:\n> On 9/20/24 12:55 PM, Laurenz Albe wrote:\n> > On Fri, 2024-09-20 at 10:02 -0400, Jonathan S. Katz wrote:\n> > > Attached is a proposal for the major features section. This borrows from\n> > > the release announcement draft[1] and lists out features and themes that\n> > > have broad user impact. This was a bit challenging for this release,\n> > > because there are a lot of great features in PG17 that add up to a very\n> > > special release.\n> > > \n> > > Feedback welcome.\n> > \n> > I would have added the platform-independent binary collation provider.\n> > And perhaps \"pg_createsubscriber\": that can be a game-changer for setting\n> > up logical replication.\n> \n> I was on the fence about that, mostly because it'd make that sentence too\n> much of a mouthful, but I do agree.\n> \n> IIRC (didn't get to check) we did have a precedent for sublists in the major\n> features, so I broke this one up. 
Please see attached.\n\nPatch applied to PG 17.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:00:24 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Patch applied to PG 17.\n\nI don't see a push?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:05:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, 2024-09-20 at 13:47 -0400, Jonathan S. Katz wrote:\n> Please see attached.\n\n> + <listitem>\n> + <para>\n> + Various query performance improvements, including to sequential reads\n> + using streaming I/O, write throughput under high concurrency, and\n> + searches over multiple values in a <link linkend=\"btree\">btree</link>\n> + index.\n> + </para>\n> + </listitem>\n\nPerhaps that last part could be \"and searches over IN-lists in a b-tree index\".\nIt might be technically less correct, but I'd expect that it gives more people\nthe right idea.\n\n> + <para>\n> + <link\n> + linkend=\"app-pgcreatesubscriber\"><application>pg_createsubscriber</application></link>,\n> + a utility that logical replicas from physical standbys\n> + </para>\n\nThere's a verb missing: \"a utility that *creates* logical replicas...\"\n\n> + <para>\n> + <link\n> + linkend=\"pgupgrade\"><application>pg_upgrade</application></link> now\n> + preserves replication slots on both publishers and subscribers\n> + </para>\n\nI wonder if we should omit \"on both publishers and subscribers\".\nIt preserves replication slots anywhere, right?\n\n> + <listitem>\n> + <para>\n> + New client-side connection option, <link\n> + linkend=\"libpq-connect-sslnegotiation\"><literal>sslnegotiation=direct</literal></link>,\n> + that allows direct TLS handshakes that avoids a round-trip negotation.\n> + </para>\n> + </listitem>\n\nIt should be \"that avoid\". Plural.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 20 Sep 2024 22:13:57 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Fri, Sep 20, 2024 at 04:05:11PM -0400, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Patch applied to PG 17.\n> \n> I don't see a push?\n\nPush was delayed because my test script found some uncommitted files due\nto earlier testing. 
Should be fine now.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:20:04 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> [ assorted corrections ]\n\nI fixed a couple of these before seeing your message.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:21:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, 9 May 2024 00:03:50 -0400\nBruce Momjian <[email protected]> wrote:\n\n> I have committed the first draft of the PG 17 release notes; you can\n> see the results here:\n\nI propose to improve the following description in \"Migration to Version 17\"\nsection by adding CREATE INDEX and CREATE MATERIALIZED VIEW into the command list.\n\n <para>\n Change functions to use a safe <xref linkend=\"guc-search-path\"/>\n during maintenance operations (Jeff Davis)\n <ulink url=\"&commit_baseurl;2af07e2f7\">&sect;</ulink>\n </para>\n\nIt is suggested in the thread [1] that users could not notice the behaviour\nof CREATE INDEX is changed because the explicit command name is not listed in\nthe release notes. So, I think it is better to add CREATE INDEX and\nCREATE MATERIALIZED VIEW into the command list. \n\nI've attached a patch.\n\n[1] https://www.postgresql.org/message-id/flat/20240926125110.67e52f4f7a388af539367213%40sraoss.co.jp#71d4b5d6c842ba038e1e4e99c110b688\n\n> \n> \thttps://momjian.us/pgsql_docs/release-17.html\n> \n> It will be improved until the final release. The item count is 188,\n> which is similar to recent releases:\n> \n> \trelease-10: 189\n> \trelease-11: 170\n> \trelease-12: 180\n> \trelease-13: 178\n> \trelease-14: 220\n> \trelease-15: 184\n> \trelease-16: 206\n> \trelease-17: 188\n> \n> I welcome feedback. For some reason it was an easier job than usual.\n> \n> -- \n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Only you can decide what is important to you.\n> \n> \n\n\n-- \nYugo Nagata <[email protected]>", "msg_date": "Thu, 26 Sep 2024 14:19:21 +0900", "msg_from": "Yugo Nagata <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, Sep 21, 2024 at 1:50 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Fri, Sep 20, 2024 at 04:05:11PM -0400, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > Patch applied to PG 17.\n> >\n> > I don't see a push?\n>\n> Push was delayed because my test script found some uncommitted files due\n> to earlier testing. Should be fine now.\n>\n\n<para>\n <link\n linkend=\"app-pgcreatesubscriber\"><application>pg_createsubscriber</application></link>,\n a utility that creates logical replicas from physical standbys\n </para>\n\nThis description is okay but according to me, the more compelling use\ncase is that this new utility helps to allow online upgrades of\nphysical replication setup as explained in the blog [1]. 
See the\nsection: \"Upgrading Streaming (Physical) Replication Setup\".\n\n </listitem>\n <listitem>\n <para>\n <link\n linkend=\"pgupgrade\"><application>pg_upgrade</application></link> now\n preserves replication slots on both publishers and subscribers\n </para>\n\nIt is better to write the above statement as:\n\"pg_upgrade</application></link> now preserves replication slots on\npublishers and full subscription's state on subscribers\". This is\nbecause replication slots are preserved on publishers. The subscribers\npreserve the subscription state.\n\n[1] - http://amitkapila16.blogspot.com/2024/09/online-upgrading-logical-and-physical.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 26 Sep 2024 15:08:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, Sep 26, 2024 at 02:19:21PM +0900, Yugo Nagata wrote:\n> On Thu, 9 May 2024 00:03:50 -0400\n> Bruce Momjian <[email protected]> wrote:\n> \n> > I have committed the first draft of the PG 17 release notes; you can\n> > see the results here:\n> \n> I propose to improve the following description in \"Migration to Version 17\"\n> section by adding CREATE INDEX and CREATE MATERIALIZED VIEW into the command list.\n> \n> <para>\n> Change functions to use a safe <xref linkend=\"guc-search-path\"/>\n> during maintenance operations (Jeff Davis)\n> <ulink url=\"&commit_baseurl;2af07e2f7\">&sect;</ulink>\n> </para>\n> \n> It is suggested in the thread [1] that users could not notice the behaviour\n> of CREATE INDEX is changed because the explicit command name is not listed in\n> the release notes. So, I think it is better to add CREATE INDEX and\n> CREATE MATERIALIZED VIEW into the command list. \n> \n> I've attached a patch.\n\nIt this a valid change? Seems so.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Sat, 28 Sep 2024 21:19:11 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, Sep 26, 2024 at 03:08:52PM +0530, Amit Kapila wrote:\n> On Sat, Sep 21, 2024 at 1:50 AM Bruce Momjian <[email protected]> wrote:\n> >\n> > On Fri, Sep 20, 2024 at 04:05:11PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <[email protected]> writes:\n> > > > Patch applied to PG 17.\n> > >\n> > > I don't see a push?\n> >\n> > Push was delayed because my test script found some uncommitted files due\n> > to earlier testing. Should be fine now.\n> >\n> \n> <para>\n> <link\n> linkend=\"app-pgcreatesubscriber\"><application>pg_createsubscriber</application></link>,\n> a utility that creates logical replicas from physical standbys\n> </para>\n> \n> This description is okay but according to me, the more compelling use\n> case is that this new utility helps to allow online upgrades of\n> physical replication setup as explained in the blog [1]. 
See the\n> section: \"Upgrading Streaming (Physical) Replication Setup\".\n> \n> </listitem>\n> <listitem>\n> <para>\n> <link\n> linkend=\"pgupgrade\"><application>pg_upgrade</application></link> now\n> preserves replication slots on both publishers and subscribers\n> </para>\n> \n> It is better to write the above statement as:\n> \"pg_upgrade</application></link> now preserves replication slots on\n> publishers and full subscription's state on subscribers\". This is\n> because replication slots are preserved on publishers. The subscribers\n> preserve the subscription state.\n\nSo, as I understand it, this preservation only happens when the _old_\nPostgres version is 17+. Do we want to try and explain that in the\nPostgres 17 release notes?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Sat, 28 Sep 2024 21:20:22 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sun, Sep 29, 2024 at 6:50 AM Bruce Momjian <[email protected]> wrote:\n>\n> On Thu, Sep 26, 2024 at 03:08:52PM +0530, Amit Kapila wrote:\n> > On Sat, Sep 21, 2024 at 1:50 AM Bruce Momjian <[email protected]> wrote:\n> > >\n> > > On Fri, Sep 20, 2024 at 04:05:11PM -0400, Tom Lane wrote:\n> > > > Bruce Momjian <[email protected]> writes:\n> > > > > Patch applied to PG 17.\n> > > >\n> > > > I don't see a push?\n> > >\n> > > Push was delayed because my test script found some uncommitted files due\n> > > to earlier testing. Should be fine now.\n> > >\n> >\n> > <para>\n> > <link\n> > linkend=\"app-pgcreatesubscriber\"><application>pg_createsubscriber</application></link>,\n> > a utility that creates logical replicas from physical standbys\n> > </para>\n> >\n> > This description is okay but according to me, the more compelling use\n> > case is that this new utility helps to allow online upgrades of\n> > physical replication setup as explained in the blog [1]. See the\n> > section: \"Upgrading Streaming (Physical) Replication Setup\".\n> >\n> > </listitem>\n> > <listitem>\n> > <para>\n> > <link\n> > linkend=\"pgupgrade\"><application>pg_upgrade</application></link> now\n> > preserves replication slots on both publishers and subscribers\n> > </para>\n> >\n> > It is better to write the above statement as:\n> > \"pg_upgrade</application></link> now preserves replication slots on\n> > publishers and full subscription's state on subscribers\". This is\n> > because replication slots are preserved on publishers. The subscribers\n> > preserve the subscription state.\n>\n> So, as I understand it, this preservation only happens when the _old_\n> Postgres version is 17+.\n>\n\nYes.\n\n> Do we want to try and explain that in the\n> Postgres 17 release notes?\n>\n\nIt would be good if we can capture that information without bloating\nthe release document. 
However, this information is already present in\npg_upgrade docs, so users have a way to know the same even if we can't\nmention it in the release notes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 29 Sep 2024 18:33:29 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Sat, 28 Sep 2024 21:19:11 -0400\nBruce Momjian <[email protected]> wrote:\n\n> On Thu, Sep 26, 2024 at 02:19:21PM +0900, Yugo Nagata wrote:\n> > On Thu, 9 May 2024 00:03:50 -0400\n> > Bruce Momjian <[email protected]> wrote:\n> > \n> > > I have committed the first draft of the PG 17 release notes; you can\n> > > see the results here:\n> > \n> > I propose to improve the following description in \"Migration to Version 17\"\n> > section by adding CREATE INDEX and CREATE MATERIALIZED VIEW into the command list.\n> > \n> > <para>\n> > Change functions to use a safe <xref linkend=\"guc-search-path\"/>\n> > during maintenance operations (Jeff Davis)\n> > <ulink url=\"&commit_baseurl;2af07e2f7\">&sect;</ulink>\n> > </para>\n> > \n> > It is suggested in the thread [1] that users could not notice the behaviour\n> > of CREATE INDEX is changed because the explicit command name is not listed in\n> > the release notes. So, I think it is better to add CREATE INDEX and\n> > CREATE MATERIALIZED VIEW into the command list. \n> > \n> > I've attached a patch.\n> \n> It this a valid change? Seems so.\n\nYes. This change on CREATE INDEX was introduced by 2af07e2f7 together with\nother commands, but it was missed to be mentioned in the commit message\nalthough the description was added to the documentation.\n\nThe change on CEATE MATERIALIZED VIEW was introduced by a separate commit\nb4da732fd, since which the REFRESH logic is used when creating a matview.\nShould we add here a link to that commit, too?\n\nRegards,\nYugo Nagata\n\n> -- \n> Bruce Momjian <[email protected]> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> When a patient asks the doctor, \"Am I going to die?\", he means \n> \"Am I going to die soon?\"\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Mon, 30 Sep 2024 14:20:21 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" } ]
[ { "msg_contents": "In [0] I had noticed that we have no automated verification that global \nvariables are declared in header files. (For global functions, we have \nthis through -Wmissing-prototypes.) As I mentioned there, I discovered \nthe Clang compiler option -Wmissing-variable-declarations, which does \nexactly that. Clang has supported this for quite some time, and GCC 14, \nwhich was released a few days ago, now also supports it. I went and \ninstalled this option into the standard build flags and cleaned up the \nwarnings it found, which revealed a number of interesting things.\n\nI think these checks are important. We have been trying to mark global \nvariables as PGDLLIMPORT consistently, but that only catches variables \ndeclared in header files. Also, a threading project would surely \nbenefit from global variables (thread-local variables?) having \nconsistent declarations.\n\nAttached are patches organized by sub-topic. The most dubious stuff is \nin patches 0006 and 0007. A bunch of GUC-related variables are not in \nheader files but are pulled in via ad-hoc extern declarations. I can't \nrecognize an intentional scheme there, probably just done for \nconvenience or copied from previous practice. These should be organized \ninto appropriate header files.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/[email protected]", "msg_date": "Thu, 9 May 2024 11:23:32 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "consider -Wmissing-variable-declarations" }, { "msg_contents": "On 09/05/2024 12:23, Peter Eisentraut wrote:\n> In [0] I had noticed that we have no automated verification that global\n> variables are declared in header files. (For global functions, we have\n> this through -Wmissing-prototypes.) As I mentioned there, I discovered\n> the Clang compiler option -Wmissing-variable-declarations, which does\n> exactly that. Clang has supported this for quite some time, and GCC 14,\n> which was released a few days ago, now also supports it. I went and\n> installed this option into the standard build flags and cleaned up the\n> warnings it found, which revealed a number of interesting things.\n\nNice! More checks like this is good in general.\n\n> Attached are patches organized by sub-topic. The most dubious stuff is\n> in patches 0006 and 0007. A bunch of GUC-related variables are not in\n> header files but are pulled in via ad-hoc extern declarations. I can't\n> recognize an intentional scheme there, probably just done for\n> convenience or copied from previous practice. These should be organized\n> into appropriate header files.\n\n+1 for moving all these to header files. Also all the \"other stuff\" in \npatch 0007.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 10 May 2024 12:53:10 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "On 10.05.24 11:53, Heikki Linnakangas wrote:\n> On 09/05/2024 12:23, Peter Eisentraut wrote:\n>> In [0] I had noticed that we have no automated verification that global\n>> variables are declared in header files.  (For global functions, we have\n>> this through -Wmissing-prototypes.)  As I mentioned there, I discovered\n>> the Clang compiler option -Wmissing-variable-declarations, which does\n>> exactly that.  Clang has supported this for quite some time, and GCC 14,\n>> which was released a few days ago, now also supports it.  
I went and\n>> installed this option into the standard build flags and cleaned up the\n>> warnings it found, which revealed a number of interesting things.\n> \n> Nice! More checks like this is good in general.\n> \n>> Attached are patches organized by sub-topic.  The most dubious stuff is\n>> in patches 0006 and 0007.  A bunch of GUC-related variables are not in\n>> header files but are pulled in via ad-hoc extern declarations.  I can't\n>> recognize an intentional scheme there, probably just done for\n>> convenience or copied from previous practice.  These should be organized\n>> into appropriate header files.\n> \n> +1 for moving all these to header files. Also all the \"other stuff\" in \n> patch 0007.\n\nI have found a partial explanation for the \"other stuff\". We have in \nlaunch_backend.c:\n\n/*\n * The following need to be available to the save/restore_backend_variables\n * functions. They are marked NON_EXEC_STATIC in their home modules.\n */\nextern slock_t *ShmemLock;\nextern slock_t *ProcStructLock;\nextern PGPROC *AuxiliaryProcs;\nextern PMSignalData *PMSignalState;\nextern pg_time_t first_syslogger_file_time;\nextern struct bkend *ShmemBackendArray;\nextern bool redirection_done;\n\nSo these are notionally static variables that had to be sneakily \nexported for the purposes of EXEC_BACKEND.\n\n(This probably also means my current patch set won't work cleanly on \nEXEC_BACKEND builds. I'll need to check that further.)\n\nHowever, it turns out that that comment is not completely true. \nShmemLock, ShmemBackendArray, and redirection_done are not in fact \nNON_EXEC_STATIC. I think they probably once were, but then they were \nneeded elsewhere and people thought, if launch_backend.c (formerly \npostmaster.c) gets at them via its own extern declaration, then I will \ndo that too.\n\nShmemLock has been like that for a longer time, but ShmemBackendArray \nand redirection_done are new like that in PG17, probably from all the \npostmaster.c refactoring.\n\nShmemLock and redirection_done have now escaped for wider use and should \nbe in header files, as my patches are proposing.\n\nShmemBackendArray only exists if EXEC_BACKEND, so it's fine, but the \ncomment is slightly misleading. Maybe sticking a NON_EXEC_STATIC onto \nShmemBackendArray, even though it's useless, would make this more \nconsistent.\n\n\n\n", "msg_date": "Tue, 14 May 2024 08:36:15 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "Here is an updated patch set. I have implemented proper solutions for \nthe various hacks in the previous patch set. So this patch set should \nnow be ready for proper consideration.\n\nThe way I have organized it here is that patches 0002 through 0008 \nshould be improvements in their own right.\n\nThe remaining two patches 0009 and 0010 are workarounds that are just \nnecessary to satisfy -Wmissing-variable-declarations. I haven't made up \nmy mind if I'd want to take the bison patch 0010 like this and undo it \nlater if we move to pure parsers everywhere, or instead wait for the \npure parsers to arrive before we install -Wmissing-variable-declarations \nfor everyone.\n\nObviously, people might also have opinions on some details of where \nexactly to put the declarations etc. 
I have tried to follow existing \npatterns as much as possible, but those are also not necessarily great \nin all cases.", "msg_date": "Tue, 18 Jun 2024 09:41:55 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "Hi,\n\n+many for doing this in principle\n\n\n> -const char *EAN13_range[][2] = {\n> +static const char *EAN13_range[][2] = {\n> \t{\"000\", \"019\"},\t\t\t\t/* GS1 US */\n> \t{\"020\", \"029\"},\t\t\t\t/* Restricted distribution (MO defined) */\n> \t{\"030\", \"039\"},\t\t\t\t/* GS1 US */\n\n> -const char *ISBN_range[][2] = {\n> +static const char *ISBN_range[][2] = {\n> \t{\"0-00\", \"0-19\"},\n> \t{\"0-200\", \"0-699\"},\n> \t{\"0-7000\", \"0-8499\"},\n> @@ -967,7 +967,7 @@ const char *ISBN_range[][2] = {\n> */\n\nI think these actually ought be \"static const char *const\" - right now the\ntable is mutable, unless this day ends in *day and I've confused myself with C\nsyntax again.\n\n\n\n\n> /* Hook to check passwords in CreateRole() and AlterRole() */\n> check_password_hook_type check_password_hook = NULL;\n> diff --git a/src/backend/postmaster/launch_backend.c b/src/backend/postmaster/launch_backend.c\n> index bdfa238e4fe..bb1b0ac2b9c 100644\n> --- a/src/backend/postmaster/launch_backend.c\n> +++ b/src/backend/postmaster/launch_backend.c\n> @@ -176,7 +176,7 @@ typedef struct\n> \tbool\t\tshmem_attach;\n> } child_process_kind;\n> \n> -child_process_kind child_process_kinds[] = {\n> +static child_process_kind child_process_kinds[] = {\n> \t[B_INVALID] = {\"invalid\", NULL, false},\n> \n> \t[B_BACKEND] = {\"backend\", BackendMain, true},\n\nThis really ought to be const as well and is new. Unless somebody protests\nI'm going to make it so soon.\n\n\nStructs like these, containing pointers, make for nice helpers in\nexploitation. We shouldn't make it easier by unnecessarily making them\nmutable.\n\n\n> diff --git a/src/bin/pg_archivecleanup/pg_archivecleanup.c b/src/bin/pg_archivecleanup/pg_archivecleanup.c\n> index 07bf356b70c..5a124385b7c 100644\n> --- a/src/bin/pg_archivecleanup/pg_archivecleanup.c\n> +++ b/src/bin/pg_archivecleanup/pg_archivecleanup.c\n> @@ -19,17 +19,18 @@\n> #include \"common/logging.h\"\n> #include \"getopt_long.h\"\n> \n> -const char *progname;\n> +static const char *progname;\n\nHm, this one I'm not so sure about. The backend version is explicitly globally\nvisible, and I don't see why we shouldn't do the same for other binaries.\n\n\n\n> From d89312042eb76c879d699380a5e2ed0bc7956605 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <[email protected]>\n> Date: Sun, 16 Jun 2024 23:52:06 +0200\n> Subject: [PATCH v2 05/10] Fix warnings from -Wmissing-variable-declarations\n> under EXEC_BACKEND\n> \n> The NON_EXEC_STATIC variables need a suitable declaration in a header\n> file under EXEC_BACKEND.\n> \n> Also fix the inconsistent application of the volatile qualifier for\n> PMSignalState, which was revealed by this change.\n\nI'm very very unenthused about adding volatile to more places. It's rarely\ncorrect and often slow. 
But I guess this doesn't really make it any worse.\n\n\n> +#ifdef TRACE_SYNCSCAN\n> +#include \"access/syncscan.h\"\n> +#endif\n\nI'd just include it unconditionally.\n\n\n\n\n> From f99c8712ff3dc2156c3e437cfa14f1f1a7f09079 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <[email protected]>\n> Date: Wed, 8 May 2024 13:49:37 +0200\n> Subject: [PATCH v2 09/10] Fix -Wmissing-variable-declarations warnings for\n> float.c special case\n> \n> Discussion: https://www.postgresql.org/message-id/flat/[email protected]\n> ---\n> src/backend/utils/adt/float.c | 5 +++++\n> 1 file changed, 5 insertions(+)\n> \n> diff --git a/src/backend/utils/adt/float.c b/src/backend/utils/adt/float.c\n> index cbbb8aecafc..bf047ee1b4c 100644\n> --- a/src/backend/utils/adt/float.c\n> +++ b/src/backend/utils/adt/float.c\n> @@ -56,6 +56,11 @@ static float8 cot_45 = 0;\n> * compiler to know that, else it might try to precompute expressions\n> * involving them. See comments for init_degree_constants().\n> */\n> +extern float8 degree_c_thirty;\n> +extern float8 degree_c_forty_five;\n> +extern float8 degree_c_sixty;\n> +extern float8 degree_c_one_half;\n> +extern float8 degree_c_one;\n> float8\t\tdegree_c_thirty = 30.0;\n> float8\t\tdegree_c_forty_five = 45.0;\n> float8\t\tdegree_c_sixty = 60.0;\n\nYikes, this is bad code. Relying on extern to have effects like this will just\nbreak with lto. But not the responsibility of this patch.\n\n\n> From 649e8086df1f175e843b26cad41a698c8c074c09 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <[email protected]>\n> Date: Wed, 8 May 2024 13:49:37 +0200\n> Subject: [PATCH v2 10/10] Fix -Wmissing-variable-declarations warnings in\n> bison code\n> \n> Add extern declarations for global variables produced by bison.\n\n:(\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 08:02:38 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "I have committed the first few of these. (The compiler warning flag \nitself is not activated yet.) This should allow you to proceed with \nyour patches that add various const qualifiers. I'll come back to the \nrest later.\n\n\nOn 18.06.24 17:02, Andres Freund wrote:\n>> diff --git a/src/bin/pg_archivecleanup/pg_archivecleanup.c b/src/bin/pg_archivecleanup/pg_archivecleanup.c\n>> index 07bf356b70c..5a124385b7c 100644\n>> --- a/src/bin/pg_archivecleanup/pg_archivecleanup.c\n>> +++ b/src/bin/pg_archivecleanup/pg_archivecleanup.c\n>> @@ -19,17 +19,18 @@\n>> #include \"common/logging.h\"\n>> #include \"getopt_long.h\"\n>> \n>> -const char *progname;\n>> +static const char *progname;\n> \n> Hm, this one I'm not so sure about. The backend version is explicitly globally\n> visible, and I don't see why we shouldn't do the same for other binaries.\n\nWe have in various programs a mix of progname with static linkage and \nwith external linkage. 
AFAICT, this is merely determined by whether \nthere are multiple source files that need it, not by some higher-level \nscheme.\n\n>> From d89312042eb76c879d699380a5e2ed0bc7956605 Mon Sep 17 00:00:00 2001\n>> From: Peter Eisentraut <[email protected]>\n>> Date: Sun, 16 Jun 2024 23:52:06 +0200\n>> Subject: [PATCH v2 05/10] Fix warnings from -Wmissing-variable-declarations\n>> under EXEC_BACKEND\n>>\n>> The NON_EXEC_STATIC variables need a suitable declaration in a header\n>> file under EXEC_BACKEND.\n>>\n>> Also fix the inconsistent application of the volatile qualifier for\n>> PMSignalState, which was revealed by this change.\n> \n> I'm very very unenthused about adding volatile to more places. It's rarely\n> correct and often slow. But I guess this doesn't really make it any worse.\n\nYeah, it's not always clear with volatile, but in this one case it's \nprobably better to keep it consistent rather than having to cast it away \nor something.\n\n>> +#ifdef TRACE_SYNCSCAN\n>> +#include \"access/syncscan.h\"\n>> +#endif\n> \n> I'd just include it unconditionally.\n\nMy thinking here was that if we apply an include file cleaner (like \niwyu) sometime, it would flag this include as unused. This way it's \nclearer what it's for.\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 08:30:49 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "I have committed all of the fixes that I had previously posted, but \nbefore actually activating the warning option, I found another small \nhiccup with the Bison files.\n\nBefore Bison 3.4, the generated parser implementation files run afoul of \n-Wmissing-variable-declarations (in spite of commit ab61c40bfa2) because \ndeclarations for yylval and possibly yylloc are missing. The generated \nheader files contain an extern declaration, but the implementation files \ndon't include the header files. Since Bison 3.4, the generated \nimplementation files automatically include the generated header files, \nso then it works.\n\nTo make this work with older Bison versions as well, I made a patch to \ninclude the generated header file from the .y file.\n\n(With older Bison versions, the generated implementation file contains \neffectively a copy of the header file pasted in, so including the header \nfile is redundant. 
But we know this works anyway because the core \ngrammar uses this arrangement already.)", "msg_date": "Fri, 26 Jul 2024 11:07:25 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "On 26.07.24 11:07, Peter Eisentraut wrote:\n> I have committed all of the fixes that I had previously posted, but \n> before actually activating the warning option, I found another small \n> hiccup with the Bison files.\n\nThis has all been committed now.\n\n\n\n", "msg_date": "Sat, 3 Aug 2024 14:15:14 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This has all been committed now.\n\nVarious buildfarm animals are complaining about\n\ng++ -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -Wmissing-variable-declarations -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -Wno-cast-function-type-strict -g -O2 -Wno-deprecated-declarations -fPIC -fvisibility=hidden -shared -o llvmjit.so llvmjit.o llvmjit_error.o llvmjit_inline.o llvmjit_wrap.o llvmjit_deform.o llvmjit_expr.o -L../../../../src/port -L../../../../src/common -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/home/centos/17-lancehead/buildroot/HEAD/inst/lib',--enable-new-dtags -fvisibility=hidden -lLLVM-17 \ng++: error: unrecognized command line option \\342\\200\\230-Wmissing-variable-declarations\\342\\200\\231; did you mean \\342\\200\\230-Wmissing-declarations\\342\\200\\231?\nmake[2]: *** [../../../../src/Makefile.shlib:261: llvmjit.so] Error 1\n\nIt looks like we are passing CFLAGS not CXXFLAGS to this particular\ng++ invocation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 03 Aug 2024 16:46:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "On 03.08.24 22:46, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> This has all been committed now.\n> \n> Various buildfarm animals are complaining about\n> \n> g++ -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -Wmissing-variable-declarations -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -Wno-cast-function-type-strict -g -O2 -Wno-deprecated-declarations -fPIC -fvisibility=hidden -shared -o llvmjit.so llvmjit.o llvmjit_error.o llvmjit_inline.o llvmjit_wrap.o llvmjit_deform.o llvmjit_expr.o -L../../../../src/port -L../../../../src/common -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/home/centos/17-lancehead/buildroot/HEAD/inst/lib',--enable-new-dtags -fvisibility=hidden -lLLVM-17\n> g++: error: unrecognized command line option \\342\\200\\230-Wmissing-variable-declarations\\342\\200\\231; did you mean \\342\\200\\230-Wmissing-declarations\\342\\200\\231?\n> make[2]: *** [../../../../src/Makefile.shlib:261: llvmjit.so] Error 1\n> \n> It looks like we are passing CFLAGS not CXXFLAGS to this particular\n> g++ invocation.\n\nChanging this seems to have done the 
trick.\n\n\n\n", "msg_date": "Mon, 5 Aug 2024 07:40:39 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "On Wed, Jun 19, 2024 at 3:02 AM Andres Freund <[email protected]> wrote:\n> > -const char *EAN13_range[][2] = {\n> > +static const char *EAN13_range[][2] = {\n> > {\"000\", \"019\"}, /* GS1 US */\n> > {\"020\", \"029\"}, /* Restricted distribution (MO defined) */\n> > {\"030\", \"039\"}, /* GS1 US */\n>\n> > -const char *ISBN_range[][2] = {\n> > +static const char *ISBN_range[][2] = {\n> > {\"0-00\", \"0-19\"},\n> > {\"0-200\", \"0-699\"},\n> > {\"0-7000\", \"0-8499\"},\n> > @@ -967,7 +967,7 @@ const char *ISBN_range[][2] = {\n> > */\n\nFYI these ones generate -Wunused-variable warnings from headerscheck\non CI, though it doesn't fail the task. Hmm, these aren't really\nheaders, are they?\n\n\n", "msg_date": "Wed, 28 Aug 2024 15:31:13 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: consider -Wmissing-variable-declarations" }, { "msg_contents": "On 28.08.24 05:31, Thomas Munro wrote:\n> On Wed, Jun 19, 2024 at 3:02 AM Andres Freund <[email protected]> wrote:\n>>> -const char *EAN13_range[][2] = {\n>>> +static const char *EAN13_range[][2] = {\n>>> {\"000\", \"019\"}, /* GS1 US */\n>>> {\"020\", \"029\"}, /* Restricted distribution (MO defined) */\n>>> {\"030\", \"039\"}, /* GS1 US */\n>>\n>>> -const char *ISBN_range[][2] = {\n>>> +static const char *ISBN_range[][2] = {\n>>> {\"0-00\", \"0-19\"},\n>>> {\"0-200\", \"0-699\"},\n>>> {\"0-7000\", \"0-8499\"},\n>>> @@ -967,7 +967,7 @@ const char *ISBN_range[][2] = {\n>>> */\n> \n> FYI these ones generate -Wunused-variable warnings from headerscheck\n> on CI, though it doesn't fail the task. Hmm, these aren't really\n> headers, are they?\n\nYes, it looks like these ought to be excluded from checking:\n\ndiff --git a/src/tools/pginclude/headerscheck \nb/src/tools/pginclude/headerscheck\nindex 436e2b92a33..3fc737d2cc1 100755\n--- a/src/tools/pginclude/headerscheck\n+++ b/src/tools/pginclude/headerscheck\n@@ -138,6 +138,12 @@ do\n test \"$f\" = src/pl/tcl/pltclerrcodes.h && continue\n\n # Also not meant to be included standalone.\n+ test \"$f\" = contrib/isn/EAN13.h && continue\n+ test \"$f\" = contrib/isn/ISBN.h && continue\n+ test \"$f\" = contrib/isn/ISMN.h && continue\n+ test \"$f\" = contrib/isn/ISSN.h && continue\n+ test \"$f\" = contrib/isn/UPC.h && continue\n+\n test \"$f\" = src/include/common/unicode_nonspacing_table.h && continue\n test \"$f\" = src/include/common/unicode_east_asian_fw_table.h && \ncontinue\n\n\n\n", "msg_date": "Fri, 30 Aug 2024 09:27:05 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: consider -Wmissing-variable-declarations" } ]
[ { "msg_contents": "Hi everyone, I just installed Postgres and pg_tle extension as I was\nlooking to contribute to pg_tle.\n\nSomehow, I am unable to update the shared_preload_libraries. It feels like\nALTER has happened but the SPL value is not updated:\n\n> test=# show shared_preload_libraries;\n> shared_preload_libraries\n> --------------------------\n>\n> (1 row)\n>\n> test=# ALTER SYSTEM SET shared_preload_libraries TO 'pg_tle';\n> ALTER SYSTEM\n> test=# SELECT pg_reload_conf();\n> pg_reload_conf\n> ----------------\n> t\n> (1 row)\n>\n> test=# show shared_preload_libraries;\n> shared_preload_libraries\n> --------------------------\n>\n> (1 row)\n>\n> test=#\n>\n\nI'm unable to open the postgresql.conf file to update it either. I provided\nthe correct macbook password above. But it is not accepted! :/\n\n> rajanx@b0be835adb74 postgresql % cat\n> /usr/local/pgsql/data/postgresql.auto.conf\n> cat: /usr/local/pgsql/data/postgresql.auto.conf: Permission denied\n\nrajanx@b0be835adb74 postgresql % su cat\n> /usr/local/pgsql/data/postgresql.auto.conf\n> Password:\n> su: Sorry\n>\n\nPlease help! Thank you. :)\n-- \nRegards\nRajan Pandey\n\nHi everyone, I just installed Postgres and pg_tle extension as I was looking to contribute to pg_tle. Somehow, I am unable to update the shared_preload_libraries. It feels like ALTER has happened but the SPL value is not updated:test=# show shared_preload_libraries; shared_preload_libraries -------------------------- (1 row)test=# ALTER SYSTEM SET shared_preload_libraries TO 'pg_tle';ALTER SYSTEMtest=# SELECT pg_reload_conf(); pg_reload_conf ---------------- t(1 row)test=# show shared_preload_libraries; shared_preload_libraries -------------------------- (1 row)test=# I'm unable to open the postgresql.conf file to update it either. I provided the correct macbook password above. But it is not accepted! :/rajanx@b0be835adb74 postgresql % cat /usr/local/pgsql/data/postgresql.auto.confcat: /usr/local/pgsql/data/postgresql.auto.conf: Permission deniedrajanx@b0be835adb74 postgresql % su cat /usr/local/pgsql/data/postgresql.auto.confPassword:su: SorryPlease help! Thank you. :)-- RegardsRajan Pandey", "msg_date": "Thu, 9 May 2024 15:20:36 +0530", "msg_from": "Rajan Pandey <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "Hi\n\n\nOn Thu, May 9, 2024 at 2:50 PM Rajan Pandey <[email protected]>\nwrote:\n\n> Hi everyone, I just installed Postgres and pg_tle extension as I was\n> looking to contribute to pg_tle.\n>\n> Somehow, I am unable to update the shared_preload_libraries. It feels like\n> ALTER has happened but the SPL value is not updated:\n>\n>> test=# show shared_preload_libraries;\n>> shared_preload_libraries\n>> --------------------------\n>>\n>> (1 row)\n>>\n>> test=# ALTER SYSTEM SET shared_preload_libraries TO 'pg_tle';\n>> ALTER SYSTEM\n>> test=# SELECT pg_reload_conf();\n>> pg_reload_conf\n>> ----------------\n>> t\n>> (1 row)\n>>\n>> test=# show shared_preload_libraries;\n>> shared_preload_libraries\n>> --------------------------\n>>\n>> (1 row)\n>>\n>> test=#\n>>\n>\n>\nI'm unable to open the postgresql.conf file to update it either. I provided\n> the correct macbook password above. But it is not accepted! 
:/\n>\n>> rajanx@b0be835adb74 postgresql % cat\n>> /usr/local/pgsql/data/postgresql.auto.conf\n>> cat: /usr/local/pgsql/data/postgresql.auto.conf: Permission denied\n>\n> rajanx@b0be835adb74 postgresql % su cat\n>> /usr/local/pgsql/data/postgresql.auto.conf\n>> Password:\n>> su: Sorry\n>>\n>\nThe issue is related with permissions, please make sure which user did the\ninstallation and update the postgresql.auto.conf file with that user\npermissions.\n\nRegards\nKashif Zeeshan\nBitnine Global\n\n>\n> Please help! Thank you. :)\n> --\n> Regards\n> Rajan Pandey\n>\n\nHiOn Thu, May 9, 2024 at 2:50 PM Rajan Pandey <[email protected]> wrote:Hi everyone, I just installed Postgres and pg_tle extension as I was looking to contribute to pg_tle. Somehow, I am unable to update the shared_preload_libraries. It feels like ALTER has happened but the SPL value is not updated:test=# show shared_preload_libraries; shared_preload_libraries -------------------------- (1 row)test=# ALTER SYSTEM SET shared_preload_libraries TO 'pg_tle';ALTER SYSTEMtest=# SELECT pg_reload_conf(); pg_reload_conf ---------------- t(1 row)test=# show shared_preload_libraries; shared_preload_libraries -------------------------- (1 row)test=#  I'm unable to open the postgresql.conf file to update it either. I provided the correct macbook password above. But it is not accepted! :/rajanx@b0be835adb74 postgresql % cat /usr/local/pgsql/data/postgresql.auto.confcat: /usr/local/pgsql/data/postgresql.auto.conf: Permission deniedrajanx@b0be835adb74 postgresql % su cat /usr/local/pgsql/data/postgresql.auto.confPassword:su: SorryThe issue is related with permissions, please make sure which user did the installation and update the  postgresql.auto.conf file with that user permissions. RegardsKashif ZeeshanBitnine GlobalPlease help! Thank you. :)-- RegardsRajan Pandey", "msg_date": "Thu, 9 May 2024 14:59:40 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re:" }, { "msg_contents": "Rajan Pandey <[email protected]> writes:\n>> test=# ALTER SYSTEM SET shared_preload_libraries TO 'pg_tle';\n>> ALTER SYSTEM\n>> test=# SELECT pg_reload_conf();\n>> pg_reload_conf\n>> ----------------\n>> t\n>> (1 row)\n\nChanging shared_preload_libraries requires a postmaster restart,\nnot just config reload. The postmaster log would have told you\nthat, but pg_reload_conf() can't really see the effects of its\nsignal.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 May 2024 09:29:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Thu, May 9, 2024 at 2:50 PM Rajan Pandey <[email protected]> wrote:\n>\n> Hi everyone, I just installed Postgres and pg_tle extension as I was looking to contribute to pg_tle.\n>\n> Somehow, I am unable to update the shared_preload_libraries. It feels like ALTER has happened but the SPL value is not updated:\n>>\n>> test=# show shared_preload_libraries;\n>> shared_preload_libraries\n>> --------------------------\n>>\n>> (1 row)\n>>\n>> test=# ALTER SYSTEM SET shared_preload_libraries TO 'pg_tle';\n>> ALTER SYSTEM\n>> test=# SELECT pg_reload_conf();\n>> pg_reload_conf\n>> ----------------\n>> t\n>> (1 row)\n>>\n>> test=# show shared_preload_libraries;\n>> shared_preload_libraries\n>> --------------------------\n>>\n>> (1 row)\n>>\n>> test=#\n>\n>\n> I'm unable to open the postgresql.conf file to update it either. I provided the correct macbook password above. But it is not accepted! 
:/\n>>\n>> rajanx@b0be835adb74 postgresql % cat /usr/local/pgsql/data/postgresql.auto.conf\n>> cat: /usr/local/pgsql/data/postgresql.auto.conf: Permission denied\n>>\n>> rajanx@b0be835adb74 postgresql % su cat /usr/local/pgsql/data/postgresql.auto.conf\n>> Password:\n>> su: Sorry\n\n\n\n\nIt seems you're trying to use the su command to switch to another user\nto read a file, but the su command is not the appropriate command for\nthis purpose on macOS.\n\nIf you want to read a file located in another user's directory, and\nyou have the necessary permissions to access that file, you can simply\nuse the cat command without su. Here's how you can do it:\n\n\n\n>\n>\n> Please help! Thank you. :)\n> --\n> Regards\n> Rajan Pandey\n\n\n", "msg_date": "Fri, 10 May 2024 08:14:57 +0500", "msg_from": "zaidagilist <[email protected]>", "msg_from_op": false, "msg_subject": "Re:" } ]
[ { "msg_contents": "Hi hackers,\n\nI detected two problems about ECPG.\nI show my opinion. Please comment.\nIf it's correct, I will prepare a patch.\nThank you.\n\n1.\nIt is indefinite what PGTYPEStimestamp_from_asc() returns in error.\nThe following is written in document(36.6.8. Special Constants of pgtypeslib):\n A value of type timestamp representing an invalid time stamp.\n This is returned by the function PGTYPEStimestamp_from_asc on parse error.\n Note that due to the internal representation of the timestamp data type,\n PGTYPESInvalidTimestamp is also a valid timestamp at the same time.\n It is set to 1899-12-31 23:59:59. In order to detect errors,\n make sure that your application does not only test for PGTYPESInvalidTimestamp\n but also for errno != 0 after each call to PGTYPEStimestamp_from_asc.\n\nHowever, PGTYPESInvalidTimestamp is not defined anywhere.\nIt no loger exists at REL6_2 that is the oldest branch.\nAt current implementation, PGTYPEStimestamp_from_asc returns -1.\n\nSo we must fix the document so that users write as follows:\n\n r = PGTYPEStimestamp_from_asc(.....);\n if (r < 0 || errno != 0)\n goto error;\n\n\n2.\nRegression test of pgtypelib is not robust (maybe incorrect).\nOur test failed although there is no bug actually.\n\nI think block2 and block3 should be swapped.\n\n---[src/interfaces/ecpg/test/pgtypeslib/dt_test.pgc]---\n\n // block1 (my comment)\n ts1 = PGTYPEStimestamp_from_asc(\"96-02-29\", NULL);\n text = PGTYPEStimestamp_to_asc(ts1);\n printf(\"timestamp_to_asc1: %s\\n\", text);\n PGTYPESchar_free(text);\n\n // block2\n ts1 = PGTYPEStimestamp_from_asc(\"1994-02-11 26:10:35\", NULL);\n text = PGTYPEStimestamp_to_asc(ts1);\n printf(\"timestamp_to_asc3: %s\\n\", text);\n PGTYPESchar_free(text);\n\n // The following comment is for block1 clearly.\n/* abc-03:10:35-def-02/11/94-gh */\n/* 12345678901234567890123456789 */\n\n // block3\n // Maybe the following is for 'ts1' returned in block1.\n // In our environment, 'out' is indefinite because PGTYPEStimestamp_fmt_asc()\n // didn't complete and the area is not initialized.\n out = (char*) malloc(32);\n i = PGTYPEStimestamp_fmt_asc(&ts1, out, 31, \"abc-%X-def-%x-ghi%%\");\n printf(\"timestamp_fmt_asc: %d: %s\\n\", i, out);\n free(out);\n\n------------------------------------\n\nBest Regards\nRyo Matsumura\n\n\nBest Regards\nRyo Matsumura\n\n", "msg_date": "Thu, 9 May 2024 10:54:31 +0000", "msg_from": "\"Ryo Matsumura (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug: PGTYPEStimestamp_from_asc() in ECPG pgtypelib" }, { "msg_contents": "\"Ryo Matsumura (Fujitsu)\" <[email protected]> writes:\n> It is indefinite what PGTYPEStimestamp_from_asc() returns in error.\n> The following is written in document(36.6.8. Special Constants of pgtypeslib):\n> A value of type timestamp representing an invalid time stamp.\n> This is returned by the function PGTYPEStimestamp_from_asc on parse error.\n\n> However, PGTYPESInvalidTimestamp is not defined anywhere.\n\nUgh.\n\n> At current implementation, PGTYPEStimestamp_from_asc returns -1.\n\nIt looks to me like it returns 0 (\"noresult\"). 
Where are you seeing\n-1?\n\nThat documentation has more problems too: it claims that \"endptr\"\nis unimplemented, which looks like a lie to me: the code is there\nto do it, and there are several callers that depend on it.\n\n> Regression test of pgtypelib is not robust (maybe incorrect).\n> Our test failed although there is no bug actually.\n\n> I think block2 and block3 should be swapped.\n\nHm, block3 seems to be completely nuts. It looks like the code is\nrejecting the input (probably because \"26\" is out of range) and\nreturning zero, because what we see in the expected-stdout file is:\n\ntimestamp_to_asc3: 2000-01-01 00:00:00\n\nWe also see that the expected output from the PGTYPEStimestamp_fmt_asc\nstep is:\n\ntimestamp_fmt_asc: 0: abc-00:00:00-def-01/01/00-ghi%\n\nwhich is consistent with that, but not very much like what the\ncomment is expecting. I'm a bit inclined to just drop \"block 3\".\nIf we want to keep some kind of test of the error behavior,\nit doesn't belong right there, and it should be showing what errno\ngets set to.\n\nAs for the \"lack of robustness\", I'll bet the problem you are\nseeing is that the test uses the %X/%x format specifiers which\nare locale-dependent. But how come we haven't noticed that\nbefore? Have you added a setlocale() call somewhere?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 May 2024 15:39:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: PGTYPEStimestamp_from_asc() in ECPG pgtypelib" }, { "msg_contents": "Hi Tom,\nThank you for comment.\n\n>> At current implementation, PGTYPEStimestamp_from_asc returns -1.\n> It looks to me like it returns 0 (\"noresult\"). Where are you seeing -1?\n\nI took a mistake. Sorry.\nPGTYPEStimestamp_from_asc returns 0(noresult).\nPGTYPEStimestamp_fmt_asc given 'noresult' returns -1.\n\n\n> But how come we haven't noticed that\n> before? Have you added a setlocale() call somewhere?\n\nI didn't notice to this point.\nI added setlocale() to ECPG in my local branch.\nI will test again after removing it.\nIt looks to me like existing ECPG code does not include setlocale().\n\nSo Please ignore about behavior of PGTYPEStimestamp_fmt_asc().\nI want to continue to discuss about PGTYPEStimestamp_from_asc.\n\n\nCurrent PGTYPEStimestamp_from_asc() returns 0, but should we return -1?\nThe document claims about return that \"It is set to 1899-12-31 23:59:59.\".\n\nI wonder.\nIt is the incompatibility, but it may be allowed.\nbecause I think usual users may check with errno.\nOf course, the reason is weak.\nSome users may check with 0(noresult) from their experience.\n\n\n> That documentation has more problems too: it claims that \"endptr\"\n> is unimplemented, which looks like a lie to me: the code is there\n> to do it, and there are several callers that depend on it.\n\nI think so too. The followings have the same problem.\nPGTYPESdate_from_asc (ParseDate)\nPGTYPESinterval_from_asc (ParseDate)\nPGTYPEStimestamp_from_asc (ParseDate)\nPGTYPESnumeric_from_asc\n\n\n> which is consistent with that, but not very much like what the\n> comment is expecting. 
I'm a bit inclined to just drop \"block 3\".\n> If we want to keep some kind of test of the error behavior,\n> it doesn't belong right there, and it should be showing what errno\n> gets set to.\n\nIt is easy to exchange block3 to errno-checking.\nHowever if we just fix there, it is weird because there is\nno other errno-checking in dt_tests.\n\n> I'm a bit inclined to just drop \"block 3\".\nAre you concerned at the above point?\n\nBest Regards\nRyo Matsumura\n\n\n", "msg_date": "Fri, 10 May 2024 05:45:15 +0000", "msg_from": "\"Ryo Matsumura (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug: PGTYPEStimestamp_from_asc() in ECPG pgtypelib" }, { "msg_contents": "\"Ryo Matsumura (Fujitsu)\" <[email protected]> writes:\n>> But how come we haven't noticed that\n>> before? Have you added a setlocale() call somewhere?\n\n> I didn't notice to this point.\n> I added setlocale() to ECPG in my local branch.\n> I will test again after removing it.\n> It looks to me like existing ECPG code does not include setlocale().\n\n> So Please ignore about behavior of PGTYPEStimestamp_fmt_asc().\n\nIf that's the only ecpg test case that fails under a non-C locale,\nI think it'd be good to change it to use a non-locale-dependent\nformat string. Surely there's no compelling reason why it has to\nuse %x/%X rather than something more stable.\n\n> I want to continue to discuss about PGTYPEStimestamp_from_asc.\n> Current PGTYPEStimestamp_from_asc() returns 0, but should we return -1?\n\nNo, I think changing its behavior now after so many years would be a\nbad idea. In any case, what's better about -1? They are both legal\ntimestamp values. I think we'd better fix the docs to match what the\ncode actually does, and to tell people that they *must* check errno\nto detect errors.\n\n>> I'm a bit inclined to just drop \"block 3\".\n\n> Are you concerned at the above point?\n\nGiven your point that no other dt_test case is checking error\nbehavior, I'm not too concerned about dropping this one. If\nwe wanted to take the issue seriously, we'd need to add a bunch\nof new tests not just tweak this one. (It's far from obvious that\nthis was even meant to be a test of an error case --- it looks like\njust sloppily-verified testing to me. \"block 3\" must have been\ndropped in after the tests before and after it were written,\nwithout noticing that it changed their results.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 May 2024 11:10:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: PGTYPEStimestamp_from_asc() in ECPG pgtypelib" }, { "msg_contents": "# I'm sorry for my late response.\n\nI confirmed that the error of regression is caused by my code inserting setlocale() into ecpglib of local branch.\nNo other tests occur error in non-C locale.\n\nThe following is about other topics.\n\n\n1. About regression test\n\nWe should test the followings:\n- PGTYPEStimestamp_from_asc(\"1994-02-11 26:10:35\", NULL) returns 0.\n- PGTYPEStimestamp_fmt_asc() can accept format string including %x and %X.\n\necpglib should be affected by only setlocale() called by user application and\ndt_test.pgc does not call it. 
So the following test is the best, I think.\nPlease see attached patch for detail (fix_pgtypeslib_regress.patch).\n\n ts1 = PGTYPEStimestamp_from_asc(\"1994-02-11 3:10:35\", NULL);\n text = PGTYPEStimestamp_to_asc(ts1);\n printf(\"timestamp_to_asc2: %s\\n\", text);\n PGTYPESchar_free(text);\n\n/* abc-03:10:35-def-02/11/94-gh */\n/* 12345678901234567890123456789 */\n\n out = (char*) malloc(32);\n i = PGTYPEStimestamp_fmt_asc(&ts1, out, 31, \"abc-%X-def-%x-ghi%%\");\n printf(\"timestamp_fmt_asc: %d: %s\\n\", i, out);\n free(out);\n\n ts1 = PGTYPEStimestamp_from_asc(\"1994-02-11 26:10:35\", NULL);\n text = PGTYPEStimestamp_to_asc(ts1);\n printf(\"timestamp_to_asc3: %s\\n\", text);\n PGTYPESchar_free(text);\n\nWe should also add tests that check PGTYPEStimestamp_*() set errno for invalid input correctly,\nbut I want to leave the improvement to the next timing when implement for timestamp is changed.\n(Maybe the timing will not come.)\n\n\n2. About document of PGTYPEStimestamp_from_asc() and PGTYPESInvalidTimestamp\n\n0 returned by PGTYPEStimestamp_from_asc () is a valid timestamp as you commented and\nwe should not break compatibility.\nSo we should remove the document for PGTYPESInvalidTimestamp and add one for checking errno\nto description of PGTYPEStimestamp_from_asc().\nPlease see attached patch for detail (fix_PGTYPESInvalidTimestamp_doc.patch).\n\n\n3. About endptr of *_from_asc()\n> PGTYPESdate_from_asc (ParseDate)\n> PGTYPEStimestamp_from_asc (ParseDate)\n> PGTYPESinterval_from_asc (ParseDate)\n> PGTYPESnumeric_from_asc\n\nBasically, they return immediately just after detecting invalid format.\nHowever, after passing the narrow parse, they could fails (e.g. failure of DecodeInterval(), DecodeISO8601Interval(), malloc(), and so on).\n\nSo we should write as follows:\n If the function detects invalid format,\n then it stores the address of the first invalid character in\n endptr. However, don't assume it successed if\n endptr points to end of input because other\n processing(e.g. memory allocation) could fails.\n Therefore, you should check return value and errno for detecting error.\n You can safely endptr to NULL.\n\nI also found pandora box that description of the followings don't show their behavior when it fails.\nI fix doc including them. Please see attached patch(fix_pgtypeslib_funcs_docs.patch).\n- PGTYPESdate_from_asc() # sets errno. (can not check return value)\n- PGTYPESdate_defmt_asc() # returns -1 and sets errno\n- PGTYPEStimestamp_to_asc() # returns NULL and sets errno\n- PGTYPEStimestamp_defmt_asc() # just returns 1 and doesn't set errno!\n- PGTYPESinterval_new() # returns NULL and sets errno\n- PGTYPESinterval_from_asc() # returns NULL and sets errno\n- PGTYPESinterval_to_asc() # returns NULL and sets errno\n- PGTYPESinterval_copy # currently always return 0\n- PGTYPESdecimal_new() # returns NULL and sets errno\n\n\n4. Bug of PGTYPEStimestamp_defmt_asc()\nPGTYPEStimestamp_defmt_asc() doesn't set errno on failure.\nI didn't make a patch for it yet.\n\nBest Regards\nRyo Matsumura", "msg_date": "Fri, 7 Jun 2024 08:09:03 +0000", "msg_from": "\"Ryo Matsumura (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug: PGTYPEStimestamp_from_asc() in ECPG pgtypelib" } ]
[ { "msg_contents": "Some findings\n\n\n1.\n\n>>Remove adminpack contrib extension (Daniel Gustafsson)\n\n>>This was used by non-end-of-life pgAdmin III.\n\nPerhaps you mean now-end-of-life (s/non/now/)\n\n2.\n>>All specification of partitioned table access methods (Justin Pryzby, >>Soumyadeep Chakraborty, Michael Paquier)\n\nperhaps you mean Allow, otherwise meaning not clear.\n\n3.\n>> Add some long options to pg_archivecleanup (Atsushi Torikoshi)\n>>The long options are --debug, --dry-run, and /--strip-extension.\n\nThe slash should be omitted.\n\nHans Buschmann\n\n\n\n\n\n\n\n\nSome findings\n\n\n1.\n>>Remove adminpack contrib extension (Daniel Gustafsson)\n\n>>This was used by non-end-of-life\n pgAdmin III. \n\n\nPerhaps you mean now-end-of-life (s/non/now/)\n\n\n2.\n>>All specification of partitioned table access methods\n (Justin Pryzby, >>Soumyadeep Chakraborty, Michael Paquier) \n\n\n\nperhaps you mean Allow, otherwise meaning not clear.\n\n\n3.\n>> Add some long options to pg_archivecleanup\n (Atsushi Torikoshi)\n>>The long options are --debug, --dry-run, and /--strip-extension. \n\n\nThe slash should be omitted.\n\n\nHans Buschmann", "msg_date": "Thu, 9 May 2024 11:03:39 +0000", "msg_from": "Hans Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: First draft of PG 17 release notes" }, { "msg_contents": "On Thu, May 9, 2024 at 11:03:39AM +0000, Hans Buschmann wrote:\n> Some findings\n>\n>\n> 1.\n>\n> >>Remove adminpack contrib extension (Daniel Gustafsson)\n>\n> >>This was used by non-end-of-life pgAdmin III.\n>\n>\n> Perhaps you mean now-end-of-life (s/non/now/)\n\nYes, fixed to \"now end-of-life\"\n\n> 2.\n> >>All specification of partitioned table access methods (Justin Pryzby, >>\n> Soumyadeep Chakraborty, Michael Paquier)\n>\n> perhaps you mean Allow, otherwise meaning not clear.\n\nYep, fixed.\n\n> 3.\n> >> Add some long options to pg_archivecleanup (Atsushi Torikoshi)\n> >>The long options are --debug, --dry-run, and /--strip-extension.\n>\n> The slash should be omitted.\n\nFixed, not sure how that got in there.\n\nI have committed all outstanding fixes and updated the doc build:\n\n\thttps://momjian.us/pgsql_docs/release-17.html\n\nThank you for all the valuable feedback.\n\nIncidentally, the big surprise for me was the large number of optimizer\nimprovements.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Thu, 9 May 2024 11:27:44 -0400", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: First draft of PG 17 release notes" } ]
[ { "msg_contents": "Greetings,\n\nThe JDBC driver is currently keeping a per connection cache of types in the\ndriver. We are seeing cases where the number of columns is quite high. In\none case Prevent fetchFieldMetaData() from being run when unnecessary. ·\nIssue #3241 · pgjdbc/pgjdbc (github.com)\n<https://github.com/pgjdbc/pgjdbc/issues/3241> 2.6 Million columns.\n\nIf we knew that we were connecting to the same database we could use a\nsingle cache across connections.\n\nI think we would require a server/database identifier in the startup\nmessage.\n\nDave Cramer\n\nGreetings,The JDBC driver is currently keeping a per connection cache of types in the driver. We are seeing cases where the number of columns is quite high. In one case Prevent fetchFieldMetaData() from being run when unnecessary. · Issue #3241 · pgjdbc/pgjdbc (github.com) 2.6 Million columns.If we knew that we were connecting to the same database we could use a single cache across connections.I think we would require a server/database identifier in the startup message.Dave Cramer", "msg_date": "Thu, 9 May 2024 08:06:11 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "request for database identifier in the startup packet" }, { "msg_contents": "On Thursday, May 9, 2024, Dave Cramer <[email protected]> wrote:\n\n> Greetings,\n>\n> The JDBC driver is currently keeping a per connection cache of types in\n> the driver. We are seeing cases where the number of columns is quite high.\n> In one case Prevent fetchFieldMetaData() from being run when unnecessary.\n> · Issue #3241 · pgjdbc/pgjdbc (github.com)\n> <https://github.com/pgjdbc/pgjdbc/issues/3241> 2.6 Million columns.\n>\n> If we knew that we were connecting to the same database we could use a\n> single cache across connections.\n>\n> I think we would require a server/database identifier in the startup\n> message.\n>\n\nI feel like pgbouncer ruins this plan.\n\nBut maybe you can construct a lookup key from some combination of data\nprovided by these functions:\nhttps://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-SESSION\n\nDavid J.\n\nOn Thursday, May 9, 2024, Dave Cramer <[email protected]> wrote:Greetings,The JDBC driver is currently keeping a per connection cache of types in the driver. We are seeing cases where the number of columns is quite high. In one case Prevent fetchFieldMetaData() from being run when unnecessary. · Issue #3241 · pgjdbc/pgjdbc (github.com) 2.6 Million columns.If we knew that we were connecting to the same database we could use a single cache across connections.I think we would require a server/database identifier in the startup message.I feel like pgbouncer ruins this plan.But maybe you can construct a lookup key from some combination of data provided by these functions:https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-SESSIONDavid J.", "msg_date": "Thu, 9 May 2024 06:55:27 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: request for database identifier in the startup packet" }, { "msg_contents": "On Thu, May 9, 2024 at 8:06 AM Dave Cramer <[email protected]> wrote:\n> The JDBC driver is currently keeping a per connection cache of types in the driver. We are seeing cases where the number of columns is quite high. In one case Prevent fetchFieldMetaData() from being run when unnecessary. 
· Issue #3241 · pgjdbc/pgjdbc (github.com) 2.6 Million columns.\n>\n> If we knew that we were connecting to the same database we could use a single cache across connections.\n>\n> I think we would require a server/database identifier in the startup message.\n\nI understand the desire to share the cache, but not why that would\nrequire any kind of change to the wire protocol.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 12:22:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: request for database identifier in the startup packet" }, { "msg_contents": "Dave Cramer\n\n\nOn Thu, 9 May 2024 at 12:22, Robert Haas <[email protected]> wrote:\n\n> On Thu, May 9, 2024 at 8:06 AM Dave Cramer <[email protected]> wrote:\n> > The JDBC driver is currently keeping a per connection cache of types in\n> the driver. We are seeing cases where the number of columns is quite high.\n> In one case Prevent fetchFieldMetaData() from being run when unnecessary. ·\n> Issue #3241 · pgjdbc/pgjdbc (github.com) 2.6 Million columns.\n> >\n> > If we knew that we were connecting to the same database we could use a\n> single cache across connections.\n> >\n> > I think we would require a server/database identifier in the startup\n> message.\n>\n> I understand the desire to share the cache, but not why that would\n> require any kind of change to the wire protocol.\n>\n> The server identity is actually useful for many things such as knowing\nwhich instance of a cluster you are connected to.\nFor the cache however we can't use the IP address to determine which server\nwe are connected to as we could be connected to a pooler.\nKnowing exactly which server/database makes it relatively easy to have a\ncommon cache across connections. Getting that in the startup message seems\nlike a good place\n\nDave\n\nDave CramerOn Thu, 9 May 2024 at 12:22, Robert Haas <[email protected]> wrote:On Thu, May 9, 2024 at 8:06 AM Dave Cramer <[email protected]> wrote:\n> The JDBC driver is currently keeping a per connection cache of types in the driver. We are seeing cases where the number of columns is quite high. In one case Prevent fetchFieldMetaData() from being run when unnecessary. · Issue #3241 · pgjdbc/pgjdbc (github.com) 2.6 Million columns.\n>\n> If we knew that we were connecting to the same database we could use a single cache across connections.\n>\n> I think we would require a server/database identifier in the startup message.\n\nI understand the desire to share the cache, but not why that would\nrequire any kind of change to the wire protocol.\nThe server identity is actually useful for many things such as knowing which instance of a cluster you are connected to.For the cache however we can't use the IP address to determine which server we are connected to as we could be connected to a pooler.Knowing exactly which server/database makes it relatively easy to have a common cache across connections. 
Getting that in the startup message seems like a good place Dave", "msg_date": "Thu, 9 May 2024 14:20:49 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: request for database identifier in the startup packet" }, { "msg_contents": "Hi,\n\nOn 2024-05-09 14:20:49 -0400, Dave Cramer wrote:\n> On Thu, 9 May 2024 at 12:22, Robert Haas <[email protected]> wrote:\n> > On Thu, May 9, 2024 at 8:06 AM Dave Cramer <[email protected]> wrote:\n> > > The JDBC driver is currently keeping a per connection cache of types in\n> > the driver. We are seeing cases where the number of columns is quite high.\n> > In one case Prevent fetchFieldMetaData() from being run when unnecessary. ·\n> > Issue #3241 · pgjdbc/pgjdbc (github.com) 2.6 Million columns.\n> > >\n> > > If we knew that we were connecting to the same database we could use a\n> > single cache across connections.\n> > >\n> > > I think we would require a server/database identifier in the startup\n> > message.\n> >\n> > I understand the desire to share the cache, but not why that would\n> > require any kind of change to the wire protocol.\n> >\n> > The server identity is actually useful for many things such as knowing\n> which instance of a cluster you are connected to.\n> For the cache however we can't use the IP address to determine which server\n> we are connected to as we could be connected to a pooler.\n> Knowing exactly which server/database makes it relatively easy to have a\n> common cache across connections. Getting that in the startup message seems\n> like a good place\n\nISTM that you could just as well query the information you'd like after\nconnecting. And that's going to be a lot more flexible than having to have\nprecisely the right information in the startup message, and most clients not\nneeding it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 May 2024 12:14:55 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: request for database identifier in the startup packet" }, { "msg_contents": "On Thu, May 9, 2024 at 3:14 PM Andres Freund <[email protected]> wrote:\n> ISTM that you could just as well query the information you'd like after\n> connecting. And that's going to be a lot more flexible than having to have\n> precisely the right information in the startup message, and most clients not\n> needing it.\n\nI agree with this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 15:19:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: request for database identifier in the startup packet" }, { "msg_contents": "On Thu, 9 May 2024 at 15:19, Robert Haas <[email protected]> wrote:\n\n> On Thu, May 9, 2024 at 3:14 PM Andres Freund <[email protected]> wrote:\n> > ISTM that you could just as well query the information you'd like after\n> > connecting. And that's going to be a lot more flexible than having to\n> have\n> > precisely the right information in the startup message, and most clients\n> not\n> > needing it.\n>\n> I agree with this.\n>\n> Well other than the extra round trip.\n\nThanks,\nDave\n\nOn Thu, 9 May 2024 at 15:19, Robert Haas <[email protected]> wrote:On Thu, May 9, 2024 at 3:14 PM Andres Freund <[email protected]> wrote:\n> ISTM that you could just as well query the information you'd like after\n> connecting. 
And that's going to be a lot more flexible than having to have\n> precisely the right information in the startup message, and most clients not\n> needing it.\n\nI agree with this.\nWell other than the extra round trip.Thanks,Dave", "msg_date": "Thu, 9 May 2024 15:33:40 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: request for database identifier in the startup packet" }, { "msg_contents": "On Thu, May 9, 2024 at 3:33 PM Dave Cramer <[email protected]> wrote:\n> On Thu, 9 May 2024 at 15:19, Robert Haas <[email protected]> wrote:\n>> On Thu, May 9, 2024 at 3:14 PM Andres Freund <[email protected]> wrote:\n>> > ISTM that you could just as well query the information you'd like after\n>> > connecting. And that's going to be a lot more flexible than having to have\n>> > precisely the right information in the startup message, and most clients not\n>> > needing it.\n>>\n>> I agree with this.\n>>\n> Well other than the extra round trip.\n\nI mean, sure, but we can't avoid that for everyone for everything.\nThere might be some way of doing something like this with, for\nexample, the infrastructure that was proposed to dynamically add stuff\nto the list of PGC_REPORT GUCs, if the values you need are GUCs\nalready, or were made so. But I think it's just not workable to\nunconditionally add a bunch of things to the startup packet. It'll\njust grow and grow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 15:39:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: request for database identifier in the startup packet" }, { "msg_contents": "On Thu, 9 May 2024 at 15:39, Robert Haas <[email protected]> wrote:\n\n> On Thu, May 9, 2024 at 3:33 PM Dave Cramer <[email protected]> wrote:\n> > On Thu, 9 May 2024 at 15:19, Robert Haas <[email protected]> wrote:\n> >> On Thu, May 9, 2024 at 3:14 PM Andres Freund <[email protected]>\n> wrote:\n> >> > ISTM that you could just as well query the information you'd like\n> after\n> >> > connecting. And that's going to be a lot more flexible than having to\n> have\n> >> > precisely the right information in the startup message, and most\n> clients not\n> >> > needing it.\n> >>\n> >> I agree with this.\n> >>\n> > Well other than the extra round trip.\n>\n> I mean, sure, but we can't avoid that for everyone for everything.\n> There might be some way of doing something like this with, for\n> example, the infrastructure that was proposed to dynamically add stuff\n> to the list of PGC_REPORT GUCs, if the values you need are GUCs\n> already, or were made so. But I think it's just not workable to\n> unconditionally add a bunch of things to the startup packet. It'll\n> just grow and grow.\n>\n\nI don't think this is unconditional. These are real world situations where\nhaving this information is useful.\nThat said, adding them everytime I ask for them would end up growing\nuncontrollably. This seems like a decent discussion to have with others.\n\nDave\n\nOn Thu, 9 May 2024 at 15:39, Robert Haas <[email protected]> wrote:On Thu, May 9, 2024 at 3:33 PM Dave Cramer <[email protected]> wrote:\n> On Thu, 9 May 2024 at 15:19, Robert Haas <[email protected]> wrote:\n>> On Thu, May 9, 2024 at 3:14 PM Andres Freund <[email protected]> wrote:\n>> > ISTM that you could just as well query the information you'd like after\n>> > connecting. 
And that's going to be a lot more flexible than having to have\n>> > precisely the right information in the startup message, and most clients not\n>> > needing it.\n>>\n>> I agree with this.\n>>\n> Well other than the extra round trip.\n\nI mean, sure, but we can't avoid that for everyone for everything.\nThere might be some way of doing something like this with, for\nexample, the infrastructure that was proposed to dynamically add stuff\nto the list of PGC_REPORT GUCs, if the values you need are GUCs\nalready, or were made so. But I think it's just not workable to\nunconditionally add a bunch of things to the startup packet. It'll\njust grow and grow.I don't think this is unconditional. These are real world situations where having this information is useful. That said, adding them everytime I ask for them would end up growing uncontrollably. This seems like a decent discussion to have with others.Dave", "msg_date": "Thu, 9 May 2024 15:51:57 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: request for database identifier in the startup packet" } ]
[ { "msg_contents": "Hi hackers,\n\nAfter several refactoring iterations, auxiliary processes are no\nlonger initialized from the bootstrapper. I think using the\nInitProcessing mode for initializing auxiliary processes is more\nappropriate.\n\nBest Regards,\nXing", "msg_date": "Thu, 9 May 2024 21:12:45 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Set appropriate processing mode for auxiliary processes." }, { "msg_contents": "On 09/05/2024 16:12, Xing Guo wrote:\n> Hi hackers,\n> \n> After several refactoring iterations, auxiliary processes are no\n> longer initialized from the bootstrapper. I think using the\n> InitProcessing mode for initializing auxiliary processes is more\n> appropriate.\n\nAt first I was sure this was introduced by my refactorings in v17, but \nin fact it's been like this forever. I agree that InitProcessing makes \nmuch more sense. The ProcessingMode variable is initialized to \nInitProcessing, so I think we can simply remove that line from \nAuxiliaryProcessMainCommon(). There are existing \n\"SetProcessingMode(InitProcessing)\" calls in other Main functions too \n(AutoVacLauncherMain, BackgroundWorkerMain, etc.), and I think those can \nalso be removed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 9 May 2024 17:13:14 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set appropriate processing mode for auxiliary processes." }, { "msg_contents": "On Thu, May 9, 2024 at 10:13 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 09/05/2024 16:12, Xing Guo wrote:\n> > Hi hackers,\n> >\n> > After several refactoring iterations, auxiliary processes are no\n> > longer initialized from the bootstrapper. I think using the\n> > InitProcessing mode for initializing auxiliary processes is more\n> > appropriate.\n>\n> At first I was sure this was introduced by my refactorings in v17, but\n> in fact it's been like this forever. I agree that InitProcessing makes\n> much more sense. The ProcessingMode variable is initialized to\n> InitProcessing, so I think we can simply remove that line from\n> AuxiliaryProcessMainCommon(). There are existing\n> \"SetProcessingMode(InitProcessing)\" calls in other Main functions too\n> (AutoVacLauncherMain, BackgroundWorkerMain, etc.), and I think those can\n> also be removed.\n\nGood catch! I agree with you.\n\nBest Regards,\nXing.", "msg_date": "Thu, 9 May 2024 22:55:57 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set appropriate processing mode for auxiliary processes." }, { "msg_contents": "Sorry, forget to add an assertion to guard our codes in my previous patch.", "msg_date": "Thu, 9 May 2024 23:10:41 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set appropriate processing mode for auxiliary processes." }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> At first I was sure this was introduced by my refactorings in v17, but \n> in fact it's been like this forever. I agree that InitProcessing makes \n> much more sense. The ProcessingMode variable is initialized to \n> InitProcessing, so I think we can simply remove that line from \n> AuxiliaryProcessMainCommon(). 
There are existing \n> \"SetProcessingMode(InitProcessing)\" calls in other Main functions too \n> (AutoVacLauncherMain, BackgroundWorkerMain, etc.), and I think those can \n> also be removed.\n\nThis only works if the postmaster can be trusted never to change the\nvariable; else children could inherit some other value via fork().\nIn that connection, it seems a bit scary that postmaster.c contains a\ncouple of calls \"SetProcessingMode(NormalProcessing)\". It looks like\nthey are in functions that should only be executed by child processes,\nbut should we try to move them somewhere else? Another idea could be\nto add an Assert to SetProcessingMode that insists that it can't be\nexecuted by the postmaster.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 May 2024 11:19:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set appropriate processing mode for auxiliary processes." }, { "msg_contents": "On Thu, May 9, 2024 at 11:19 PM Tom Lane <[email protected]> wrote:\n>\n> Heikki Linnakangas <[email protected]> writes:\n> > At first I was sure this was introduced by my refactorings in v17, but\n> > in fact it's been like this forever. I agree that InitProcessing makes\n> > much more sense. The ProcessingMode variable is initialized to\n> > InitProcessing, so I think we can simply remove that line from\n> > AuxiliaryProcessMainCommon(). There are existing\n> > \"SetProcessingMode(InitProcessing)\" calls in other Main functions too\n> > (AutoVacLauncherMain, BackgroundWorkerMain, etc.), and I think those can\n> > also be removed.\n>\n> This only works if the postmaster can be trusted never to change the\n> variable; else children could inherit some other value via fork().\n> In that connection, it seems a bit scary that postmaster.c contains a\n> couple of calls \"SetProcessingMode(NormalProcessing)\". It looks like\n> they are in functions that should only be executed by child processes,\n> but should we try to move them somewhere else?\n\nAfter checking calls to \"SetProcessingMode(NormalProcessing)\" in the\npostmaster.c, they are used in background worker specific functions\n(BackgroundWorkerInitializeConnectionByOid and\nBackgroundWorkerInitializeConnection). So I think it's a good idea to\nmove these functions to bgworker.c. Then, we can get rid of calling\n\"SetProcessingMode(NormalProcessing)\" in postmaster.c.\n\nI also noticed that there's an unnecessary call to\n\"BackgroundWorkerInitializeConnection\" in worker_spi.c (The worker_spi\nlauncher has set the dboid correctly).\n\nBest Regards,\nXing.", "msg_date": "Fri, 10 May 2024 10:58:33 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Set appropriate processing mode for auxiliary processes." }, { "msg_contents": "On 10/05/2024 05:58, Xing Guo wrote:\n> On Thu, May 9, 2024 at 11:19 PM Tom Lane <[email protected]> wrote:\n>>\n>> Heikki Linnakangas <[email protected]> writes:\n>>> At first I was sure this was introduced by my refactorings in v17, but\n>>> in fact it's been like this forever. I agree that InitProcessing makes\n>>> much more sense. The ProcessingMode variable is initialized to\n>>> InitProcessing, so I think we can simply remove that line from\n>>> AuxiliaryProcessMainCommon(). 
There are existing\n>>> \"SetProcessingMode(InitProcessing)\" calls in other Main functions too\n>>> (AutoVacLauncherMain, BackgroundWorkerMain, etc.), and I think those can\n>>> also be removed.\n>>\n>> This only works if the postmaster can be trusted never to change the\n>> variable; else children could inherit some other value via fork().\n>> In that connection, it seems a bit scary that postmaster.c contains a\n>> couple of calls \"SetProcessingMode(NormalProcessing)\". It looks like\n>> they are in functions that should only be executed by child processes,\n>> but should we try to move them somewhere else?\n> \n> After checking calls to \"SetProcessingMode(NormalProcessing)\" in the\n> postmaster.c, they are used in background worker specific functions\n> (BackgroundWorkerInitializeConnectionByOid and\n> BackgroundWorkerInitializeConnection). So I think it's a good idea to\n> move these functions to bgworker.c. Then, we can get rid of calling\n> \"SetProcessingMode(NormalProcessing)\" in postmaster.c.\n\nCommitted these first two patches. Thank you!\n\n> I also noticed that there's an unnecessary call to\n> \"BackgroundWorkerInitializeConnection\" in worker_spi.c (The worker_spi\n> launcher has set the dboid correctly).\n\nNo, you can call the launcher function with \"dboid=0\", and it's also 0 \nin the \"static\" registration at end of _PG_init(). This causes \nregression tests to fail too because of that. So I left out that patch.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 20:16:19 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Set appropriate processing mode for auxiliary processes." } ]
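A minimal sketch of the guard Tom Lane floats in the thread above (an Assert that makes SetProcessingMode() fail if it is ever reached inside the postmaster itself). It mirrors the macro shape in src/include/miscadmin.h and the existing Mode, MyProcPid and PostmasterPid globals; the added second Assert is only an illustration of the idea, not the committed change, and it relies on PostmasterPid staying 0 in single-user mode:

    /* Hypothetical variant of the miscadmin.h macro, for illustration only. */
    #define SetProcessingMode(mode) \
        do { \
            Assert((mode) == BootstrapProcessing || \
                   (mode) == InitProcessing || \
                   (mode) == NormalProcessing); \
            /* the postmaster itself must never switch processing modes */ \
            Assert(MyProcPid != PostmasterPid); \
            Mode = (mode); \
        } while (0)

Child processes inherit PostmasterPid across fork() but get a fresh MyProcPid, so a check of this shape would only trip in the postmaster process, while standalone backends (which never set PostmasterPid) pass it trivially.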
[ { "msg_contents": "$subject\n\nMake has one:\nhttps://www.postgresql.org/docs/current/docguide-build.html#DOCGUIDE-BUILD-SYNTAX-CHECK\n\nThis needs updating:\nhttps://www.postgresql.org/docs/current/docguide-build-meson.html\n\nI've been using \"ninja html\" which isn't shown here. Also, as a sanity\ncheck, running that command takes my system 1 minute. Any idea what\npercentile that falls into?\n\nDavid J.\n\n$subjectMake has one:https://www.postgresql.org/docs/current/docguide-build.html#DOCGUIDE-BUILD-SYNTAX-CHECKThis needs updating:https://www.postgresql.org/docs/current/docguide-build-meson.htmlI've been using \"ninja html\" which isn't shown here.  Also, as a sanity check, running that command takes my system 1 minute.  Any idea what percentile that falls into?David J.", "msg_date": "Thu, 9 May 2024 09:23:37 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> I've been using \"ninja html\" which isn't shown here. Also, as a sanity\n> check, running that command takes my system 1 minute. Any idea what\n> percentile that falls into?\n\nOn my no-longer-shiny-new workstation, \"make\" in doc/src/sgml\n(which builds just the HTML docs) takes right about 30s in HEAD.\nCan't believe that the overhead would be noticeably different\nbetween make and meson, since it's a simple command sequence\nwith no opportunity for parallelism.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 May 2024 12:33:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n\n> $subject\n>\n> Make has one:\n> https://www.postgresql.org/docs/current/docguide-build.html#DOCGUIDE-BUILD-SYNTAX-CHECK\n>\n> This needs updating:\n> https://www.postgresql.org/docs/current/docguide-build-meson.html\n>\n> I've been using \"ninja html\" which isn't shown here. Also, as a sanity\n> check, running that command takes my system 1 minute. Any idea what\n> percentile that falls into?\n\nMy laptop (8-core i7-11800H @ 2.30GHz) takes 22s to do `ninja html`\nafter `ninja clean`.\n\n> David J.\n\n- ilmari\n\n\n", "msg_date": "Thu, 09 May 2024 17:42:15 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n\n> $subject\n>\n> Make has one:\n> https://www.postgresql.org/docs/current/docguide-build.html#DOCGUIDE-BUILD-SYNTAX-CHECK\n>\n> This needs updating:\n> https://www.postgresql.org/docs/current/docguide-build-meson.html\n>\n> I've been using \"ninja html\" which isn't shown here.\n\nThe /devel/ version has a link to the full list of doc targets:\n\nhttps://www.postgresql.org/docs/devel/install-meson.html#TARGETS-MESON-DOCUMENTATION\n\nAttached is a patch which adds a check-docs target for meson, which\ntakes 0.3s on my laptop.\n\n- ilmari", "msg_date": "Thu, 09 May 2024 20:12:38 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" 
}, { "msg_contents": "Hi,\n\nOn 2024-05-09 20:12:38 +0100, Dagfinn Ilmari Manns�ker wrote:\n> Attached is a patch which adds a check-docs target for meson, which\n> takes 0.3s on my laptop.\n\nNice.\n\n\n> +checkdocs = custom_target('check-docs',\n> + input: 'postgres.sgml',\n> + output: 'check-docs',\n> + depfile: 'postgres-full.xml.d',\n> + command: [xmllint, '--nonet', '--valid', '--noout',\n> + '--path', '@OUTDIR@', '@INPUT@'],\n> + depends: doc_generated,\n> + build_by_default: false,\n> +)\n> +alias_target('check-docs', checkdocs)\n\nIsn't the custom target redundant with postgres_full_xml? I.e. you could just\nhave the alias_target depend on postgres_full_xml?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 May 2024 12:36:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "On Thu, May 9, 2024 at 12:12 PM Dagfinn Ilmari Mannsåker <[email protected]>\nwrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n>\n> > I've been using \"ninja html\" which isn't shown here.\n>\n> The /devel/ version has a link to the full list of doc targets:\n>\n>\n> https://www.postgresql.org/docs/devel/install-meson.html#TARGETS-MESON-DOCUMENTATION\n\n\nI knew I learned about the html target from the docs. Forgot to use the\ndevel ones this time around.\n\n\n> Attached is a patch which adds a check-docs target for meson, which\n> takes 0.3s on my laptop.\n>\n\nThanks.\n\nDavid J.\n\nOn Thu, May 9, 2024 at 12:12 PM Dagfinn Ilmari Mannsåker <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> I've been using \"ninja html\" which isn't shown here.\n\nThe /devel/ version has a link to the full list of doc targets:\n\nhttps://www.postgresql.org/docs/devel/install-meson.html#TARGETS-MESON-DOCUMENTATIONI knew I learned about the html target from the docs.  Forgot to use the devel ones this time around.\nAttached is a patch which adds a check-docs target for meson, which\ntakes 0.3s on my laptop.\nThanks.David J.", "msg_date": "Thu, 9 May 2024 12:52:45 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n\n> Hi,\n>\n> On 2024-05-09 20:12:38 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Attached is a patch which adds a check-docs target for meson, which\n>> takes 0.3s on my laptop.\n>\n> Nice.\n>\n>\n>> +checkdocs = custom_target('check-docs',\n>> + input: 'postgres.sgml',\n>> + output: 'check-docs',\n>> + depfile: 'postgres-full.xml.d',\n>> + command: [xmllint, '--nonet', '--valid', '--noout',\n>> + '--path', '@OUTDIR@', '@INPUT@'],\n>> + depends: doc_generated,\n>> + build_by_default: false,\n>> +)\n>> +alias_target('check-docs', checkdocs)\n>\n> Isn't the custom target redundant with postgres_full_xml? I.e. 
you could just\n> have the alias_target depend on postgres_full_xml?\n\nWe could, but that would actually rebuild postgres-full.xml, not just\ncheck the syntax (but that only takes 0.1-0.2s longer), and only run if\nthe docs have been modified since it was last built (which I guess is\nfine, since if you haven't modified the docs you can't have introduced\nany syntax errors).\n\nIt's already possible to run that target directly, i.e.\n\n ninja doc/src/sgml/postgres-full.xml\n\nWe could just document that in the list of meson doc targets, but a\nshortcut alias would roll off the fingers more easily and be more\ndiscoverable.\n\n> Greetings,\n>\n> Andres Freund\n\n- ilmari\n\n\n", "msg_date": "Thu, 09 May 2024 20:53:27 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "Hi,\n\nOn 2024-05-09 09:23:37 -0700, David G. Johnston wrote:\n> This needs updating:\n> https://www.postgresql.org/docs/current/docguide-build-meson.html\n\nYou mean it should have a syntax target? Or that something else is out of\ndate?\n\n\n> Also, as a sanity check, running that command takes my system 1 minute. Any\n> idea what percentile that falls into?\n\nI think that's on the longer end - what OS/environment is this on? Even on\n~5yo CPU with turbo boost disabled it's 48s for me. FWIW, the single-page\nhtml is a good bit faster, 29s on the same system.\n\nI remember the build being a lot slower on windows, fwiw, due to the number of\nfiles being opened/created. I guess that might also be the case on slow\nstorage, due to filesystem journaling.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 May 2024 13:16:08 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "Hi,\n\nOn 2024-05-09 20:53:27 +0100, Dagfinn Ilmari Manns�ker wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-05-09 20:12:38 +0100, Dagfinn Ilmari Manns�ker wrote:\n> >> Attached is a patch which adds a check-docs target for meson, which\n> >> takes 0.3s on my laptop.\n> >> +checkdocs = custom_target('check-docs',\n> >> + input: 'postgres.sgml',\n> >> + output: 'check-docs',\n> >> + depfile: 'postgres-full.xml.d',\n> >> + command: [xmllint, '--nonet', '--valid', '--noout',\n> >> + '--path', '@OUTDIR@', '@INPUT@'],\n> >> + depends: doc_generated,\n> >> + build_by_default: false,\n> >> +)\n> >> +alias_target('check-docs', checkdocs)\n> >\n> > Isn't the custom target redundant with postgres_full_xml? I.e. you could just\n> > have the alias_target depend on postgres_full_xml?\n> \n> We could, but that would actually rebuild postgres-full.xml, not just\n> check the syntax (but that only takes 0.1-0.2s longer),\n\nI don't think this is performance critical enough to worry about 0.1s. 
If\nanything I think the performance argument goes the other way round - doing the\nvalidation work multiple times is a waste of time...\n\n\n> and only run if the docs have been modified since it was last built (which I\n> guess is fine, since if you haven't modified the docs you can't have\n> introduced any syntax errors).\n\nThat actually seems good to me.\n\n\n> It's already possible to run that target directly, i.e.\n> \n> ninja doc/src/sgml/postgres-full.xml\n> \n> We could just document that in the list of meson doc targets, but a\n> shortcut alias would roll off the fingers more easily and be more\n> discoverable.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 9 May 2024 13:18:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" }, { "msg_contents": "On Thu, May 9, 2024 at 1:16 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-05-09 09:23:37 -0700, David G. Johnston wrote:\n> > This needs updating:\n> > https://www.postgresql.org/docs/current/docguide-build-meson.html\n>\n> You mean it should have a syntax target? Or that something else is out of\n> date?\n>\n>\nv17 looks good, I like the auto-generation. I failed to notice I was\nlooking at v16 when searching for a check docs target.\n\n\n> Also, as a sanity check, running that command takes my system 1 minute.\n> Any\n> > idea what percentile that falls into?\n>\n> I think that's on the longer end - what OS/environment is this on? Even on\n> ~5yo CPU with turbo boost disabled it's 48s for me. FWIW, the single-page\n> html is a good bit faster, 29s on the same system.\n>\n\nUbuntu 22.04 running in AWS Workspaces Power.\n\nAmazon EC2 t3.xlarge\nIntel® Xeon(R) Platinum 8259CL CPU @ 2.50GHz × 4\n16GiB Ram\n\nDavid J.\n\nOn Thu, May 9, 2024 at 1:16 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-05-09 09:23:37 -0700, David G. Johnston wrote:\n> This needs updating:\n> https://www.postgresql.org/docs/current/docguide-build-meson.html\n\nYou mean it should have a syntax target? Or that something else is out of\ndate?\nv17 looks good, I like the auto-generation.  I failed to notice I was looking at v16 when searching for a check docs target.\n> Also, as a sanity check, running that command takes my system 1 minute.  Any\n> idea what percentile that falls into?\n\nI think that's on the longer end - what OS/environment is this on? Even on\n~5yo CPU with turbo boost disabled it's 48s for me.  FWIW, the single-page\nhtml is a good bit faster, 29s on the same system.Ubuntu 22.04 running in AWS Workspaces Power.Amazon EC2 t3.xlargeIntel® Xeon(R) Platinum 8259CL CPU @ 2.50GHz × 416GiB RamDavid J.", "msg_date": "Thu, 9 May 2024 13:30:38 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" 
}, { "msg_contents": "Andres Freund <[email protected]> writes:\n\n> Hi,\n>\n> On 2024-05-09 20:53:27 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> Andres Freund <[email protected]> writes:\n>> > On 2024-05-09 20:12:38 +0100, Dagfinn Ilmari Mannsåker wrote:\n>> >> Attached is a patch which adds a check-docs target for meson, which\n>> >> takes 0.3s on my laptop.\n>> >> +checkdocs = custom_target('check-docs',\n>> >> + input: 'postgres.sgml',\n>> >> + output: 'check-docs',\n>> >> + depfile: 'postgres-full.xml.d',\n>> >> + command: [xmllint, '--nonet', '--valid', '--noout',\n>> >> + '--path', '@OUTDIR@', '@INPUT@'],\n>> >> + depends: doc_generated,\n>> >> + build_by_default: false,\n>> >> +)\n>> >> +alias_target('check-docs', checkdocs)\n>> >\n>> > Isn't the custom target redundant with postgres_full_xml? I.e. you could just\n>> > have the alias_target depend on postgres_full_xml?\n>> \n>> We could, but that would actually rebuild postgres-full.xml, not just\n>> check the syntax (but that only takes 0.1-0.2s longer),\n>\n> I don't think this is performance critical enough to worry about 0.1s. If\n> anything I think the performance argument goes the other way round - doing the\n> validation work multiple times is a waste of time...\n\nThese targets are both build_by_default: false, so the checkdocs target\nwon't both be built when building the docs. But reusing the\npostgres_full_xml target will save half a second of the half-minute\nbuild time if you then go on to actually build the docs without changing\nanything after the syntax check passes.\n\n>> and only run if the docs have been modified since it was last built (which I\n>> guess is fine, since if you haven't modified the docs you can't have\n>> introduced any syntax errors).\n>\n> That actually seems good to me.\n>\n>\n>> It's already possible to run that target directly, i.e.\n>> \n>> ninja doc/src/sgml/postgres-full.xml\n>> \n>> We could just document that in the list of meson doc targets, but a\n>> shortcut alias would roll off the fingers more easily and be more\n>> discoverable.\n>\n> Agreed.\n\nHere's a v2 patch that does it that way. Should we document that\ncheck-docs actually builds postgres-full.xml, and if so, where?\n\n> Greetings,\n>\n> Andres Freund\n\n- ilmari", "msg_date": "Thu, 09 May 2024 22:29:56 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there an undocumented Syntax Check in Meson?" } ]
[ { "msg_contents": "Hi,\n\nJust a few reminders about the open items list at\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items --\n\n- Please don't add issues to this list unless they are the result of\ndevelopment done during this release cycle. This is not a\ngeneral-purpose bug tracker.\n\n- The owner of an item is the person who committed the patch that\ncaused the problem, because that committer is responsible for cleaning\nup the mess. Of course, the patch author is warmly invited to help,\nespecially if they have aspirations of being a committer some day\nthemselves. Other help is welcome, too.\n\n- Fixing the stuff on this list is a time-boxed activity. We want to\nput out a release on time. If the stuff listed here doesn't get fixed,\nthe release management team will have to do something about it, like\nstart yelling at people, or forcing patches to be reverted, which will\nbe no fun for anyone involved, including but not limited to the\nrelease management team.\n\nA great number of things that were added as open items have already\nbeen resolved, but some of the remaining items have been there for a\nwhile. Here's a quick review of what's on the list as of this moment:\n\n* Incorrect Assert in heap_end/rescan for BHS. Either the description\nof this item is inaccurate, or we've been unable to fix an incorrect\nassert after more than a month. I interpret\nhttps://www.postgresql.org/message-id/54858BA1-084E-4F7D-B2D1-D15505E512FF%40yesql.se\nas a vote in favor of committing some patch by Melanie to fix this.\nEither Tomas should commit that patch, or Melanie should commit that\npatch, or somebody should say why that patch shouldn't be committed,\nor someone should request more help determining whether that patch is\nindeed the correct fix, or something. But let's not just sit on this.\n\n* Register ALPN protocol id with IANA. From the mailing list thread,\nit is abundantly clear that IANA is in no hurry to finish dealing with\nwhat seems to be a completely pro forma request from our end. I think\nwe just have to be patient.\n\n* not null constraints break dump/restore. I asked whether all of the\nissues had been addressed here and Justin Pryzby opined that the only\nthing that was still relevant for this release was a possible test\ncase change, which I would personally consider a good enough reason to\nat least consider calling this done. But it's not clear to me whether\nJustin's opinion is the consensus position, or perhaps more\nrelevantly, whether it's Álvaro's position.\n\n* Temporal PKs allow duplicates with empty ranges. Peter Eisentraut\nhas started working with Paul Jungwirth on this. Looks good so far.\n\n* Rename sslnegotiation \"requiredirect.\" option to \"directonly\". I\nstill think Heikki has implemented the wrong behavior here, and I\ndon't think this renaming is going to make any difference one way or\nthe other in how understandable it is. But if we're going to leave the\nbehavior as-is and do the renaming, then let's get that done.\n\n* Race condition with local injection point detach. Discussion is ongoing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 15:28:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "open items" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n\n> * Register ALPN protocol id with IANA. 
From the mailing list thread,\n> it is abundantly clear that IANA is in no hurry to finish dealing with\n> what seems to be a completely pro forma request from our end. I think\n> we just have to be patient.\n\nThis appears to have been approved without anyone mentioning it on the\nlist, and the registry now lists \"postgresql\" at the bottom:\n\nhttps://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids\n\n- ilmari\n\n\n", "msg_date": "Thu, 09 May 2024 20:38:24 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: open items" }, { "msg_contents": "On Thu, May 9, 2024 at 3:38 PM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > * Register ALPN protocol id with IANA. From the mailing list thread,\n> > it is abundantly clear that IANA is in no hurry to finish dealing with\n> > what seems to be a completely pro forma request from our end. I think\n> > we just have to be patient.\n>\n> This appears to have been approved without anyone mentioning it on the\n> list, and the registry now lists \"postgresql\" at the bottom:\n>\n> https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids\n\nNice, thanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 9 May 2024 15:39:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: open items" }, { "msg_contents": "On Thu, May 09, 2024 at 03:28:13PM -0400, Robert Haas wrote:\n> Just a few reminders about the open items list at\n> https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items --\n\nThanks for summarizing the situation.\n\n> * Race condition with local injection point detach. Discussion is ongoing.\n\nI have sent a patch for that yesterday, which I assume is going in the\nright direction to close entirely the loop:\nhttps://www.postgresql.org/message-id/Zjx9-2swyNg6E1y1%40paquier.xyz\n\nThere is still one point of detail related to the amount of\nflexibility we'd want for detachs (concurrent detach happening in\nparallel of an automated one in the shmem callback) that I'm not\nentirely sure about yet but I've proposed an idea to solve that as\nwell. I'm hopeful in getting that wrapped at the beginning of next\nweek.\n--\nMichael", "msg_date": "Fri, 10 May 2024 08:10:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: open items" }, { "msg_contents": "On 2024-May-09, Robert Haas wrote:\n\n> * not null constraints break dump/restore. I asked whether all of the\n> issues had been addressed here and Justin Pryzby opined that the only\n> thing that was still relevant for this release was a possible test\n> case change, which I would personally consider a good enough reason to\n> at least consider calling this done. But it's not clear to me whether\n> Justin's opinion is the consensus position, or perhaps more\n> relevantly, whether it's Álvaro's position.\n\nI have fixed the reported issues, so as far as these specific items go,\nwe could close the reported open item.\n\nHowever, in doing so I realized that some code is more complex than it\nneeds to be, and exposes users to ugliness that they don't need to see,\nso I posted additional patches. 
I intend to get these committed today.\n\nA possible complaint is that the upgrade mechanics which are mostly in\npg_dump with some pieces in pg_upgrade are not very explicitly\ndocumented. There are already comments in all relevant places, but\nperhaps an overall picture is necessary. I'll see about this, probably\nas a long comment somewhere.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n\n\n", "msg_date": "Fri, 10 May 2024 13:14:16 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: open items" }, { "msg_contents": "On Thu, May 9, 2024 at 3:28 PM Robert Haas <[email protected]> wrote:\n>\n> Just a few reminders about the open items list at\n> https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items --\n>\n> * Incorrect Assert in heap_end/rescan for BHS. Either the description\n> of this item is inaccurate, or we've been unable to fix an incorrect\n> assert after more than a month. I interpret\n> https://www.postgresql.org/message-id/54858BA1-084E-4F7D-B2D1-D15505E512FF%40yesql.se\n> as a vote in favor of committing some patch by Melanie to fix this.\n> Either Tomas should commit that patch, or Melanie should commit that\n> patch, or somebody should say why that patch shouldn't be committed,\n> or someone should request more help determining whether that patch is\n> indeed the correct fix, or something. But let's not just sit on this.\n\nSorry, yes, the trivial fix has been done for a while. There is one\noutstanding feedback on the patch: an update to one of the comments\nsuggested by Tomas. I got distracted by trying to repro and fix a bug\nfrom the section \"live issues affecting stable branches\". I will\nupdate this BHS patch by tonight and commit it once Tomas has a chance\nto +1.\n\nThanks,\nMelanie\n\n\n", "msg_date": "Fri, 10 May 2024 08:48:07 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: open items" }, { "msg_contents": "> On 10 May 2024, at 14:48, Melanie Plageman <[email protected]> wrote:\n> \n> On Thu, May 9, 2024 at 3:28 PM Robert Haas <[email protected]> wrote:\n>> \n>> Just a few reminders about the open items list at\n>> https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items --\n>> \n>> * Incorrect Assert in heap_end/rescan for BHS. Either the description\n>> of this item is inaccurate, or we've been unable to fix an incorrect\n>> assert after more than a month. I interpret\n>> https://www.postgresql.org/message-id/54858BA1-084E-4F7D-B2D1-D15505E512FF%40yesql.se\n>> as a vote in favor of committing some patch by Melanie to fix this.\n\nIt's indeed a vote for that.\n\n>> Either Tomas should commit that patch, or Melanie should commit that\n>> patch, or somebody should say why that patch shouldn't be committed,\n>> or someone should request more help determining whether that patch is\n>> indeed the correct fix, or something. But let's not just sit on this.\n> \n> Sorry, yes, the trivial fix has been done for a while. There is one\n> outstanding feedback on the patch: an update to one of the comments\n> suggested by Tomas. I got distracted by trying to repro and fix a bug\n> from the section \"live issues affecting stable branches\". 
I will\n> update this BHS patch by tonight and commit it once Tomas has a chance\n> to +1.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 10 May 2024 15:28:03 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: open items" }, { "msg_contents": "On Fri, May 10, 2024 at 8:48 AM Melanie Plageman\n<[email protected]> wrote:\n> Sorry, yes, the trivial fix has been done for a while. There is one\n> outstanding feedback on the patch: an update to one of the comments\n> suggested by Tomas. I got distracted by trying to repro and fix a bug\n> from the section \"live issues affecting stable branches\". I will\n> update this BHS patch by tonight and commit it once Tomas has a chance\n> to +1.\n\nGreat, thanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 May 2024 15:43:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: open items" }, { "msg_contents": "On Fri, May 10, 2024 at 7:14 AM Alvaro Herrera <[email protected]> wrote:\n> A possible complaint is that the upgrade mechanics which are mostly in\n> pg_dump with some pieces in pg_upgrade are not very explicitly\n> documented. There are already comments in all relevant places, but\n> perhaps an overall picture is necessary. I'll see about this, probably\n> as a long comment somewhere.\n\nI think that would be really helpful.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 10 May 2024 15:43:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: open items" }, { "msg_contents": "On 09/05/2024 22:39, Robert Haas wrote:\n> On Thu, May 9, 2024 at 3:38 PM Dagfinn Ilmari Mannsåker\n> <[email protected]> wrote:\n>> Robert Haas <[email protected]> writes:\n>>> * Register ALPN protocol id with IANA. From the mailing list thread,\n>>> it is abundantly clear that IANA is in no hurry to finish dealing with\n>>> what seems to be a completely pro forma request from our end. I think\n>>> we just have to be patient.\n>>\n>> This appears to have been approved without anyone mentioning it on the\n>> list, and the registry now lists \"postgresql\" at the bottom:\n>>\n>> https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids\n> \n> Nice, thanks!\n\nCommitted the change from \"TBD-pgsql\" to \"postgresql\", thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 11 May 2024 19:01:14 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: open items" }, { "msg_contents": "Hi,\n\nWe are down to three open items, all of which have proposed fixes.\nThat is great, but we need to keep things moving along, because\naccording to https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\nwe are due to release beta1 on May 23. That means that a release\nfreeze will be in effect from Saturday, May 18, which is four days\nfrom now. 
Since committing patches sometimes leads to unexpected\nsurprises, it would be best if the proposed fixes were put into place\nsooner rather than later, to allow time for any adjustments that may\nbe required.\n\n* Incorrect Assert in heap_end/rescan for BHS\nMelanie posted a new patch version 23 hours ago, Michael Paquier\nreviewed it 7 hours ago.\nSee https://www.postgresql.org/message-id/CAAKRu_a%2B5foybidkmh8FpFAV7iegxetPyPXQ5%3D%2B%2BkqZ%2BZDEUcg%40mail.gmail.com\n\n* Temporal PKs allow duplicates with empty ranges\nPeter proposes to revert the feature.\nSee https://www.postgresql.org/message-id/64c2b2ab-7ce9-475e-ac59-3bfec528bada%40eisentraut.org\n\n* Rename sslnegotiation \"requiredirect\" option to \"directonly\"\nLatest patch is at\nhttps://www.postgresql.org/message-id/3fdaf4b1-82d1-45bb-8175-f97ff53a1f01%40iki.fi\nand, I at least, like it\nThe basic proposal is to get rid of the idea of having a way to try\nboth modes (negotiated/direct) and make sslnegotiation just pick one\nbehavior or the other.\n\nThanks,\n\n...Robert\n\n\n", "msg_date": "Tue, 14 May 2024 09:52:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: open items" }, { "msg_contents": "On Tue, May 14, 2024 at 09:52:35AM -0400, Robert Haas wrote:\n> We are down to three open items, all of which have proposed fixes.\n> That is great, but we need to keep things moving along, because\n> according to https://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items\n> we are due to release beta1 on May 23. That means that a release\n> freeze will be in effect from Saturday, May 18, which is four days\n> from now. Since committing patches sometimes leads to unexpected\n> surprises, it would be best if the proposed fixes were put into place\n> sooner rather than later, to allow time for any adjustments that may\n> be required.\n\nAs of this minute, the open item list is empty @-@.\n\nThanks all for the various resolutions, updates and commits!\n--\nMichael", "msg_date": "Fri, 17 May 2024 09:11:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: open items" } ]
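On the ALPN item closed above: what the IANA registration buys is a fixed token that TLS clients can advertise during the handshake. As a rough illustration only (plain OpenSSL, not libpq's actual code; advertise_postgresql_alpn is a made-up name), the registered "postgresql" id travels as an RFC 7301 length-prefixed entry:

    #include <openssl/ssl.h>

    /* Illustration only: offer the IANA-registered "postgresql" ALPN id. */
    static int
    advertise_postgresql_alpn(SSL *ssl)
    {
        /* RFC 7301 wire format: one length byte, then the protocol name */
        static const unsigned char alpn[] = "\x0apostgresql";

        /* SSL_set_alpn_protos() returns 0 on success, nonzero on failure */
        return SSL_set_alpn_protos(ssl, alpn, sizeof(alpn) - 1);
    }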
[ { "msg_contents": "As we know, the deadlock error message isn't the most friendly one. All the client gets back is process PIDs, transaction IDs, and lock types. You have to check the server log to retrieve lock details. This is tedious.\n\nIn one of my apps I even added a deadlock exception handler on the app side to query pg_stat_activity for processes involved in the deadlock and include their application names and queries in the exception message. It is a little racy but works well enough.\n\nIdeally I'd like to see that data coming from Postgres upon detecting the deadlock. That's why I made this small change.\n\nThe change makes the deadlock error look as follows - the new element is the application name or \"<insufficient privilege>\" in its place if the activity user doesn't match the current user (and the current use isn't a superuser):\n\npostgres=*> SELECT * FROM q WHERE id = 2 FOR UPDATE;\nERROR: deadlock detected\nDETAIL: Process 194520 (application_name: <insufficient privilege>) waits for ShareLock on transaction 776; blocked by process 194521.\nProcess 194521 (application_name: woof) waits for ShareLock on transaction 775; blocked by process 194520.\nHINT: See server log for query details.\nCONTEXT: while locking tuple (0,2) in relation \"q\"\n\nI added a new LocalPgBackendCurrentActivity struct combining application name and query string pointers and a sameProcess boolean. It is returned by value, since it's small. Performance-wise, this is a a part of the deadlock handler, if the DB hits it frequently, there are much more serious problems going on.\n\nI could extend it by sending the queries back to the client, with an identical security check, but this is a potential information exposure of whatever's in the query plaintext. Another extension is to replace \"(application_name: <insufficient privilege>)\" with something better like \"(unknown application_name)\", or even nothing.\n\nAttached patch is for master, 2fb7560c. It doesn't contain any tests.\n\nLet me know if you approve of the patch and if it makes sense to continue working on it.\n\nBest,\nKaroline", "msg_date": "Thu, 09 May 2024 23:44:03 +0000", "msg_from": "Karoline Pauls <[email protected]>", "msg_from_op": true, "msg_subject": "Augmenting the deadlock message with application_name" }, { "msg_contents": "On Thu, May 9, 2024 at 11:44:03PM +0000, Karoline Pauls wrote:\n> As we know, the deadlock error message isn't the most friendly one. All the\n> client gets back is process PIDs, transaction IDs, and lock types. You have to\n> check the server log to retrieve lock details. This is tedious.\n> \n> In one of my apps I even added a deadlock exception handler on the app side to\n> query pg_stat_activity for processes involved in the deadlock and include their\n> application names and queries in the exception message. It is a little racy but\n> works well enough.\n> \n> Ideally I'd like to see that data coming from Postgres upon detecting the\n> deadlock. 
That's why I made this small change.\n> \n> The change makes the deadlock error look as follows - the new element is the\n> application name or \"<insufficient privilege>\" in its place if the activity\n> user doesn't match the current user (and the current use isn't a superuser):\n> \n> postgres=*> SELECT * FROM q WHERE id = 2 FOR UPDATE;\n> ERROR: deadlock detected\n> DETAIL: Process 194520 (application_name: <insufficient privilege>) waits for\n> ShareLock on transaction 776; blocked by process 194521.\n> Process 194521 (application_name: woof) waits for ShareLock on transaction 775;\n> blocked by process 194520.\n> HINT: See server log for query details.\n> CONTEXT: while locking tuple (0,2) in relation \"q\"\n\nlog_line_prefix supports application name --- why would you not use\nthat?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 10 May 2024 15:17:18 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Augmenting the deadlock message with application_name" }, { "msg_contents": "On Friday, 10 May 2024 at 20:17, Bruce Momjian <[email protected]> wrote:\n>\n> log_line_prefix supports application name --- why would you not use\n> that?\n>\n\nlog_line_prefix is effective in the server log. This change is mostly about improving the message sent back to the client. While the server log is also changed to reflect the client message, it doesn't need to be.\n\nAdditionally, with `%a` added to log_line_prefix, the server log would only contain the application name of the client affected by the deadlock, not the application names of all other clients involved in it.\n\nExample server log with application names (here: a and b) added to the log prefix:\n\n2024-05-10 20:39:58.459 BST [197591] (a)ERROR: deadlock detected\n2024-05-10 20:39:58.459 BST [197591] (a)DETAIL: Process 197591 (application_name: a) waits for ShareLock on transaction 782; blocked by process 197586.\n Process 197586 (application_name: b) waits for ShareLock on transaction 781; blocked by process 197591.\n Process 197591, (application_name: a): SELECT * FROM q WHERE id = 2 FOR UPDATE;\n Process 197586, (application_name: b): SELECT * FROM q WHERE id = 1 FOR UPDATE;\n2024-05-10 20:39:58.459 BST [197591] (a)HINT: See server log for query details.\n2024-05-10 20:39:58.459 BST [197591] (a)CONTEXT: while locking tuple (0,2) in relation \"q\"\n2024-05-10 20:39:58.459 BST [197591] (a)STATEMENT: SELECT * FROM q WHERE id = 2 FOR UPDATE;\n\nAll log line prefixes refer to the application a. The message has both a and b.\n\nAnyway, the server log is not the important part here. The crucial UX feature is the client getting application names back, so browsing through server logs can be avoided.\n\nBest,\nKaroline\n\n\n", "msg_date": "Fri, 10 May 2024 20:10:58 +0000", "msg_from": "Karoline Pauls <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Augmenting the deadlock message with application_name" }, { "msg_contents": "Karoline Pauls <[email protected]> writes:\n> On Friday, 10 May 2024 at 20:17, Bruce Momjian <[email protected]> wrote:\n>> log_line_prefix supports application name --- why would you not use\n>> that?\n\n> log_line_prefix is effective in the server log. This change is mostly about improving the message sent back to the client. 
While the server log is also changed to reflect the client message, it doesn't need to be.\n\nIt's normally necessary to look at the server log anyway if you want\nto figure out what caused the deadlock, since the client message\nintentionally doesn't provide query texts. I think this proposal\ndoesn't move the goalposts noticeably: it seems likely to me that\nin many installations the sessions would mostly all have the same\napplication_name, or at best not-too-informative names like \"psql\"\nversus \"PostgreSQL JDBC Driver\". (If we thought these names *were*\nreally informative about what other sessions are doing, we'd probably\nhave to hide them from unprivileged users in pg_stat_activity, and\nthen there'd also be a security concern here.)\n\nOn the whole I'd reject this proposal as causing churn in\napplication-visible behavior for very little gain.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 May 2024 17:01:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Augmenting the deadlock message with application_name" }, { "msg_contents": "On Fri, May 10, 2024 at 08:10:58PM +0000, Karoline Pauls wrote:\n> On Friday, 10 May 2024 at 20:17, Bruce Momjian <[email protected]>\n> wrote:\n> >\n> > log_line_prefix supports application name --- why would you not use\n> > that?\n> >\n>\n> log_line_prefix is effective in the server log. This change is mostly\n> about improving the message sent back to the client. While the server\n> log is also changed to reflect the client message, it doesn't need to\n> be.\n\nI was hoping client_min_messages would show the application name, but it\ndoesn't but your bigger point is below.\n\n> Additionally, with `%a` added to log_line_prefix, the server log\n> would only contain the application name of the client affected by the\n> deadlock, not the application names of all other clients involved in\n> it.\n\nYeah, getting the application names of the pids reported in the log\nrequires looking backward in the logs to find out what the most recent\nlog line was for the pids involved.\n\nFrankly, I think it would be more useful to show the session_id in the\ndeadlock so you could then use that to look back to any details you want\nin the logs, not only the application name.\n\n> Anyway, the server log is not the important part here. The crucial\n> UX feature is the client getting application names back, so browsing\n> through server logs can be avoided.\n\nWell, we don't want to report too much information because it gets\nconfusing.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 10 May 2024 17:09:57 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Augmenting the deadlock message with application_name" } ]
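For context on the mechanics discussed above: the client-visible DETAIL of a deadlock is assembled per participant in DeadLockReport() (src/backend/storage/lmgr/deadlock.c), so the proposal amounts to widening that format string. A hedged sketch of the shape, not the submitted patch; deadlock_get_appname() is a hypothetical stand-in for the patch's LocalPgBackendCurrentActivity lookup and its pg_stat_activity-style privilege check:

    /* Inside DeadLockReport()'s loop over the deadlock participants. */
    for (i = 0; i < nDeadlockDetails; i++)
    {
        DEADLOCK_INFO *info = &deadlockDetails[i];
        int         nextpid = deadlockDetails[(i + 1) % nDeadlockDetails].pid;
        const char *appname = deadlock_get_appname(info->pid);  /* hypothetical */

        resetStringInfo(&locktagbuf);
        DescribeLockTag(&locktagbuf, &info->locktag);

        if (i > 0)
            appendStringInfoChar(&clientbuf, '\n');

        appendStringInfo(&clientbuf,
                         _("Process %d (application_name: %s) waits for %s on %s; blocked by process %d."),
                         info->pid,
                         appname ? appname : _("<insufficient privilege>"),
                         GetLockmodeName(info->locktag.locktag_lockmethodid,
                                         info->lockmode),
                         locktagbuf.data,
                         nextpid);
    }

Bruce's alternative of reporting a session id instead of (or alongside) application_name would slot into the same appendStringInfo() call.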
[ { "msg_contents": "Hi,\n\nAnalyze logs within autovacuum uses specific variables\nVacuumPage{Hit,Miss,Dirty} to track the buffer usage count. However,\npgBufferUsage already provides block usage tracking and handles more cases\n(temporary tables, parallel workers...).\n\nThose variables were only used in two places, block usage reporting in\nverbose vacuum and analyze. 5cd72cc0c5017a9d4de8b5d465a75946da5abd1d\nremoved their usage in the vacuum command as part of a bugfix.\n\nThis patch replaces those Vacuum specific variables by pgBufferUsage\nin analyze. This makes VacuumPage{Hit,Miss,Dirty} unused and removable.\nThis commit removes both their calls in bufmgr and their declarations.\n\nRegards,\nAnthonin", "msg_date": "Fri, 10 May 2024 10:54:07 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Fri, May 10, 2024 at 10:54:07AM +0200, Anthonin Bonnefoy wrote:\n> This patch replaces those Vacuum specific variables by pgBufferUsage\n> in analyze. This makes VacuumPage{Hit,Miss,Dirty} unused and removable.\n> This commit removes both their calls in bufmgr and their declarations.\n\nHmm, yeah, it looks like you're right. I can track all the blocks\nread, hit and dirtied for VACUUM and ANALYZE in all the code path\nwhere these removed variables were incremented. This needs some\nruntime check to make sure that the calculations are consistent before\nand after the fact (cannot do that now).\n\n appendStringInfo(&buf, _(\"buffer usage: %lld hits, %lld misses, %lld dirtied\\n\"),\n- (long long) AnalyzePageHit,\n- (long long) AnalyzePageMiss,\n- (long long) AnalyzePageDirty);\n+ (long long) (bufferusage.shared_blks_hit + bufferusage.local_blks_hit),\n+ (long long) (bufferusage.shared_blks_read + bufferusage.local_blks_read),\n+ (long long) (bufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied));\n\nPerhaps this should say \"read\" rather than \"miss\" in the logs as the\ntwo read variables for the shared and local blocks are used? For\nconsistency, at least.\n\nThat's not material for v17, only for v18.\n--\nMichael", "msg_date": "Fri, 10 May 2024 19:40:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "Thanks for having a look.\n\nOn Fri, May 10, 2024 at 12:40 PM Michael Paquier <[email protected]>\nwrote:\n\n> This needs some runtime check to make sure that the calculations\n> are consistent before and after the fact (cannot do that now).\n>\nYeah, testing this is also a bit painful as buffer usage of analyze is only\ndisplayed in the logs during autoanalyze. 
While looking at this, I've\nthought of additional changes that could make testing easier and improve\nconsistency with VACUUM VERBOSE:\n- Have ANALYZE VERBOSE outputs the buffer usage stats\n- Add Wal usage to ANALYZE VERBOSE\n\nanalyze verbose output would look like:\npostgres=# analyze (verbose) pgbench_accounts ;\nINFO: analyzing \"public.pgbench_accounts\"\nINFO: \"pgbench_accounts\": scanned 1640 of 1640 pages, containing 100000\nlive rows and 0 dead rows; 30000 rows in sample, 100000 estimated total rows\nINFO: analyze of table \"postgres.public.pgbench_accounts\"\navg read rate: 124.120 MB/s, avg write rate: 0.110 MB/s\nbuffer usage: 533 hits, 1128 reads, 1 dirtied\nWAL usage: 12 records, 1 full page images, 5729 bytes\nsystem usage: CPU: user: 0.06 s, system: 0.00 s, elapsed: 0.07 s\n\nPerhaps this should say \"read\" rather than \"miss\" in the logs as the\n> two read variables for the shared and local blocks are used? For\n> consistency, at least.\n>\nSounds good.\n\nThat's not material for v17, only for v18.\n>\n Definitely\n\nI've split the patch in two parts\n1: Removal of the vacuum specific variables, this is the same as the\ninitial patch.\n2: Add buffer and wal usage to analyze verbose output + rename miss to\nreads\n\nRegards,\nAnthonin", "msg_date": "Tue, 28 May 2024 11:00:51 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "Hi Anthonin,\n\nI suggest assigning values\nbufferusage.shared_blks_read + bufferusage.local_blks_read\nand\nbufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied\nto new variables and using them. This would keep the changed lines within\nthe 80 symbols limit, and make the code more readable overall.\n\nI also believe that changing \"misses\" to \"reads\" should belong to 0001\npatch since we only change it because we replace AnalyzePageMiss with\nbufferusage.shared_blks_read + bufferusage.local_blks_read in 0001.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/\n\n>\n\nHi Anthonin,I suggest assigning valuesbufferusage.shared_blks_read + bufferusage.local_blks_readandbufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtiedto new variables and using them. This would keep the changed lines withinthe 80 symbols limit, and make the code more readable overall.I also believe that changing \"misses\" to \"reads\" should belong to 0001patch since we only change it because we replace AnalyzePageMiss withbufferusage.shared_blks_read + bufferusage.local_blks_read in 0001.Best regards,Karina LitskevichPostgres Professional: http://postgrespro.com/", "msg_date": "Fri, 5 Jul 2024 17:24:41 +0300", "msg_from": "Karina Litskevich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "I wrote:\n\n>\n> I suggest assigning values\n> bufferusage.shared_blks_read + bufferusage.local_blks_read\n> and\n> bufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied\n> to new variables and using them. This would keep the changed lines within\n> the 80 symbols limit, and make the code more readable overall.\n>\n\nThe same applies to\nbufferusage.shared_blks_hit + bufferusage.local_blks_hit\n\nI wrote:I suggest assigning valuesbufferusage.shared_blks_read + bufferusage.local_blks_readandbufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtiedto new variables and using them. 
This would keep the changed lines withinthe 80 symbols limit, and make the code more readable overall.The same applies tobufferusage.shared_blks_hit + bufferusage.local_blks_hit", "msg_date": "Fri, 5 Jul 2024 17:32:30 +0300", "msg_from": "Karina Litskevich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "Hi,\n\nThanks for the review, I've updated the patches with the suggestions:\n- moved renaming of misses to reads to the first patch\n- added intermediate variables for total blks usage\n\nI've also done some additional tests using the provided\nvacuum_analyze_buffer_usage.sql script. It relies on\npg_stat_statements to check the results (only pgss gives information\non dirtied buffers). It gives the following output:\n\n psql:vacuum_analyze_buffer_usage.sql:21: INFO: vacuuming\n\"postgres.pg_temp_7.vacuum_blks_stat_test\"\n ...\n buffer usage: 105 hits, 3 reads, 6 dirtied\n ...\n query | sum_hit | sum_read | sum_dirtied\n --------------------+---------+----------+-------------\n VACUUM (VERBOSE... | 105 | 3 | 6\n\nFor vacuum, we have the same results with SKIP_DATABASE_STATS. Without\nthis setting, we would have block usage generated by\nvac_update_datfrozenxid outside of vacuum_rel and therefore not\ntracked by the verbose output. For the second test, the second patch\nis needed to have ANALYZE (VERBOSE) output the block usage. It will\noutput the following:\n\n psql:vacuum_analyze_buffer_usage.sql:29: INFO: analyzing\n\"pg_temp_7.vacuum_blks_stat_test\"\n ...\n buffer usage: 84 hits, 33 reads, 2 dirtied\n ...\n query | sum_hit | sum_read | sum_dirtied\n ---------------------+---------+----------+-------------\n ANALYZE (VERBOSE... | 91 | 38 | 2\n\nThere's additional buffer hits/reads reported by pgss, those are from\nanalyze_rel opening the relations in try_relation_open and are not\ntracked by the ANALYZE VERBOSE.", "msg_date": "Mon, 8 Jul 2024 11:35:19 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Mon, Jul 8, 2024 at 2:35 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for the review, I've updated the patches with the suggestions:\n> - moved renaming of misses to reads to the first patch\n> - added intermediate variables for total blks usage\n>\n\nThank you for working on this. 0001 patch looks good to me.\n\nI like the 0002 patch idea. But with this patch, ANALYZE VERBOSE\nwrites something like this:\n\nINFO: analyzing \"public.test\"\nINFO: \"test\": scanned 443 of 443 pages, containing 100000 live rows\nand 0 dead rows; 30000 rows in sample, 100000 estimated total rows\nINFO: analyze of table \"postgres.public.test\"\navg read rate: 38.446 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 265 hits, 187 reads, 0 dirtied\nWAL usage: 4 records, 0 full page images, 637 bytes\nsystem usage: CPU: user: 0.03 s, system: 0.00 s, elapsed: 0.03 s\n\nWhich seems not to be consistent with what we do in VACUUM VERBOSE in\nsome points. 
For example, in VACUUM VERBOSE outputs, we write\nstatistics of pages, tuples, buffer usage, and WAL usage in one INFO\nmessage:\n\nINFO: vacuuming \"postgres.public.test\"\nINFO: finished vacuuming \"postgres.public.test\": index scans: 0\npages: 0 removed, 443 remain, 1 scanned (0.23% of total)\ntuples: 0 removed, 100000 remain, 0 are dead but not yet removable\nremovable cutoff: 754, which was 0 XIDs old when operation ended\nfrozen: 0 pages from table (0.00% of total) had 0 tuples frozen\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead\nitem identifiers removed\navg read rate: 23.438 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 5 hits, 3 reads, 0 dirtied\nWAL usage: 0 records, 0 full page images, 0 bytes\nsystem usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n\nI'd suggest writing analyze verbose messages as something like:\n\nINFO: finished analyzing \"postgres.public.test\"\npages: 443 of 443 scanned\ntuples: 100000 live tuples, 0 are dead; 30000 tuples in sample, 100000\nestimated total tuples\navg read rate: 38.446 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 265 hits, 187 reads, 0 dirtied\nWAL usage: 4 records, 0 full page images, 637 bytes\nsystem usage: CPU: user: 0.03 s, system: 0.00 s, elapsed: 0.03 s\n\nThe first line would vary depending on whether an autovacuum worker or\nnot. And the above suggestion includes a change of term \"row\" to\n\"tuple\" for better consistency with VACUUM VERBOSE outputs. I think it\nwould be great if autoanalyze also writes logs in the same format.\nIIUC with the patch, autoanalyze logs don't include the page and tuple\nstatistics.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jul 2024 13:59:10 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Mon, Jul 22, 2024 at 10:59 PM Masahiko Sawada <[email protected]> wrote:\n> The first line would vary depending on whether an autovacuum worker or\n> not. And the above suggestion includes a change of term \"row\" to\n> \"tuple\" for better consistency with VACUUM VERBOSE outputs. I think it\n> would be great if autoanalyze also writes logs in the same format.\n> IIUC with the patch, autoanalyze logs don't include the page and tuple\n> statistics.\n\nOne issue is that the number of scanned pages, live tuples and dead\ntuples is only available in acquire_sample_rows which is where the log\ncontaining those stats is emitted. I've tried to implement the\nfollowing in 0003:\n- Sampling functions now accept an AcquireSampleStats struct to store\npages and tuples stats\n- Log is removed from sampling functions\n- do_analyze_rel now outputs scanned and tuples statistics when\nrelevant. sampling from fdw doesn't provide those statistics so they\nare not displayed in those cases.\n\nThis ensures that analyze logs are only emitted in do_analyze_rel,\nallowing to display the same output for both autoanalyze and ANALYZE\nVERBOSE. 
With those changes, we have the following output for analyze\nverbose of a table:\n\nanalyze (verbose) pgbench_accounts ;\nINFO: analyzing \"public.pgbench_accounts\"\nINFO: analyze of table \"postgres.public.pgbench_accounts\"\npages: 1640 of 1640 scanned\ntuples: 100000 live tuples, 0 are dead; 30000 tuples in samples,\n100000 estimated total tuples\navg read rate: 174.395 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 285 hits, 1384 reads, 0 dirtied\nWAL usage: 14 records, 0 full page images, 1343 bytes\nsystem usage: CPU: user: 0.05 s, system: 0.00 s, elapsed: 0.06 s\n\nFor a file_fdw, the output will look like:\n\nanalyze (verbose) pglog;\nINFO: analyzing \"public.pglog\"\nINFO: analyze of table \"postgres.public.pglog\"\ntuples: 30000 tuples in samples, 60042 estimated total tuples\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 182 hits, 0 reads, 0 dirtied\nWAL usage: 118 records, 0 full page images, 13086 bytes\nsystem usage: CPU: user: 0.40 s, system: 0.00 s, elapsed: 0.41 s\n\nI've also slightly modified 0002 to display \"automatic analyze\" when\nwe're inside an autovacuum worker, similar to what's done with vacuum\noutput.\n\nRegards,\nAnthonin", "msg_date": "Wed, 24 Jul 2024 10:57:57 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Wed, Jul 24, 2024 at 1:58 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n>\n> On Mon, Jul 22, 2024 at 10:59 PM Masahiko Sawada <[email protected]> wrote:\n> > The first line would vary depending on whether an autovacuum worker or\n> > not. And the above suggestion includes a change of term \"row\" to\n> > \"tuple\" for better consistency with VACUUM VERBOSE outputs. I think it\n> > would be great if autoanalyze also writes logs in the same format.\n> > IIUC with the patch, autoanalyze logs don't include the page and tuple\n> > statistics.\n>\n> One issue is that the number of scanned pages, live tuples and dead\n> tuples is only available in acquire_sample_rows which is where the log\n> containing those stats is emitted. I've tried to implement the\n> following in 0003:\n> - Sampling functions now accept an AcquireSampleStats struct to store\n> pages and tuples stats\n> - Log is removed from sampling functions\n> - do_analyze_rel now outputs scanned and tuples statistics when\n> relevant. sampling from fdw doesn't provide those statistics so they\n> are not displayed in those cases.\n\nStudying how we write verbose log messages, it seems that currently\nANALYZE (autoanalyze) lets tables and FDWs write logs in its own\nformat. Which makes sense to me as some instruments for heap such as\ndead tuple might not be necessary for FDWs and FDW might want to write\nother information such as executed queries. An alternative idea would\nbe to pass StringInfo to AcquireSampleRowsFunc() so that callback can\nwrite its messages there. This is somewhat similar to what we do in\nthe EXPLAIN command (cf, ExplainPropertyText() etc). 
It could be too\nmuch but I think it could be better than writing logs in the single\nformat.\n\n>\n> I've also slightly modified 0002 to display \"automatic analyze\" when\n> we're inside an autovacuum worker, similar to what's done with vacuum\n> output.\n\n+1\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:34:46 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Sat, Jul 27, 2024 at 12:35 AM Masahiko Sawada <[email protected]> wrote:\n> An alternative idea would\n> be to pass StringInfo to AcquireSampleRowsFunc() so that callback can\n> write its messages there. This is somewhat similar to what we do in\n> the EXPLAIN command (cf, ExplainPropertyText() etc). It could be too\n> much but I think it could be better than writing logs in the single\n> format.\n\nI've tested this approach, it definitely looks better. I've added a\nlogbuf StringInfo to AcquireSampleRowsFunc which will receive the\nlogs. elevel was removed as it is not used anymore. Since everything\nis in the same log line, I've removed the relation name in the acquire\nsample functions.\n\nFor partitioned tables, I've also added the processed partition table\nbeing sampled. The output will look like:\n\nINFO: analyze of table \"postgres.public.test_partition\"\nSampling rows from child \"public.test_partition_1\"\npages: 5 of 5 scanned\ntuples: 999 live tuples, 0 are dead; 999 tuples in sample, 999\nestimated total tuples\nSampling rows from child \"public.test_partition_2\"\npages: 5 of 5 scanned\ntuples: 1000 live tuples, 0 are dead; 1000 tuples in sample, 1000\nestimated total tuples\navg read rate: 2.604 MB/s, avg write rate: 0.000 MB/s\n...\n\nFor a file_fdw, the output will be:\n\nINFO: analyze of table \"postgres.public.pglog\"\ntuples: 60043 tuples; 30000 tuples in sample\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n...\n\nRegards,\nAnthonin", "msg_date": "Mon, 29 Jul 2024 09:11:52 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "Hi,\n\nOn Mon, Jul 29, 2024 at 12:12 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n>\n> On Sat, Jul 27, 2024 at 12:35 AM Masahiko Sawada <[email protected]> wrote:\n> > An alternative idea would\n> > be to pass StringInfo to AcquireSampleRowsFunc() so that callback can\n> > write its messages there. This is somewhat similar to what we do in\n> > the EXPLAIN command (cf, ExplainPropertyText() etc). 
It could be too\n> > much but I think it could be better than writing logs in the single\n> > format.\n>\n\nI have one comment on 0001 patch:\n\n /*\n * Calculate the difference in the Page\nHit/Miss/Dirty that\n * happened as part of the analyze by\nsubtracting out the\n * pre-analyze values which we saved above.\n */\n- AnalyzePageHit = VacuumPageHit - AnalyzePageHit;\n- AnalyzePageMiss = VacuumPageMiss - AnalyzePageMiss;\n- AnalyzePageDirty = VacuumPageDirty - AnalyzePageDirty;\n+ memset(&bufferusage, 0, sizeof(BufferUsage));\n+ BufferUsageAccumDiff(&bufferusage,\n&pgBufferUsage, &startbufferusage);\n+\n+ total_blks_hit = bufferusage.shared_blks_hit +\n+ bufferusage.local_blks_hit;\n+ total_blks_read = bufferusage.shared_blks_read +\n+ bufferusage.local_blks_read;\n+ total_blks_dirtied = bufferusage.shared_blks_dirtied +\n+ bufferusage.local_blks_dirtied;\n\nThe comment should also be updated or removed.\n\nAnd here are some comments on 0002 patch:\n\n- TimestampDifference(starttime, endtime, &secs_dur, &usecs_dur);\n+ delay_in_ms = TimestampDifferenceMilliseconds(starttime, endtime);\n\nI think that this change is to make vacuum code consistent with\nanalyze code, particularly the following part:\n\n /*\n * We do not expect an analyze to take > 25 days and it simplifies\n * things a bit to use TimestampDifferenceMilliseconds.\n */\n delay_in_ms = TimestampDifferenceMilliseconds(starttime, endtime);\n\nHowever, as the above comment says, delay_in_ms can have a duration up\nto 25 days. I guess it would not be a problem for analyze cases but\ncould be in vacuum cases as vacuum could sometimes be running for a\nvery long time. I've seen vacuums running even for almost 1 month. So\nI think we should keep this part.\n\n---\n /* measure elapsed time iff autovacuum logging requires it */\n- if (AmAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n+ if (instrument)\n\nThe comment should also be updated.\n\n---\nCould you split the 0002 patch into two patches? One is to have\nANALYZE command (with VERBOSE option) write the buffer usage, and\nsecond one is to add WAL usage to both ANALYZE command and\nautoanalyze. I think adding WAL usage to ANALYZE could be\ncontroversial as it should not be WAL-intensive operation, so I'd like\nto keep them separate.\n\n\n> I've tested this approach, it definitely looks better. I've added a\n> logbuf StringInfo to AcquireSampleRowsFunc which will receive the\n> logs. elevel was removed as it is not used anymore. Since everything\n> is in the same log line, I've removed the relation name in the acquire\n> sample functions.\n>\n> For partitioned tables, I've also added the processed partition table\n> being sampled. The output will look like:\n>\n> INFO: analyze of table \"postgres.public.test_partition\"\n> Sampling rows from child \"public.test_partition_1\"\n> pages: 5 of 5 scanned\n> tuples: 999 live tuples, 0 are dead; 999 tuples in sample, 999\n> estimated total tuples\n> Sampling rows from child \"public.test_partition_2\"\n> pages: 5 of 5 scanned\n> tuples: 1000 live tuples, 0 are dead; 1000 tuples in sample, 1000\n> estimated total tuples\n> avg read rate: 2.604 MB/s, avg write rate: 0.000 MB/s\n> ...\n>\n> For a file_fdw, the output will be:\n>\n> INFO: analyze of table \"postgres.public.pglog\"\n> tuples: 60043 tuples; 30000 tuples in sample\n> avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n> ...\n\nThank you for updating the patch. 
With your patch, I got the following\nlogs for when executing ANALYZE VERBOSE on a partitioned table:\n\npostgres(1:3971560)=# analyze (verbose) p;\nINFO: analyzing \"public.p\" inheritance tree\nINFO: analyze of table \"postgres.public.p\"\nSampling rows from child \"public.c1\"\npages: 10000 of 14750 scanned\ntuples: 2259833 live tuples, 0 are dead; 10000 tuples in sample,\n3333254 estimated total tuples\nSampling rows from child \"public.c2\"\npages: 10000 of 14750 scanned\ntuples: 2260000 live tuples, 0 are dead; 10000 tuples in sample,\n3333500 estimated total tuples\nSampling rows from child \"public.c3\"\npages: 10000 of 14750 scanned\ntuples: 2259833 live tuples, 0 are dead; 10000 tuples in sample,\n3333254 estimated total tuples\navg read rate: 335.184 MB/s, avg write rate: 0.031 MB/s\nbuffer usage: 8249 hits, 21795 reads, 2 dirtied\nWAL usage: 6 records, 1 full page images, 8825 bytes\nsystem usage: CPU: user: 0.46 s, system: 0.03 s, elapsed: 0.50 s\n:\n\nWhereas the current log messages are like follow:\n\nINFO: analyzing \"public.p\" inheritance tree\nINFO: \"c1\": scanned 10000 of 14750 pages, containing 2259833 live\nrows and 0 dead rows; 10000 rows in sample, 3333254 estimated total\nrows\nINFO: \"c2\": scanned 10000 of 14750 pages, containing 2259834 live\nrows and 0 dead rows; 10000 rows in sample, 3333255 estimated total\nrows\nINFO: \"c3\": scanned 10000 of 14750 pages, containing 2259833 live\nrows and 0 dead rows; 10000 rows in sample, 3333254 estimated total\nrows\n:\n\nIt seems to me that the current style is more concise and readable (3\nrows per table vs. 1 row per table). We might need to consider a\nbetter output format for partitioned tables as the number of\npartitions could be high. I don't have a good idea now, though.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jul 2024 16:13:11 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "Hi,\n\nOn Tue, Jul 30, 2024 at 1:13 AM Masahiko Sawada <[email protected]> wrote:\n> I have one comment on 0001 patch:\n> The comment should also be updated or removed.\n\nOk, I've removed the comment.\n\n> However, as the above comment says, delay_in_ms can have a duration up\n> to 25 days. I guess it would not be a problem for analyze cases but\n> could be in vacuum cases as vacuum could sometimes be running for a\n> very long time. I've seen vacuums running even for almost 1 month. So\n> I think we should keep this part.\n\nGood point, I've reverted to using TimestampDifference for vacuum.\n\n> /* measure elapsed time iff autovacuum logging requires it */\n> - if (AmAutoVacuumWorkerProcess() && params->log_min_duration >= 0)\n> + if (instrument)\n>\n> The comment should also be updated.\n\nUpdated.\n\n> Could you split the 0002 patch into two patches? One is to have\n> ANALYZE command (with VERBOSE option) write the buffer usage, and\n> second one is to add WAL usage to both ANALYZE command and\n> autoanalyze. I think adding WAL usage to ANALYZE could be\n> controversial as it should not be WAL-intensive operation, so I'd like\n> to keep them separate.\n\nI've split the patch, 0002 makes verbose outputs the same as\nautoanalyze logs with buffer/io/system while 0003 adds WAL usage\noutput.\n\n> It seems to me that the current style is more concise and readable (3\n> rows per table vs. 1 row per table). 
We might need to consider a\n> better output format for partitioned tables as the number of\n> partitions could be high. I don't have a good idea now, though.\n\nA possible change would be to pass an inh flag when an acquirefunc is\ncalled from acquire_inherited_sample_rows. The acquirefunc could then\nuse an alternative log format to append to logbuf. This way, we could\nhave a more compact format for partitioned tables.\n\nRegards,\nAnthonin", "msg_date": "Tue, 30 Jul 2024 09:21:25 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Tue, Jul 30, 2024 at 9:21 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n> A possible change would be to pass an inh flag when an acquirefunc is\n> called from acquire_inherited_sample_rows. The acquirefunc could then\n> use an alternative log format to append to logbuf. This way, we could\n> have a more compact format for partitioned tables.\n\nI've just tested this, the result isn't great as it creates an\ninconsistent output\n\nINFO: analyze of table \"postgres.public.test_partition\"\n\"test_partition_1\": scanned 5 of 5 pages, containing 999 live tuples\nand 0 dead tuples; 999 rows in sample, 999 estimated total rows\n\"test_partition_2\": scanned 5 of 5 pages, containing 1000 live tuples\nand 0 dead tuples; 1000 rows in sample, 1000 estimated total rows\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n...\nINFO: analyze of table \"postgres.public.test_partition_1\"\npages: 5 of 5 scanned\ntuples: 999 live tuples, 0 are dead; 999 tuples in sample, 999\nestimated total tuples\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n\nMaybe the best approach is to always use the compact form?\n\nINFO: analyze of table \"postgres.public.test_partition\"\n\"test_partition_1\": scanned 5 of 5 pages, containing 999 live tuples\nand 0 dead tuples; 999 tuples in sample, 999 estimated total tuples\n\"test_partition_2\": scanned 5 of 5 pages, containing 1000 live tuples\nand 0 dead tuples; 1000 tuples in sample, 1000 estimated total tuples\navg read rate: 1.953 MB/s, avg write rate: 0.000 MB/s\n...\nINFO: analyze of table \"postgres.public.test_partition_1\"\n\"test_partition_1\": scanned 5 of 5 pages, containing 999 live tuples\nand 0 dead tuples; 999 tuples in sample, 999 estimated total tuples\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n\nI've updated the patchset with those changes. 0004 introduces the\nStringInfo logbuf so we can output logs as a single log and during\nANALYZE VERBOSE while using the compact form.\n\nRegards,\nAnthonin", "msg_date": "Wed, 31 Jul 2024 09:03:24 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Wed, Jul 31, 2024 at 12:03 AM Anthonin Bonnefoy\n<[email protected]> wrote:\n>\n> On Tue, Jul 30, 2024 at 9:21 AM Anthonin Bonnefoy\n> <[email protected]> wrote:\n> > A possible change would be to pass an inh flag when an acquirefunc is\n> > called from acquire_inherited_sample_rows. The acquirefunc could then\n> > use an alternative log format to append to logbuf. 
This way, we could\n> > have a more compact format for partitioned tables.\n>\n> I've just tested this, the result isn't great as it creates an\n> inconsistent output\n>\n> INFO: analyze of table \"postgres.public.test_partition\"\n> \"test_partition_1\": scanned 5 of 5 pages, containing 999 live tuples\n> and 0 dead tuples; 999 rows in sample, 999 estimated total rows\n> \"test_partition_2\": scanned 5 of 5 pages, containing 1000 live tuples\n> and 0 dead tuples; 1000 rows in sample, 1000 estimated total rows\n> avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n> ...\n> INFO: analyze of table \"postgres.public.test_partition_1\"\n> pages: 5 of 5 scanned\n> tuples: 999 live tuples, 0 are dead; 999 tuples in sample, 999\n> estimated total tuples\n> avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n>\n> Maybe the best approach is to always use the compact form?\n>\n> INFO: analyze of table \"postgres.public.test_partition\"\n> \"test_partition_1\": scanned 5 of 5 pages, containing 999 live tuples\n> and 0 dead tuples; 999 tuples in sample, 999 estimated total tuples\n> \"test_partition_2\": scanned 5 of 5 pages, containing 1000 live tuples\n> and 0 dead tuples; 1000 tuples in sample, 1000 estimated total tuples\n> avg read rate: 1.953 MB/s, avg write rate: 0.000 MB/s\n> ...\n> INFO: analyze of table \"postgres.public.test_partition_1\"\n> \"test_partition_1\": scanned 5 of 5 pages, containing 999 live tuples\n> and 0 dead tuples; 999 tuples in sample, 999 estimated total tuples\n> avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n>\n> I've updated the patchset with those changes. 0004 introduces the\n> StringInfo logbuf so we can output logs as a single log and during\n> ANALYZE VERBOSE while using the compact form.\n>\n\nFair point. I'll consider a better output format.\n\nMeanwhile, I think we can push 0001 and 0002 patches since they are in\ngood shape. I've updated commit messages to them and slightly changed\n0002 patch to write \"finished analyzing of table \\\"%s.%s.%s\\\" instead\nof \"analyze of table \\\"%s.%s.%s\\\".\n\nAlso, regarding 0003 patch, what is the main reason why we want to add\nWAL usage to analyze reports? I think that analyze normally does not\nwrite WAL records much so I'm not sure it's going to provide a good\ninsight for users.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 31 Jul 2024 12:36:13 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Wed, Jul 31, 2024 at 9:36 PM Masahiko Sawada <[email protected]> wrote:\n> Meanwhile, I think we can push 0001 and 0002 patches since they are in\n> good shape. I've updated commit messages to them and slightly changed\n> 0002 patch to write \"finished analyzing of table \\\"%s.%s.%s\\\" instead\n> of \"analyze of table \\\"%s.%s.%s\\\".\n\nWouldn't it make sense to do the same for autoanalyze and write\n\"finished automatic analyze of table \\\"%s.%s.%s\\\"\\n\" instead of\n\"automatic analyze of table \\\"%s.%s.%s\\\"\\n\"?\n\n> Also, regarding 0003 patch, what is the main reason why we want to add\n> WAL usage to analyze reports? I think that analyze normally does not\n> write WAL records much so I'm not sure it's going to provide a good\n> insight for users.\n\nThere was no strong reason except for consistency with VACUUM VERBOSE\noutput. 
But as you said, it's not really providing valuable\ninformation so it's probably better to keep the noise down and drop\nit.\n\nRegards,\nAnthonin Bonnefoy\n\n\n", "msg_date": "Thu, 1 Aug 2024 08:27:29 +0200", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Wed, Jul 31, 2024 at 11:27 PM Anthonin Bonnefoy\n<[email protected]> wrote:\n>\n> On Wed, Jul 31, 2024 at 9:36 PM Masahiko Sawada <[email protected]> wrote:\n> > Meanwhile, I think we can push 0001 and 0002 patches since they are in\n> > good shape. I've updated commit messages to them and slightly changed\n> > 0002 patch to write \"finished analyzing of table \\\"%s.%s.%s\\\" instead\n> > of \"analyze of table \\\"%s.%s.%s\\\".\n>\n> Wouldn't it make sense to do the same for autoanalyze and write\n> \"finished automatic analyze of table \\\"%s.%s.%s\\\"\\n\" instead of\n> \"automatic analyze of table \\\"%s.%s.%s\\\"\\n\"?\n\nI think that the current style is consistent with autovacuum logs:\n\n2024-08-01 16:04:48.088 PDT [12302] LOG: automatic vacuum of table\n\"postgres.public.test\": index scans: 0\n pages: 0 removed, 443 remain, 443 scanned (100.00% of total)\n tuples: 0 removed, 100000 remain, 0 are dead but not yet removable\n removable cutoff: 751, which was 0 XIDs old when operation ended\n new relfrozenxid: 739, which is 1 XIDs ahead of previous value\n frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n index scan not needed: 0 pages from table (0.00% of total) had\n0 dead item identifiers removed\n avg read rate: 0.000 MB/s, avg write rate: 1.466 MB/s\n buffer usage: 905 hits, 0 reads, 4 dirtied\n system usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.02 s\n2024-08-01 16:04:48.125 PDT [12302] LOG: automatic analyze of table\n\"postgres.public.test\"\n avg read rate: 5.551 MB/s, avg write rate: 0.617 MB/s\n buffer usage: 512 hits, 27 reads, 3 dirtied\n system usage: CPU: user: 0.02 s, system: 0.00 s, elapsed: 0.03 s\n\nSince ANALYZE command writes the start log, I think it makes sense to\nwrite \"finished\" at the end of the operation:\n\n=# analyze verbose test;\nINFO: analyzing \"public.test\"\nINFO: \"test\": scanned 443 of 443 pages, containing 100000 live rows\nand 0 dead rows; 30000 rows in sample, 100000 estimated total rows\nINFO: finished analyzing table \"postgres.public.test\"\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 549 hits, 0 reads, 0 dirtied\nsystem usage: CPU: user: 0.02 s, system: 0.00 s, elapsed: 0.03 s\nANALYZE\n\n>\n> > Also, regarding 0003 patch, what is the main reason why we want to add\n> > WAL usage to analyze reports? I think that analyze normally does not\n> > write WAL records much so I'm not sure it's going to provide a good\n> > insight for users.\n>\n> There was no strong reason except for consistency with VACUUM VERBOSE\n> output. But as you said, it's not really providing valuable\n> information so it's probably better to keep the noise down and drop\n> it.\n\nOkay. I think writing WAL usage would not be very noisy and probably\ncould help some cases where (auto)analyze unexpectedly writes many WAL\nrecords (e.g., writing full page images much), and is consistent with\n(auto)vacuum logs as you mentioned. 
So let's go with this direction\nunless others think differently.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 1 Aug 2024 16:11:29 -0700", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" }, { "msg_contents": "On Fri, Aug 2, 2024 at 8:11 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Jul 31, 2024 at 11:27 PM Anthonin Bonnefoy\n> <[email protected]> wrote:\n> >\n> > On Wed, Jul 31, 2024 at 9:36 PM Masahiko Sawada <[email protected]> wrote:\n> > > Meanwhile, I think we can push 0001 and 0002 patches since they are in\n> > > good shape. I've updated commit messages to them and slightly changed\n> > > 0002 patch to write \"finished analyzing of table \\\"%s.%s.%s\\\" instead\n> > > of \"analyze of table \\\"%s.%s.%s\\\".\n> >\n> > Wouldn't it make sense to do the same for autoanalyze and write\n> > \"finished automatic analyze of table \\\"%s.%s.%s\\\"\\n\" instead of\n> > \"automatic analyze of table \\\"%s.%s.%s\\\"\\n\"?\n>\n> I think that the current style is consistent with autovacuum logs:\n>\n> 2024-08-01 16:04:48.088 PDT [12302] LOG: automatic vacuum of table\n> \"postgres.public.test\": index scans: 0\n> pages: 0 removed, 443 remain, 443 scanned (100.00% of total)\n> tuples: 0 removed, 100000 remain, 0 are dead but not yet removable\n> removable cutoff: 751, which was 0 XIDs old when operation ended\n> new relfrozenxid: 739, which is 1 XIDs ahead of previous value\n> frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n> index scan not needed: 0 pages from table (0.00% of total) had\n> 0 dead item identifiers removed\n> avg read rate: 0.000 MB/s, avg write rate: 1.466 MB/s\n> buffer usage: 905 hits, 0 reads, 4 dirtied\n> system usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.02 s\n> 2024-08-01 16:04:48.125 PDT [12302] LOG: automatic analyze of table\n> \"postgres.public.test\"\n> avg read rate: 5.551 MB/s, avg write rate: 0.617 MB/s\n> buffer usage: 512 hits, 27 reads, 3 dirtied\n> system usage: CPU: user: 0.02 s, system: 0.00 s, elapsed: 0.03 s\n>\n> Since ANALYZE command writes the start log, I think it makes sense to\n> write \"finished\" at the end of the operation:\n>\n> =# analyze verbose test;\n> INFO: analyzing \"public.test\"\n> INFO: \"test\": scanned 443 of 443 pages, containing 100000 live rows\n> and 0 dead rows; 30000 rows in sample, 100000 estimated total rows\n> INFO: finished analyzing table \"postgres.public.test\"\n> avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n> buffer usage: 549 hits, 0 reads, 0 dirtied\n> system usage: CPU: user: 0.02 s, system: 0.00 s, elapsed: 0.03 s\n> ANALYZE\n>\n\nCommitted 0001 and 0002 patches.\n\n> >\n> > > Also, regarding 0003 patch, what is the main reason why we want to add\n> > > WAL usage to analyze reports? I think that analyze normally does not\n> > > write WAL records much so I'm not sure it's going to provide a good\n> > > insight for users.\n> >\n> > There was no strong reason except for consistency with VACUUM VERBOSE\n> > output. But as you said, it's not really providing valuable\n> > information so it's probably better to keep the noise down and drop\n> > it.\n>\n> Okay. I think writing WAL usage would not be very noisy and probably\n> could help some cases where (auto)analyze unexpectedly writes many WAL\n> records (e.g., writing full page images much), and is consistent with\n> (auto)vacuum logs as you mentioned. 
So let's go with this direction\n> unless others think differently.\n\nI've updated the patch to add WAL usage to analyze. I'm going to push\nit this week, barring any objections.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 14 Aug 2024 14:36:57 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use pgBufferUsage for block reporting in analyze" } ]
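To make the counter handling discussed in this thread easier to follow, here is a minimal sketch of the snapshot-and-diff idiom the quoted hunks rely on: save pgBufferUsage before the operation, then let BufferUsageAccumDiff() keep only the delta. This follows the patch fragments quoted above rather than the committed do_analyze_rel() code; the function name, the elog() reporting, and the placeholder for the sampling work are invented for illustration.

```c
#include "postgres.h"

#include "executor/instrument.h"

/* Illustrative sketch only: isolate the buffer usage of one operation. */
static void
report_buffer_usage_sketch(void)
{
	BufferUsage startbufferusage;
	BufferUsage bufferusage;
	int64		total_blks_hit;
	int64		total_blks_read;
	int64		total_blks_dirtied;

	/* save the global counters before the instrumented work begins */
	startbufferusage = pgBufferUsage;

	/* ... the ANALYZE sampling work would run here ... */

	/* keep only what happened since the snapshot */
	memset(&bufferusage, 0, sizeof(BufferUsage));
	BufferUsageAccumDiff(&bufferusage, &pgBufferUsage, &startbufferusage);

	/* shared and local blocks are summed for the "buffer usage" log line */
	total_blks_hit = bufferusage.shared_blks_hit + bufferusage.local_blks_hit;
	total_blks_read = bufferusage.shared_blks_read + bufferusage.local_blks_read;
	total_blks_dirtied = bufferusage.shared_blks_dirtied + bufferusage.local_blks_dirtied;

	elog(INFO, "buffer usage: %lld hits, %lld reads, %lld dirtied",
		 (long long) total_blks_hit,
		 (long long) total_blks_read,
		 (long long) total_blks_dirtied);
}
```

The same pattern extends to WAL usage via pgWalUsage and WalUsageAccumDiff(), which is what the follow-up 0003 patch adds to the report.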
[ { "msg_contents": "Hello hackers,\n\nI've investigated a recent buildfarm failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dodo&dt=2024-05-02%2006%3A40%3A36\n\nwhere the test failed due to a CRC error:\n2024-05-02 17:08:18.401 ACST [3406:7] LOG:  incorrect resource manager data checksum in record at 0/F14D7A60\n\n(Chipmunk produced similar errors as well:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2022-08-25%2019%3A40%3A11\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2024-03-22%2003%3A20%3A39\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2023-08-19%2006%3A43%3A20\n)\n\nand discovered that XLogRecordAssemble() calculates CRC over a buffer,\nthat might be modified by another process.\n\nWith the attached patch applied, the following test run:\necho \"\nautovacuum_naptime = 1\nautovacuum_vacuum_threshold = 1\n\nwal_consistency_checking = all\n\" >/tmp/temp.config\n\nfor ((i=1;i<=100;i++)); do echo \"iteration $i\"; TEMP_CONFIG=/tmp/temp.config TESTS=\"test_setup hash_index\" make \ncheck-tests -s || break; done\n\nfails for me on iterations 7, 10, 17:\nok 1         - test_setup                               2557 ms\nnot ok 2     - hash_index                              24719 ms\n# (test process exited with exit code 2)\n\npostmaster.log contains:\n2024-05-10 12:46:44.320 UTC checkpointer[1881151] LOG:  checkpoint starting: immediate force wait\n2024-05-10 12:46:44.365 UTC checkpointer[1881151] LOG:  checkpoint complete: wrote 41 buffers (0.3%); 0 WAL file(s) \nadded, 0 removed, 26 recycled; write=0.001 s, sync=0.001 s, total=0.046 s; sync files=0, longest=0.000 s, average=0.000 \ns; distance=439134 kB, estimate=527137 kB; lsn=0/3CE131F0, redo lsn=0/3CE13198\nTRAP: failed Assert(\"memcmp(block1_ptr, block1_copy, block1_len) == 0\"), File: \"xloginsert.c\", Line: 949, PID: 1881271\nExceptionalCondition at assert.c:52:13\nXLogRecordAssemble at xloginsert.c:953:1\nXLogInsert at xloginsert.c:520:9\nhashbucketcleanup at hash.c:844:14\nhashbulkdelete at hash.c:558:3\nindex_bulk_delete at indexam.c:760:1\nvac_bulkdel_one_index at vacuum.c:2498:10\nlazy_vacuum_one_index at vacuumlazy.c:2443:10\nlazy_vacuum_all_indexes at vacuumlazy.c:2026:26\nlazy_vacuum at vacuumlazy.c:1944:10\nlazy_scan_heap at vacuumlazy.c:1050:3\nheap_vacuum_rel at vacuumlazy.c:503:2\nvacuum_rel at vacuum.c:2214:2\nvacuum at vacuum.c:622:8\nautovacuum_do_vac_analyze at autovacuum.c:3102:2\ndo_autovacuum at autovacuum.c:2425:23\nAutoVacWorkerMain at autovacuum.c:1569:3\npgarch_die at pgarch.c:846:1\nStartChildProcess at postmaster.c:3929:5\nStartAutovacuumWorker at postmaster.c:3997:12\nprocess_pm_pmsignal at postmaster.c:3809:3\nServerLoop at postmaster.c:1667:5\nBackgroundWorkerInitializeConnection at postmaster.c:4156:1\nmain at main.c:184:3\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7fc71a8d7e40]\npostgres: autovacuum worker regression(_start+0x25)[0x556a8631a5e5]\n2024-05-10 12:46:45.038 UTC checkpointer[1881151] LOG:  checkpoint starting: immediate force wait\n2024-05-10 12:46:45.965 UTC autovacuum worker[1881275] LOG: automatic analyze of table \"regression.pg_catalog.pg_attribute\"\n         avg read rate: 0.000 MB/s, avg write rate: 5.409 MB/s\n         buffer usage: 1094 hits, 0 misses, 27 dirtied\n         system usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.03 s\n2024-05-10 12:46:46.892 UTC postmaster[1881150] LOG:  server process (PID 1881271) was terminated by signal 6: Aborted\n2024-05-10 12:46:46.892 
UTC postmaster[1881150] DETAIL:  Failed process was running: autovacuum: VACUUM ANALYZE \npublic.hash_cleanup_heap\n\n(This can be reproduced with 027_stream_regress, of course, but it would\ntake more time.)\n\nBest regards,\nAlexander", "msg_date": "Fri, 10 May 2024 16:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "WAL record CRC calculated incorrectly because of underlying buffer\n modification" }, { "msg_contents": "Hi,\n\nOn 2024-05-10 16:00:01 +0300, Alexander Lakhin wrote:\n> and discovered that XLogRecordAssemble() calculates CRC over a buffer,\n> that might be modified by another process.\n\nIf, with \"might\", you mean that it's legitimate for that buffer to be\nmodified, I don't think so. The bug is that something is modifying the buffer\ndespite it being exclusively locked.\n\nI.e. what we likely have here is a bug somewhere in the hash index code.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 May 2024 08:57:42 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "On Sat, May 11, 2024 at 3:57 AM Andres Freund <[email protected]> wrote:\n> On 2024-05-10 16:00:01 +0300, Alexander Lakhin wrote:\n> > and discovered that XLogRecordAssemble() calculates CRC over a buffer,\n> > that might be modified by another process.\n>\n> If, with \"might\", you mean that it's legitimate for that buffer to be\n> modified, I don't think so. The bug is that something is modifying the buffer\n> despite it being exclusively locked.\n>\n> I.e. what we likely have here is a bug somewhere in the hash index code.\n\nI don't have a good grip on the higher level locking protocols of\nhash.c, but one microscopic thing jumps out:\n\n /*\n * bucket buffer was not changed, but still needs to be\n * registered to ensure that we can acquire a cleanup lock on\n * it during replay.\n */\n if (!xlrec.is_primary_bucket_page)\n {\n uint8 flags = REGBUF_STANDARD |\nREGBUF_NO_IMAGE | REGBUF_NO_CHANGE;\n\n XLogRegisterBuffer(0, bucket_buf, flags);\n }\n\nThat registers a buffer that is pinned but not content-locked, and we\ntell xloginsert.c not to copy its image into the WAL, but it does it\nanyway because:\n\n /*\n * If needs_backup is true or WAL checking is enabled for current\n * resource manager, log a full-page write for the current block.\n */\n include_image = needs_backup || (info & XLR_CHECK_CONSISTENCY) != 0;\n\nSo I guess it copies the image on dodo, which has:\n\n 'PG_TEST_EXTRA' => 'ssl ldap\nkerberos wal_consistency_checking libpq_encryption xid_wraparound'\n\nPerhaps a no-image, no-change registered buffer should not be\nincluding an image, even for XLR_CHECK_CONSISTENCY? 
It's actually\nuseless for consistency checking too I guess, this issue aside,\nbecause it doesn't change anything so there is nothing to check.\n\n\n", "msg_date": "Sat, 11 May 2024 15:26:26 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "Hello Thomas and Andres,\n\nThank you for looking at this!\n\n11.05.2024 06:26, Thomas Munro wrote:\n> On Sat, May 11, 2024 at 3:57 AM Andres Freund <[email protected]> wrote:\n>> On 2024-05-10 16:00:01 +0300, Alexander Lakhin wrote:\n>>> and discovered that XLogRecordAssemble() calculates CRC over a buffer,\n>>> that might be modified by another process.\n>> If, with \"might\", you mean that it's legitimate for that buffer to be\n>> modified, I don't think so. The bug is that something is modifying the buffer\n>> despite it being exclusively locked.\n>>\n>> I.e. what we likely have here is a bug somewhere in the hash index code.\n> I don't have a good grip on the higher level locking protocols of\n> hash.c, but one microscopic thing jumps out:\n>\n> /*\n> * bucket buffer was not changed, but still needs to be\n> * registered to ensure that we can acquire a cleanup lock on\n> * it during replay.\n> */\n> if (!xlrec.is_primary_bucket_page)\n> {\n> uint8 flags = REGBUF_STANDARD |\n> REGBUF_NO_IMAGE | REGBUF_NO_CHANGE;\n>\n> XLogRegisterBuffer(0, bucket_buf, flags);\n> }\n>\n> That registers a buffer that is pinned but not content-locked, and we\n> tell xloginsert.c not to copy its image into the WAL, but it does it\n> anyway because:\n>\n> /*\n> * If needs_backup is true or WAL checking is enabled for current\n> * resource manager, log a full-page write for the current block.\n> */\n> include_image = needs_backup || (info & XLR_CHECK_CONSISTENCY) != 0;\n>\n> So I guess it copies the image on dodo, which has:\n>\n> 'PG_TEST_EXTRA' => 'ssl ldap\n> kerberos wal_consistency_checking libpq_encryption xid_wraparound'\n>\n> Perhaps a no-image, no-change registered buffer should not be\n> including an image, even for XLR_CHECK_CONSISTENCY? It's actually\n> useless for consistency checking too I guess, this issue aside,\n> because it doesn't change anything so there is nothing to check.\n\nYes, I think something wrong is here. 
I've reduced the reproducer to:\ncat << 'EOF' | psql\nCREATE TABLE hash_cleanup_heap(keycol INT);\nCREATE INDEX hash_cleanup_index on hash_cleanup_heap USING HASH (keycol);\n\nBEGIN;\nINSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 500) as i;\nROLLBACK;\nEOF\n\ncat << 'EOF' | psql &\nINSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 500) as i;\n\nDROP TABLE hash_cleanup_heap;\nEOF\n\ncat << 'EOF' | psql &\nSELECT pg_sleep(random() / 20);\nVACUUM hash_cleanup_heap;\nEOF\nwait\ngrep 'TRAP:' server.log\n\n(\"wal_consistency_checking = all\" and the xloginsert patch are still required)\n\nand with additional logging I see:\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96dd|LOG: !!!hashbucketcleanup| scan page buf: 1832\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96dd|CONTEXT: while vacuuming index \"hash_cleanup_index\" of \nrelation \"public.hash_cleanup_heap\"\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96dd|STATEMENT:  VACUUM hash_cleanup_heap;\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96de|LOG: !!!_hash_doinsert| _hash_getbucketbuf_from_hashkey: 1822\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96de|STATEMENT:  INSERT INTO hash_cleanup_heap SELECT 1 FROM \ngenerate_series(1, 500) as i;\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96dd|LOG: !!!hashbucketcleanup| xlrec.is_primary_bucket_page: 0, \nbuf: 1832, bucket_buf: 1822\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96dd|CONTEXT: while vacuuming index \"hash_cleanup_index\" of \nrelation \"public.hash_cleanup_heap\"\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96dd|STATEMENT:  VACUUM hash_cleanup_heap;\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96de|LOG: !!!_hash_doinsert| _hash_relbuf(rel, 1822)\n2024-05-11 03:45:23.424 UTC|law|regression|663ee9d3.1f96de|STATEMENT:  INSERT INTO hash_cleanup_heap SELECT 1 FROM \ngenerate_series(1, 500) as i;\nTRAP: failed Assert(\"memcmp(block1_ptr, block1_copy, block1_len) == 0\"), File: \"xloginsert.c\", Line: 949, PID: 2070237\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 11 May 2024 07:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "On Sat, May 11, 2024 at 4:00 PM Alexander Lakhin <[email protected]> wrote:\n> 11.05.2024 06:26, Thomas Munro wrote:\n> > Perhaps a no-image, no-change registered buffer should not be\n> > including an image, even for XLR_CHECK_CONSISTENCY? It's actually\n> > useless for consistency checking too I guess, this issue aside,\n> > because it doesn't change anything so there is nothing to check.\n>\n> Yes, I think something wrong is here. I've reduced the reproducer to:\n\nDoes it reproduce if you do this?\n\n- include_image = needs_backup || (info &\nXLR_CHECK_CONSISTENCY) != 0;\n+ include_image = needs_backup ||\n+ ((info & XLR_CHECK_CONSISTENCY) != 0 &&\n+ (regbuf->flags & REGBUF_NO_CHANGE) == 0);\n\nUnfortunately the back branches don't have that new flag from 00d7fb5e\nso, even if this is the right direction (not sure, I don't understand\nthis clean registered buffer trick) then ... but wait, why are there\nare no failures like this in the back branches (yet at least)? Does\nyour reproducer work for 16? I wonder if something relevant changed\nrecently, like f56a9def. 
CC'ing Michael and Amit K for info.\n\n\n", "msg_date": "Sat, 11 May 2024 16:25:44 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "11.05.2024 07:25, Thomas Munro wrote:\n> On Sat, May 11, 2024 at 4:00 PM Alexander Lakhin <[email protected]> wrote:\n>> 11.05.2024 06:26, Thomas Munro wrote:\n>>> Perhaps a no-image, no-change registered buffer should not be\n>>> including an image, even for XLR_CHECK_CONSISTENCY? It's actually\n>>> useless for consistency checking too I guess, this issue aside,\n>>> because it doesn't change anything so there is nothing to check.\n>> Yes, I think something wrong is here. I've reduced the reproducer to:\n> Does it reproduce if you do this?\n>\n> - include_image = needs_backup || (info &\n> XLR_CHECK_CONSISTENCY) != 0;\n> + include_image = needs_backup ||\n> + ((info & XLR_CHECK_CONSISTENCY) != 0 &&\n> + (regbuf->flags & REGBUF_NO_CHANGE) == 0);\n\nNo, it doesn't (at least with the latter, more targeted reproducer).\n\n> Unfortunately the back branches don't have that new flag from 00d7fb5e\n> so, even if this is the right direction (not sure, I don't understand\n> this clean registered buffer trick) then ... but wait, why are there\n> are no failures like this in the back branches (yet at least)? Does\n> your reproducer work for 16? I wonder if something relevant changed\n> recently, like f56a9def. CC'ing Michael and Amit K for info.\n\nMaybe it's hard to hit (autovacuum needs to process the index page in a\nnarrow time frame), but locally I could reproduce the issue even on\nac27c74de(~1 too) from 2018-09-06 (I tried several last commits touching\nhash indexes, didn't dig deeper).\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 11 May 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "On Sat, May 11, 2024 at 5:00 PM Alexander Lakhin <[email protected]> wrote:\n> 11.05.2024 07:25, Thomas Munro wrote:\n> > On Sat, May 11, 2024 at 4:00 PM Alexander Lakhin <[email protected]> wrote:\n> >> 11.05.2024 06:26, Thomas Munro wrote:\n> >>> Perhaps a no-image, no-change registered buffer should not be\n> >>> including an image, even for XLR_CHECK_CONSISTENCY? It's actually\n> >>> useless for consistency checking too I guess, this issue aside,\n> >>> because it doesn't change anything so there is nothing to check.\n\n> >> Yes, I think something wrong is here. I've reduced the reproducer to:\n\n> > Does it reproduce if you do this?\n> >\n> > - include_image = needs_backup || (info &\n> > XLR_CHECK_CONSISTENCY) != 0;\n> > + include_image = needs_backup ||\n> > + ((info & XLR_CHECK_CONSISTENCY) != 0 &&\n> > + (regbuf->flags & REGBUF_NO_CHANGE) == 0);\n>\n> No, it doesn't (at least with the latter, more targeted reproducer).\n\nOK so that seems like a candidate fix, but ...\n\n> > Unfortunately the back branches don't have that new flag from 00d7fb5e\n> > so, even if this is the right direction (not sure, I don't understand\n> > this clean registered buffer trick) then ... but wait, why are there\n> > are no failures like this in the back branches (yet at least)? Does\n> > your reproducer work for 16? I wonder if something relevant changed\n> > recently, like f56a9def. 
CC'ing Michael and Amit K for info.\n>\n> Maybe it's hard to hit (autovacuum needs to process the index page in a\n> narrow time frame), but locally I could reproduce the issue even on\n> ac27c74de(~1 too) from 2018-09-06 (I tried several last commits touching\n> hash indexes, didn't dig deeper).\n\n... we'd need to figure out how to fix this in the back-branches too.\nOne idea would be to back-patch REGBUF_NO_CHANGE, and another might be\nto deduce that case from other variables. Let me CC a couple more\npeople from this thread, which most recently hacked on this stuff, to\nsee if they have insights:\n\nhttps://www.postgresql.org/message-id/flat/d2c31606e6bb9b83a02ed4835d65191b38d4ba12.camel%40j-davis.com\n\n\n", "msg_date": "Mon, 13 May 2024 11:15:03 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "On Mon, 2024-05-13 at 11:15 +1200, Thomas Munro wrote:\n> > > > > Perhaps a no-image, no-change registered buffer should not be\n> > > > > including an image, even for XLR_CHECK_CONSISTENCY?  It's\n> > > > > actually\n> > > > > useless for consistency checking too I guess, this issue\n> > > > > aside,\n> > > > > because it doesn't change anything so there is nothing to\n> > > > > check.\n\nI'm not convinced by that reasoning. Can't it check that nothing has\nchanged?\n\n> > > \n> > > Does it reproduce if you do this?\n> > > \n> > > -               include_image = needs_backup || (info &\n> > > XLR_CHECK_CONSISTENCY) != 0;\n> > > +               include_image = needs_backup ||\n> > > +                       ((info & XLR_CHECK_CONSISTENCY) != 0 &&\n> > > +                        (regbuf->flags & REGBUF_NO_CHANGE) ==\n> > > 0);\n> > \n> > No, it doesn't (at least with the latter, more targeted\n> > reproducer).\n> \n> OK so that seems like a candidate fix, but ...\n\n...\n\n> ... we'd need to figure out how to fix this in the back-branches too.\n> One idea would be to back-patch REGBUF_NO_CHANGE, and another might\n> be\n> to deduce that case from other variables.  Let me CC a couple more\n> people from this thread, which most recently hacked on this stuff, to\n> see if they have insights:\n\n\nStarting from the beginning, XLogRecordAssemble() calculates the CRC of\nthe record (technically just the non-header portions, but that's not\nimportant here), including any backup blocks. Later,\nCopyXLogRecordToWAL() copies that data into the actual xlog buffers. If\nthe data changes between those two steps, the CRC will be bad.\n\nFor most callers, the contents are exclusive-locked, so there's no\nproblem. For checksums, the data is copied out of shared memory into a\nstack variable first, so no concurrent activity can change it. For hash\nindexes, it tries to protect itself by passing REGBUF_NO_IMAGE.\n\nThere are two problems:\n\n1. That implies another invariant that we aren't checking for: that\nREGBUF_NO_CHANGE must be accompanied by REGBUF_NO_IMAGE. That doesn't\nseem to be true for all callers, see XLogRegisterBuffer(1, wbuf,\nwbuf_flags) in _hash_freeovflpage().\n\n2. As you point out, REGBUF_NO_IMAGE still allows an image to be taken\nif XLR_CHECK_CONSISTENCY is set, so we need to figure out what to do\nthere.\n\nCan we take a step back and think harder about what hash indexes are\ndoing and if there's a better way? 
Maybe hash indexes need to take a\ncopy of the page, like in XLogSaveBufferForHint()?\n\nI'd prefer that we find a way to get rid of REGBUF_NO_CHANGE and make\nall callers follow the rules than to find new special cases that depend\non REGBUF_NO_CHANGE. See these messages here:\n\nhttps://www.postgresql.org/message-id/b1f2d0f230d60fa8df33bb0b2af3236fde9d750d.camel%40j-davis.com\n\nhttps://www.postgresql.org/message-id/CA%2BTgmoY%2BdagCyrMKau7UQeQU6w4LuVEu%2ByjsmJBoXKAo6XbUUA%40mail.gmail.com\n\nIn other words, we added REGBUF_NO_CHANGE for the call sites (only hash\nindexes) that don't follow the rules and where it wasn't easy to make\nthem follow the rules. Now that we know of a concrete problem with the\ndesign, there's more upside to fixing it properly.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 16 May 2024 15:54:52 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "On Thu, May 16, 2024 at 03:54:52PM -0700, Jeff Davis wrote:\n> On Mon, 2024-05-13 at 11:15 +1200, Thomas Munro wrote:\n>>>>>> Perhaps a no-image, no-change registered buffer should not be\n>>>>>> including an image, even for XLR_CHECK_CONSISTENCY?  It's\n>>>>>> actually\n>>>>>> useless for consistency checking too I guess, this issue\n>>>>>> aside,\n>>>>>> because it doesn't change anything so there is nothing to\n>>>>>> check.\n> \n> I'm not convinced by that reasoning. Can't it check that nothing has\n> changed?\n\nThat's something I've done four weeks ago in the hash replay code\npath, and having the image with XLR_CHECK_CONSISTENCY even if\nREGBUF_NO_CHANGE was necessary because replay was setting up a LSN on\na REGBUF_NO_CHANGE page it should not have touched.\n\n> I'd prefer that we find a way to get rid of REGBUF_NO_CHANGE and make\n> all callers follow the rules than to find new special cases that depend\n> on REGBUF_NO_CHANGE. See these messages here:\n> \n> https://www.postgresql.org/message-id/b1f2d0f230d60fa8df33bb0b2af3236fde9d750d.camel%40j-davis.com\n> \n> https://www.postgresql.org/message-id/CA%2BTgmoY%2BdagCyrMKau7UQeQU6w4LuVEu%2ByjsmJBoXKAo6XbUUA%40mail.gmail.com\n> \n> In other words, we added REGBUF_NO_CHANGE for the call sites (only hash\n> indexes) that don't follow the rules and where it wasn't easy to make\n> them follow the rules. Now that we know of a concrete problem with the\n> design, there's more upside to fixing it properly.\n\nYeah, agreed that getting rid of REGBUF_NO_CHANGE would be nice in the\nfinal picture. It still strikes me as a weird concept that WAL replay\nfor hash indexes logs full pages just to be able to lock them at\nreplay based on what's in the records. :/\n--\nMichael", "msg_date": "Fri, 17 May 2024 10:12:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "On Fri, 2024-05-17 at 10:12 +0900, Michael Paquier wrote:\n> That's something I've done four weeks ago in the hash replay code\n> path, and having the image with XLR_CHECK_CONSISTENCY even if\n> REGBUF_NO_CHANGE was necessary because replay was setting up a LSN on\n> a REGBUF_NO_CHANGE page it should not have touched.\n\nThen the candidate fix to selectively break XLR_CHECK_CONSISTENCY is\nnot acceptable.\n\n> \n> Yeah, agreed that getting rid of REGBUF_NO_CHANGE would be nice in\n> the\n> final picture.  
It still strikes me as a weird concept that WAL\n> replay\n> for hash indexes logs full pages just to be able to lock them at\n> replay based on what's in the records.  :/\n\nI'm still not entirely clear on why hash indexes can't just follow the\nrules and exclusive lock the buffer and dirty it. Presumably\nperformance would suffer, but I asked that question previously and\ndidn't get an answer:\n\nhttps://www.postgresql.org/message-id/CA%2BTgmoY%2BdagCyrMKau7UQeQU6w4LuVEu%2ByjsmJBoXKAo6XbUUA%40mail.gmail.com\n\nAnd if that does affect performance, what about following the same\nprotocol as XLogSaveBufferForHint()?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 17 May 2024 07:56:23 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" }, { "msg_contents": "On Fri, May 17, 2024 at 10:56 AM Jeff Davis <[email protected]> wrote:\n> I'm still not entirely clear on why hash indexes can't just follow the\n> rules and exclusive lock the buffer and dirty it. Presumably\n> performance would suffer, but I asked that question previously and\n> didn't get an answer:\n>\n> https://www.postgresql.org/message-id/CA%2BTgmoY%2BdagCyrMKau7UQeQU6w4LuVEu%2ByjsmJBoXKAo6XbUUA%40mail.gmail.com\n\nIn my defense, the last time I worked on hash indexes was 7 years ago.\nIf this question had come up within a year or two of that work, I\nprobably would have both (a) had a much clearer idea of what the\nanswer was and (b) felt obliged to drop everything and go research it\nif I didn't. But at this point, I feel like it's fair for me to tell\nyou what I know and leave it to you to do further research if you feel\nlike that's warranted. I know that we're each responsible for what we\ncommit, but I don't really think that should extend to having to\nprioritize answering a hypothetical question (\"what would happen if X\nthing worked like Y instead of the way it does?\") about an area I\nhaven't touched in long enough for every release that doesn't contain\nthose commits to be out of support. If you feel otherwise, let's have\nthat argument, but I have a feeling that it may be more that you're\nhoping I have some kind of oracular powers which, in reality, I lack.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 11:38:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL record CRC calculated incorrectly because of underlying\n buffer modification" } ]
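The reproducer patch mentioned at the top of this thread is attached rather than shown inline, so a rough sketch of the technique behind the quoted failed Assert may help: take a private copy of a registered block early during record assembly and re-check it later (for instance around the time the CRC is computed), so that a concurrent modification of a supposedly locked buffer trips an assertion instead of silently producing a bad checksum. In the real patch this lives inside XLogRecordAssemble(); the helper names below are invented for the example.

```c
#include "postgres.h"

/* Copy of the first registered block, taken when the record is assembled. */
static char		block1_copy[BLCKSZ];
static uint32	block1_len;

/* Remember the block contents as XLogRecordAssemble() saw them. */
static void
remember_block1(const char *block1_ptr, uint32 len)
{
	block1_len = len;
	memcpy(block1_copy, block1_ptr, len);
}

/* Fires if another backend modified the page underneath us. */
static void
assert_block1_unchanged(const char *block1_ptr)
{
	Assert(memcmp(block1_ptr, block1_copy, block1_len) == 0);
}
```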
[ { "msg_contents": "SNI was brought up the discussions around the ALPN work, and I have had asks\nfor it off-list, so I decided to dust off an old patch I started around the\ntime we got client-side SNI support but never finished (until now). Since\nthere is discussion and thinking around how we handle SSL right now I wanted to\nshare this early even though it will be parked in the July CF for now. There\nare a few usecases for serverside SNI, allowing for completely disjoint CAs for\ndifferent hostnames is one that has come up. Using strict SNI mode (elaborated\non below) as a cross-host attack mitigation was mentioned in [0].\n\nThe attached patch adds serverside SNI support to libpq, it is still a bit\nrough around the edges but I'm sharing it early to make sure I'm not designing\nit in a direction that the community doesn't like. A new config file\n$datadir/pg_hosts.conf is used for configuring which certicate and key should\nbe used for which hostname. The file is parsed in the same way as pg_ident\net.al so it allows for the usual include type statements we support. A new\nGUC, ssl_snimode, is added which controls how the hostname TLS extension is\nhandled. The possible values are off, default and strict:\n\n - off: pg_hosts.conf is not parsed and the hostname TLS extension is\n not inspected at all. The normal SSL GUCs for certificates and keys\n are used.\n - default: pg_hosts.conf is loaded as well as the normal GUCs. If no\n match for the TLS extension hostname is found in pg_hosts the cert\n and key from the postgresql.conf GUCs is used as the default (used\n as a wildcard host).\n - strict: only pg_hosts.conf is loaded and the TLS extension hostname\n MUST be passed and MUST have a match in the configuration, else the\n connection is refused.\n\nAs of now the patch use default as the initial value for the GUC.\n\nThe way multiple certificates are handled is that libpq creates one SSL_CTX for\neach at startup, and switch to the appropriate one when the connection is\ninspected. Configuration handling is done in secure-common to not tie it to a\nspecific TLS backend (should we ever support more), but the validation of the\nconfig values is left for the TLS backend.\n\nThere are a few known open items with this patch:\n\n* There are two OpenSSL callbacks which can be used to inspect the hostname TLS\nextension: SSL_CTX_set_tlsext_servername_callback and\nSSL_CTX_set_client_hello_cb. The documentation for the latter says you\nshouldn't use the former, and the docs for the former says you need it even if\nyou use the latter. For now I'm using SSL_CTX_set_tlsext_servername_callback\nmainly because the OpenSSL tools themselves use that for SNI.\n\n* The documentation is not polished at all and will require a more work to make\nit passable I think. 
There are also lot's more testing that can be done, so\nfar it's pretty basic.\n\n* I've so far only tested with OpenSSL and haven't yet verified how LibreSSL\nhandles this.\n\n--\nDaniel Gustafsson\n\n[0] https://www.postgresql.org/message-id/e782e9f4-a0cd-49f5-800b-5e32a1b29183%40eisentraut.org", "msg_date": "Fri, 10 May 2024 16:22:45 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Serverside SNI support in libpq" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThis is an interesting feature on PostgreSQL server side where it can swap the\r\ncertificate settings based on the incoming hostnames in SNI field in client\r\nhello message.\r\n\r\nI think this patch resonate with a patch I shared awhile ago\r\n( https://commitfest.postgresql.org/48/4924/ ) that adds multiple certificate\r\nsupport on the libpq client side while this patch adds multiple certificate\r\nsupport on the server side. My patch allows user to supply multiple certs, keys,\r\nsslpasswords in comma separated format and the libpq client will pick one that\r\nmatches the CA issuer names sent by the server. In relation with your patch,\r\nthis CA issuer name would match the CA certificate configured in pg_hosts.cfg.\r\n\r\nI had a look at the patch and here's my comments:\r\n\r\n+ <para>\r\n+ <productname>PostgreSQL</productname> can be configured for\r\n+ <acronym>SNI</acronym> using the <filename>pg_hosts.conf</filename>\r\n+ configuration file. <productname>PostgreSQL</productname> inspects the TLS\r\n+ hostname extension in the SSL connection handshake, and selects the right\r\n+ SSL certificate, key and CA certificate to use for the connection.\r\n+ </para>\r\n\r\npg_hosts should also have sslpassword_command just like in the postgresql.conf in\r\ncase the sslkey for a particular host is encrypted with a different password.\r\n\r\n+\t/*\r\n+\t * Install SNI TLS extension callback in case the server is configured to\r\n+\t * validate hostnames.\r\n+\t */\r\n+\tif (ssl_snimode != SSL_SNIMODE_OFF)\r\n+\t\tSSL_CTX_set_tlsext_servername_callback(context, sni_servername_cb);\r\n\r\nIf libpq client does not provide SNI, this callback will not be called, so there\r\nis not a chance to check for a hostname match from pg_hosts, swap the TLS CONTEXT,\r\nor possibly reject the connection even in strict mode. The TLS handshake in such\r\ncase shall proceed and server will use the certificate specified in\r\npostgresql.conf (if these are loaded) to complete the handshake with the client.\r\nThere is a comment in the patch that reads:\r\n\r\n> - strict: only pg_hosts.conf is loaded and the TLS extension hostname\r\n> MUST be passed and MUST have a match in the configuration, else the\r\n> connection is refused.\r\n\r\nI am not sure if it implies that if ssl_snimode is strict, then the normal ssl_cert,\r\nssl_key and ca_cert…etc settings in postgresql.conf are ignored?\r\n\r\nthank you\r\n\r\nCary Huang\r\n-------------\r\nHighGo Software Inc. 
(Canada)\r\[email protected]\r\nwww.highgo.ca", "msg_date": "Fri, 24 May 2024 19:54:49 +0000", "msg_from": "Cary Huang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serverside SNI support in libpq" }, { "msg_contents": "On Fri, May 10, 2024 at 7:23 AM Daniel Gustafsson <[email protected]> wrote:\n> The way multiple certificates are handled is that libpq creates one SSL_CTX for\n> each at startup, and switch to the appropriate one when the connection is\n> inspected.\n\nI fell in a rabbit hole while testing this patch, so this review isn't\ncomplete, but I don't want to delay it any more. I see a few\npossibly-related problems with the handling of SSL_context.\n\nThe first is that reloading the server configuration doesn't reset the\ncontexts list, so the server starts behaving in really strange ways\nthe longer you test. That's an easy enough fix, but things got weirder\nwhen I did. Part of that weirdness is that SSL_context gets set to the\nlast initialized context, so fallback doesn't always behave in a\ndeterministic fashion. But we do have to set it to something, to\ncreate the SSL object itself...\n\nI tried patching all that, but I continue to see nondeterministic\nbehavior, including the wrong certificate chain occasionally being\nserved, and the servername callback being called twice for each\nconnection (?!).\n\nSince I can't reproduce the weirdest bits under a debugger yet, I\ndon't really know what's happening. Maybe my patches are buggy. Or\nmaybe we're running into some chicken-and-egg madness? The order of\noperations looks like this:\n\n1. Create a list of contexts, selecting one as an arbitrary default\n2. Create an SSL object from our default context\n3. During the servername_callback, reparent that SSL object (which has\nan active connection underway) to the actual context we want to use\n4. Complete the connection\n\nIt's step 3 that I'm squinting at. I wondered how, exactly, that\nworked in practice, and based on this issue the answer might be \"not\nwell\":\n\n https://github.com/openssl/openssl/issues/6109\n\nMatt Caswell appears to be convinced that SSL_set_SSL_CTX() is\nfundamentally broken. So it might just be FUD, but I'm wondering if we\nshould instead be using the SSL_ flavors of the API to reassign the\ncertificate chain on the SSL pointer directly, inside the callback,\ninstead of trying to set them indirectly via the SSL_CTX_ API.\n\nHave you seen any weird behavior like this on your end? I'm starting\nto doubt my test setup... On the plus side, I now have a handful of\ndebugging patches for a future commitfest.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 25 Jul 2024 10:51:05 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serverside SNI support in libpq" }, { "msg_contents": "On Fri, May 24, 2024 at 12:55 PM Cary Huang <[email protected]> wrote:\n> pg_hosts should also have sslpassword_command just like in the postgresql.conf in\n> case the sslkey for a particular host is encrypted with a different password.\n\nGood point. 
There is also the HBA-related handling of client\ncertificate settings (such as pg_ident)...\n\nI really dislike that these things are governed by various different\nfiles, but I also feel like I'm opening up a huge can of worms by\nrequesting nestable configurations.\n\n> + if (ssl_snimode != SSL_SNIMODE_OFF)\n> + SSL_CTX_set_tlsext_servername_callback(context, sni_servername_cb);\n>\n> If libpq client does not provide SNI, this callback will not be called, so there\n> is not a chance to check for a hostname match from pg_hosts, swap the TLS CONTEXT,\n> or possibly reject the connection even in strict mode.\n\nI'm mistrustful of my own test setup (see previous email to the\nthread), but I don't seem to be able to reproduce this. With sslsni=0\nset, strict mode correctly shuts down the connection for me. Can you\nshare your setup?\n\n(The behavior you describe might be a useful setting in practice, to\nlet DBAs roll out strict protection for new clients gracefully without\nimmediately blocking older ones.)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 25 Jul 2024 11:00:41 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serverside SNI support in libpq" } ]
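Because much of this thread turns on what the servername callback can and cannot safely do, a minimal sketch of that hook may be useful for readers who have not used the OpenSSL SNI APIs before. It reuses the callback name from the hunk quoted in Cary's review, but the body is illustrative only: lookup_context_for_host() is a hypothetical stand-in for the pg_hosts.conf lookup, and the real patch's handling of the off/default/strict modes is more involved.

```c
#include <openssl/ssl.h>

/* Hypothetical helper: map an SNI hostname to a pre-built per-host SSL_CTX. */
extern SSL_CTX *lookup_context_for_host(const char *hostname);

static int
sni_servername_cb(SSL *ssl, int *alert, void *arg)
{
	const char *hostname = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);
	SSL_CTX    *new_ctx;

	(void) arg;

	if (hostname == NULL)
	{
		/*
		 * Client sent no SNI.  Whether this path is reached at all, and how
		 * a "strict" mode should treat it, is debated upthread.
		 */
		return SSL_TLSEXT_ERR_NOACK;
	}

	new_ctx = lookup_context_for_host(hostname);
	if (new_ctx == NULL)
	{
		/* in strict mode an unknown hostname refuses the connection */
		*alert = SSL_AD_UNRECOGNIZED_NAME;
		return SSL_TLSEXT_ERR_ALERT_FATAL;
	}

	/* swap in the certificate/key configured for this hostname */
	SSL_set_SSL_CTX(ssl, new_ctx);
	return SSL_TLSEXT_ERR_OK;
}
```

Registration happens once per context at startup, e.g. SSL_CTX_set_tlsext_servername_callback(context, sni_servername_cb), as in the quoted hunk. Note Jacob's caveat about SSL_set_SSL_CTX(): per the linked OpenSSL issue it may be fragile to reparent an SSL object mid-handshake, and an alternative is to install the certificate chain and key directly on the SSL object inside the callback (the SSL_use_certificate()/SSL_use_PrivateKey() family) instead of swapping contexts.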
[ { "msg_contents": "Per src/tools/RELEASE_CHANGES, we still have some routine\ntasks to finish before beta1:\n\n* Run mechanical code beautification tools:\n pgindent, pgperltidy, and \"make reformat-dat-files\"\n (complete steps from src/tools/pgindent/README)\n\n* Renumber any manually-assigned OIDs between 8000 and 9999\n to lower numbers, using renumber_oids.pl (see notes in bki.sgml)\n\n(Although I expect the pgindent changes to be minimal, there will\nbe some since src/tools/pgindent/typedefs.list hasn't been\nmaintained entirely accurately.)\n\nI've been holding off doing this so as not to joggle the elbows\nof people trying to complete open items or revert failed patches,\nbut it's getting to be time. Any objections to doing these\nthings on Tuesday May 14th?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 10 May 2024 11:43:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "End-of-cycle code beautification tasks" } ]
[ { "msg_contents": "For example, 'i'::citext = 'İ'::citext fails to be true.\n\nIt must now be using UTF-8 (unlike, say, Drongo) and non-C ctype,\ngiven that the test isn't skipped. This isn't the first time that\nwe've noticed that Windows doesn't seem to know about İ→i (see [1]),\nbut I don't think anyone has explained exactly why, yet. It could be\nthat it just doesn't know about that in any locale, or that it is\nlocale-dependent and would only do that for Turkish, the same reason\nwe skip the test for ICU, or ...\n\nEither way, it seems like we'll need to skip that test on Windows if\nwe want hamerkop to be green. That can probably be cribbed from\ncollate.windows.win1252.sql into contrib/citext/sql/citext_utf8.sql's\nprelude... I just don't know how to explain it in the comment 'cause I\ndon't know why.\n\nOne new development in Windows-land is that the system now does\nactually support UTF-8 in the runtime libraries[2]. You can set it at\na system level, or for an application at build time, or by adding\n\".UTF-8\" to a locale name when opening the locale (apparently much\nmore like Unix systems, but I don't know what exactly it does). I\nwonder why we see this change now... did hamerkop have that ACP=UTF-8\nsetting applied on purpose, or if computers in Japan started doing\nthat by default instead of using Shift-JIS, or if it only started\npicking UTF-8 around the time of the Meson change somehow, or the\ninitdb-template change. It's a little hard to tell from the logs.\n\n[1] https://www.postgresql.org/message-id/CAC%2BAXB10p%2BmnJ6wrAEm6jb51%2B8%3DBfYzD%3Dw6ftHRbMjMuSFN3kQ%40mail.gmail.com\n[2] https://learn.microsoft.com/en-us/windows/apps/design/globalizing/use-utf8-code-page\n\n\n", "msg_date": "Sat, 11 May 2024 13:14:46 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Sat, May 11, 2024 at 1:14 PM Thomas Munro <[email protected]> wrote:\n> Either way, it seems like we'll need to skip that test on Windows if\n> we want hamerkop to be green. That can probably be cribbed from\n> collate.windows.win1252.sql into contrib/citext/sql/citext_utf8.sql's\n> prelude... I just don't know how to explain it in the comment 'cause I\n> don't know why.\n\nHere's a minimal patch like that.\n\nI don't think it's worth back-patching. Only 15 and 16 could possibly\nbe affected, I think, because the test wasn't enabled before that. I\nthink this is all just a late-appearing blow-up predicted by the\ncommit that enabled it:\n\ncommit c2e8bd27519f47ff56987b30eb34a01969b9a9e8\nAuthor: Tom Lane <[email protected]>\nDate: Wed Jan 5 13:30:07 2022 -0500\n\n Enable routine running of citext's UTF8-specific test cases.\n\n These test cases have been commented out since citext was invented,\n because at the time we had no nice way to deal with tests that\n have restrictions such as requiring UTF8 encoding. But now we do\n have a convention for that, ie put them into a separate test file\n with an early-exit path. So let's enable these tests to run when\n their prerequisites are satisfied.\n\n (We may have to tighten the prerequisites beyond the \"encoding = UTF8\n and locale != C\" checks made here. But let's put it on the buildfarm\n and see what blows up.)\n\nHamerkop is already green on the 15 and 16 branches, apparently\nbecause it's using the pre-meson test stuff that I guess just didn't\nrun the relevant test. 
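As an illustration, the kind of early-exit prelude being discussed for contrib/citext/sql/citext_utf8.sql, cribbed from collate.windows.win1252.sql, would look something like the sketch below. The guard actually committed in cff4e5a3 may differ in detail, and keying the Windows check off version() is an assumption here, since, as noted, nobody can yet explain why the İ→i folding misbehaves on that platform.

SELECT getdatabaseencoding() <> 'UTF8'
       OR version() ~ 'Visual C\+\+' OR version() ~ 'mingw32'
       AS skip_test \gset
\if :skip_test
\quit
\endif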
In other words, nobody would notice the\ndifference anyway, and a master-only fix would be enough to end this\n44-day red streak.", "msg_date": "Sun, 12 May 2024 11:31:09 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Sat, May 11, 2024 at 1:14 PM Thomas Munro <[email protected]> wrote:\n>> Either way, it seems like we'll need to skip that test on Windows if\n>> we want hamerkop to be green. That can probably be cribbed from\n>> collate.windows.win1252.sql into contrib/citext/sql/citext_utf8.sql's\n>> prelude... I just don't know how to explain it in the comment 'cause I\n>> don't know why.\n\n> Here's a minimal patch like that.\n\nWFM until some Windows person cares to probe more deeply.\n\nBTW, I've also been wondering why hamerkop has been failing\nisolation-check in the 12 and 13 branches for the last six months\nor so. It is surely unrelated to this issue, and it looks like\nit must be due to some platform change rather than anything we\ncommitted at the time.\n\nI'm not planning on looking into that question myself, but really\nsomebody ought to. Or is Windows just as dead as AIX, in terms of\nanybody being willing to put effort into supporting it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 May 2024 01:34:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Hello Tom,\n\n12.05.2024 08:34, Tom Lane wrote:\n> BTW, I've also been wondering why hamerkop has been failing\n> isolation-check in the 12 and 13 branches for the last six months\n> or so. It is surely unrelated to this issue, and it looks like\n> it must be due to some platform change rather than anything we\n> committed at the time.\n>\n> I'm not planning on looking into that question myself, but really\n> somebody ought to. Or is Windows just as dead as AIX, in terms of\n> anybody being willing to put effort into supporting it?\n\nI've reproduced the failure locally with GSS enabled, so I'll try to\nfigure out what's going on here in the next few days.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 12 May 2024 14:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On 2024-05-12 Su 01:34, Tom Lane wrote:\n> BTW, I've also been wondering why hamerkop has been failing\n> isolation-check in the 12 and 13 branches for the last six months\n> or so. It is surely unrelated to this issue, and it looks like\n> it must be due to some platform change rather than anything we\n> committed at the time.\n\n\nPossibly. It looks like this might be the issue:\n\n+Connection 2 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: Credential cache is empty\n+FATAL: sorry, too many clients already\n\n\nThere are several questions here, including:\n\n1. why isn't it failing on later branches?\n2. why isn't it failing on drongo (which has more modern compiler and OS)?\n\nI think we'll need the help of the animal owner to dig into the issue.\n\n> I'm not planning on looking into that question myself, but really\n> somebody ought to. 
Or is Windows just as dead as AIX, in terms of\n> anybody being willing to put effort into supporting it?\n> \t\t\t\n\n\nWell, this is more or less where I came in back in about 2002 :-) I've \nbeen trying to help support it ever since, mainly motivated by stubborn \npersistence than anything else. Still, I agree that the lack of support \nfor the Windows port from Microsoft over the years has been more than \ndisappointing.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-05-12 Su 01:34, Tom Lane wrote:\n\n\n\n\nBTW, I've also been wondering why hamerkop has been failing\nisolation-check in the 12 and 13 branches for the last six months\nor so. It is surely unrelated to this issue, and it looks like\nit must be due to some platform change rather than anything we\ncommitted at the time.\n\n\n\nPossibly. It looks like this might be the issue:\n+Connection 2 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: Credential cache is empty\n+FATAL: sorry, too many clients already\n\n\nThere are several questions here, including: \n\n1. why isn't it failing on later branches? \n2. why isn't it failing on drongo (which has more modern compiler and OS)?\n\nI think we'll need the help of the animal owner to dig into the issue.\n\n\n\n\n\n\nI'm not planning on looking into that question myself, but really\nsomebody ought to. Or is Windows just as dead as AIX, in terms of\nanybody being willing to put effort into supporting it?\n\t\t\t\n\n\n\nWell, this is more or less where I came in back in about 2002 :-) I've been trying to help support it ever since, mainly motivated by stubborn persistence than anything else. Still, I agree that the lack of support for the Windows port from Microsoft over the years has been more than disappointing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 12 May 2024 08:26:10 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On 2024-05-12 Su 08:26, Andrew Dunstan wrote:\n>\n>\n> On 2024-05-12 Su 01:34, Tom Lane wrote:\n>> BTW, I've also been wondering why hamerkop has been failing\n>> isolation-check in the 12 and 13 branches for the last six months\n>> or so. It is surely unrelated to this issue, and it looks like\n>> it must be due to some platform change rather than anything we\n>> committed at the time.\n>\n>\n> Possibly. It looks like this might be the issue:\n>\n> +Connection 2 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: Credential cache is empty\n> +FATAL: sorry, too many clients already\n>\n>\n> There are several questions here, including:\n>\n> 1. why isn't it failing on later branches?\n> 2. why isn't it failing on drongo (which has more modern compiler and OS)?\n>\n> I think we'll need the help of the animal owner to dig into the issue.\n\n\nAha! drongo doesn't have GSSAPI enabled. Will work on that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-05-12 Su 08:26, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2024-05-12 Su 01:34, Tom Lane\n wrote:\n\n\n\nBTW, I've also been wondering why hamerkop has been failing\nisolation-check in the 12 and 13 branches for the last six months\nor so. 
It is surely unrelated to this issue, and it looks like\nit must be due to some platform change rather than anything we\ncommitted at the time.\n\n\n\nPossibly. It looks like this might be the issue:\n+Connection 2 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: Credential cache is empty\n+FATAL: sorry, too many clients already\n\n\nThere are several questions here, including: \n\n1. why isn't it failing on later branches? \n2. why isn't it failing on drongo (which has more modern compiler and OS)?\n\nI think we'll need the help of the animal owner to dig into the issue.\n\n\n\nAha! drongo doesn't have GSSAPI enabled. Will work on that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 12 May 2024 12:20:48 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Mon, May 13, 2024 at 12:26 AM Andrew Dunstan <[email protected]> wrote:\n> Well, this is more or less where I came in back in about 2002 :-) I've been trying to help support it ever since, mainly motivated by stubborn persistence than anything else. Still, I agree that the lack of support for the Windows port from Microsoft over the years has been more than disappointing.\n\nI think \"state of the Windows port\" would make a good discussion topic\nat pgconf.dev (with write-up for those who can't be there). If there\nis interest, I could organise that with a short presentation of the\nissues I am aware of so far and possible improvements and\nsmaller-things-we-could-drop-instead-of-dropping-the-whole-port. I\nwould focus on technical stuff, not who-should-be-doing-what, 'cause I\ncan't make anyone do anything.\n\nFor citext_utf8, I pushed cff4e5a3. Hamerkop runs infrequently, so\nhere's hoping for 100% green on master by Tuesday or so.\n\n\n", "msg_date": "Mon, 13 May 2024 10:05:25 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "\nOn 2024-05-12 Su 18:05, Thomas Munro wrote:\n> On Mon, May 13, 2024 at 12:26 AM Andrew Dunstan <[email protected]> wrote:\n>> Well, this is more or less where I came in back in about 2002 :-) I've been trying to help support it ever since, mainly motivated by stubborn persistence than anything else. Still, I agree that the lack of support for the Windows port from Microsoft over the years has been more than disappointing.\n> I think \"state of the Windows port\" would make a good discussion topic\n> at pgconf.dev (with write-up for those who can't be there). If there\n> is interest, I could organise that with a short presentation of the\n> issues I am aware of so far and possible improvements and\n> smaller-things-we-could-drop-instead-of-dropping-the-whole-port. I\n> would focus on technical stuff, not who-should-be-doing-what, 'cause I\n> can't make anyone do anything.\n>\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 13 May 2024 10:19:04 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> For citext_utf8, I pushed cff4e5a3. 
Hamerkop runs infrequently, so\n> here's hoping for 100% green on master by Tuesday or so.\n\nIn the meantime, some off-list investigation by Alexander Lakhin\nhas turned up a good deal of information about why we're seeing\nfailures on hamerkop in the back branches. Summarizing, it\nappears that\n\n1. In a GSS-enabled Windows build without any active Kerberos server,\nlibpq's pg_GSS_have_cred_cache() succeeds, allowing libpq to try to\nopen a GSSAPI connection, but then gss_init_sec_context() fails,\nleading to client-side reports like this:\n\n+Connection 2 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: Credential cache is empty\n+FATAL: sorry, too many clients already\n\n(The first of these lines comes out during the attempted GSS\nconnection, the second during the only-slightly-more-successful\nnon-GSS connection.) So that's problem number 1: how is it that\ngss_acquire_cred() succeeds but then gss_init_sec_context() disclaims\nknowledge of any credentials? Can we find a way to get this failure\nto be detected during pg_GSS_have_cred_cache()? It is mighty\nexpensive to launch a backend connection that is doomed to fail,\nespecially when this happens during *every single libpq connection\nattempt*.\n\n2. Once gss_init_sec_context() fails, libpq abandons the connection\nand starts over; since it has already initiated a GSS handshake\non the connection, there's not much choice. Although libpq faithfully\nissues closesocket() on the abandoned connection, Alexander found\nthat the connected backend doesn't reliably see that: it may just\nsit there until the AuthenticationTimeout elapses (1 minute by\ndefault). That backend is still consuming a postmaster child slot,\nso if this happens on any sizable fraction of test connection\nattempts, it's little surprise that we soon get \"sorry, too many\nclients already\" failures.\n\n3. We don't know exactly why hamerkop suddenly started seeing these\nfailures, but a plausible theory emerges after noting that its\nreported time for the successful \"make check\" step dropped pretty\nsubstantially right when this started. In the v13 branch, \"make\ncheck\" was taking 2:18 or more in the several runs right before the\nfirst isolationcheck failure, but 1:40 or less just after. So it\nlooks like the animal was moved onto faster hardware. That feeds\ninto this problem because, with a successful isolationcheck run\ntaking more than a minute, there was enough time for some of the\nearlier stuck sessions to time out and exit before their\npostmaster-child slots were needed.\n\nAlexander confirmed this theory by demonstrating that the main\nregression tests in v15 would pass if he limited their speed enough\n(by reducing the CPU allowed to a VM) but not at full speed. So the\nbuildfarm results suggesting this is only an issue in <= v13 must\nbe just a timing artifact; the problem is still there.\n\nOf course, backends waiting till timeout is not a good behavior\nindependently of what is triggering that, so we have two problems to\nsolve here, not one. I have no ideas about the gss_init_sec_context()\nfailure, but I see a plausible theory about the failure to detect\nsocket closure in Microsoft's documentation about closesocket() [1]:\n\n If the l_onoff member of the LINGER structure is zero on a stream\n socket, the closesocket call will return immediately and does not\n receive WSAEWOULDBLOCK whether the socket is blocking or\n nonblocking. 
However, any data queued for transmission will be\n sent, if possible, before the underlying socket is closed. This is\n also called a graceful disconnect or close. In this case, the\n Windows Sockets provider cannot release the socket and other\n resources for an arbitrary period, thus affecting applications\n that expect to use all available sockets. This is the default\n behavior for a socket.\n\nI'm not sure whether we've got unsent data pending in the problematic\ncondition, nor why it'd remain unsent if we do (shouldn't the backend\nconsume it anyway?). But this has the right odor for an explanation.\n\nI'm pretty hesitant to touch this area myself, because it looks an\nawful lot like commits 6051857fc and ed52c3707, which eventually\nhad to be reverted. I think we need a deeper understanding of\nexactly what Winsock is doing or not doing before we try to fix it.\nI wonder if there are any Microsoft employees around here with\naccess to the relevant source code.\n\nIn the short run, it might be a good idea to deprecate using\n--with-gssapi on Windows builds. A different stopgap idea\ncould be to drastically reduce the default AuthenticationTimeout,\nperhaps only on Windows.\n\n\t\t\tregards, tom lane\n\n[1] https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-closesocket\n\n\n", "msg_date": "Mon, 13 May 2024 16:17:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Tue, May 14, 2024 at 8:17 AM Tom Lane <[email protected]> wrote:\n> I'm not sure whether we've got unsent data pending in the problematic\n> condition, nor why it'd remain unsent if we do (shouldn't the backend\n> consume it anyway?). But this has the right odor for an explanation.\n>\n> I'm pretty hesitant to touch this area myself, because it looks an\n> awful lot like commits 6051857fc and ed52c3707, which eventually\n> had to be reverted. I think we need a deeper understanding of\n> exactly what Winsock is doing or not doing before we try to fix it.\n\nI was beginning to suspect that lingering odour myself. I haven't\nlook at the GSS code but I was imagining that what we have here is\nperhaps not unsent data dropped on the floor due to linger policy\n(unclean socket close on process exist), but rather that the server\ndidn't see the socket as ready to read because it lost track of the\nFD_CLOSE somewhere because the client closed it gracefully, and our\nserver-side FD_CLOSE handling has always been a bit suspect. I wonder\nif the GSS code is somehow more prone to brokenness. One thing we\nlearned in earlier problems was that abortive/error disconnections\ngenerate FD_CLOSE repeatedly, while graceful ones give you only one.\nIn other words, if the other end politely calls closesocket(), the\nserver had better not miss the FD_CLOSE event, because it won't come\nagain. That's what\n\nhttps://commitfest.postgresql.org/46/3523/\n\nis intended to fix. Does it help here? Unfortunately that's\nunpleasantly complicated and unbackpatchable (keeping a side-table of\nsocket FDs and event handles, so we don't lose events between the\ncracks).\n\n\n", "msg_date": "Tue, 14 May 2024 12:38:47 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "13.05.2024 23:17, Tom Lane wrote:\n> 3. 
We don't know exactly why hamerkop suddenly started seeing these\n> failures, but a plausible theory emerges after noting that its\n> reported time for the successful \"make check\" step dropped pretty\n> substantially right when this started. In the v13 branch, \"make\n> check\" was taking 2:18 or more in the several runs right before the\n> first isolationcheck failure, but 1:40 or less just after. So it\n> looks like the animal was moved onto faster hardware. That feeds\n> into this problem because, with a successful isolationcheck run\n> taking more than a minute, there was enough time for some of the\n> earlier stuck sessions to time out and exit before their\n> postmaster-child slots were needed.\n\nYes, and one thing I can't explain yet, is why REL_14_STABLE+ timings\nsubstantially differ from REL_13_STABLE-, say, for the check stage:\nREL_14_STABLE: the oldest available test log (from 2021-10-30) shows\ncheck (00:03:47) and the newest one (from 2024-05-12): check (00:03:18).\nWhilst on REL_13_STABLE the oldest available log (from 2021-08-06) shows\ncheck duration 00:03:00, then it decreased to 00:02:24 (2021-09-22)/\n00:02:14 (2021-11-07), and now it's 1:40, as you said.\n\nLocally I see more or less the same timings on REL_13_STABLE (34, 28, 27\nsecs) and on REL_14_STABLE (33, 29, 29 secs).\n\n14.05.2024 03:38, Thomas Munro wrote:\n> I was beginning to suspect that lingering odour myself. I haven't\n> look at the GSS code but I was imagining that what we have here is\n> perhaps not unsent data dropped on the floor due to linger policy\n> (unclean socket close on process exist), but rather that the server\n> didn't see the socket as ready to read because it lost track of the\n> FD_CLOSE somewhere because the client closed it gracefully, and our\n> server-side FD_CLOSE handling has always been a bit suspect. I wonder\n> if the GSS code is somehow more prone to brokenness. One thing we\n> learned in earlier problems was that abortive/error disconnections\n> generate FD_CLOSE repeatedly, while graceful ones give you only one.\n> In other words, if the other end politely calls closesocket(), the\n> server had better not miss the FD_CLOSE event, because it won't come\n> again. That's what\n>\n> https://commitfest.postgresql.org/46/3523/\n>\n> is intended to fix. Does it help here? Unfortunately that's\n> unpleasantly complicated and unbackpatchable (keeping a side-table of\n> socket FDs and event handles, so we don't lose events between the\n> cracks).\n\nYes, that cure helps here too. 
I've tested it on b282fa88d~1 (the last\nstate when that patch set can be applied).\n\nAn excerpt (all lines related to process 12500) from a failed run log\nwithout the patch set:\n2024-05-14 05:57:29.526 UTC [8228:128] DEBUG:  forked new backend, pid=12500 socket=5524\n2024-05-14 05:57:29.534 UTC [12500:1] [unknown] LOG:  connection received: host=::1 port=51394\n2024-05-14 05:57:29.534 UTC [12500:2] [unknown] LOG: !!!BackendInitialize| before ProcessStartupPacket\n2024-05-14 05:57:29.534 UTC [12500:3] [unknown] LOG: !!!ProcessStartupPacket| before secure_open_gssapi(), GSSok: G\n2024-05-14 05:57:29.534 UTC [12500:4] [unknown] LOG: !!!secure_open_gssapi| before read_or_wait\n2024-05-14 05:57:29.534 UTC [12500:5] [unknown] LOG: !!!read_or_wait| before secure_raw_read(); PqGSSRecvLength: 0, len: 4\n2024-05-14 05:57:29.534 UTC [12500:6] [unknown] LOG: !!!read_or_wait| after secure_raw_read: -1, errno: 10035\n2024-05-14 05:57:29.534 UTC [12500:7] [unknown] LOG: !!!read_or_wait| before WaitLatchOrSocket()\n2024-05-14 05:57:29.549 UTC [12500:8] [unknown] LOG: !!!read_or_wait| after WaitLatchOrSocket\n2024-05-14 05:57:29.549 UTC [12500:9] [unknown] LOG: !!!read_or_wait| before secure_raw_read(); PqGSSRecvLength: 0, len: 4\n2024-05-14 05:57:29.549 UTC [12500:10] [unknown] LOG: !!!read_or_wait| after secure_raw_read: 0, errno: 10035\n2024-05-14 05:57:29.549 UTC [12500:11] [unknown] LOG: !!!read_or_wait| before WaitLatchOrSocket()\n...\n2024-05-14 05:57:52.024 UTC [8228:3678] DEBUG:  server process (PID 12500) exited with exit code 1\n# at the end of the test run\n\nAnd an excerpt (all lines related to process 11736) from a successful run\nlog with the patch set applied:\n2024-05-14 06:03:57.216 UTC [4524:130] DEBUG:  forked new backend, pid=11736 socket=5540\n2024-05-14 06:03:57.226 UTC [11736:1] [unknown] LOG:  connection received: host=::1 port=51914\n2024-05-14 06:03:57.226 UTC [11736:2] [unknown] LOG: !!!BackendInitialize| before ProcessStartupPacket\n2024-05-14 06:03:57.226 UTC [11736:3] [unknown] LOG: !!!ProcessStartupPacket| before secure_open_gssapi(), GSSok: G\n2024-05-14 06:03:57.226 UTC [11736:4] [unknown] LOG: !!!secure_open_gssapi| before read_or_wait\n2024-05-14 06:03:57.226 UTC [11736:5] [unknown] LOG: !!!read_or_wait| before secure_raw_read(); PqGSSRecvLength: 0, len: 4\n2024-05-14 06:03:57.226 UTC [11736:6] [unknown] LOG: !!!read_or_wait| after secure_raw_read: -1, errno: 10035\n2024-05-14 06:03:57.226 UTC [11736:7] [unknown] LOG: !!!read_or_wait| before WaitLatchOrSocket()\n2024-05-14 06:03:57.240 UTC [11736:8] [unknown] LOG: !!!read_or_wait| after WaitLatchOrSocket\n2024-05-14 06:03:57.240 UTC [11736:9] [unknown] LOG: !!!read_or_wait| before secure_raw_read(); PqGSSRecvLength: 0, len: 4\n2024-05-14 06:03:57.240 UTC [11736:10] [unknown] LOG: !!!read_or_wait| after secure_raw_read: 0, errno: 10035\n2024-05-14 06:03:57.240 UTC [11736:11] [unknown] LOG: !!!read_or_wait| before WaitLatchOrSocket()\n2024-05-14 06:03:57.240 UTC [11736:12] [unknown] LOG: !!!read_or_wait| after WaitLatchOrSocket\n2024-05-14 06:03:57.240 UTC [11736:13] [unknown] LOG: !!!secure_open_gssapi| read_or_wait returned -1\n2024-05-14 06:03:57.240 UTC [11736:14] [unknown] LOG: !!!ProcessStartupPacket| secure_open_gssapi() returned error\n2024-05-14 06:03:57.240 UTC [11736:15] [unknown] LOG: !!!BackendInitialize| after ProcessStartupPacket\n2024-05-14 06:03:57.240 UTC [11736:16] [unknown] LOG: !!!BackendInitialize| status: -1\n2024-05-14 06:03:57.240 UTC [11736:17] [unknown] DEBUG: shmem_exit(0): 0 
before_shmem_exit callbacks to make\n2024-05-14 06:03:57.240 UTC [11736:18] [unknown] DEBUG: shmem_exit(0): 0 on_shmem_exit callbacks to make\n2024-05-14 06:03:57.240 UTC [11736:19] [unknown] DEBUG: proc_exit(0): 1 callbacks to make\n2024-05-14 06:03:57.240 UTC [11736:20] [unknown] DEBUG:  exit(0)\n2024-05-14 06:03:57.240 UTC [11736:21] [unknown] DEBUG: shmem_exit(-1): 0 before_shmem_exit callbacks to make\n2024-05-14 06:03:57.240 UTC [11736:22] [unknown] DEBUG: shmem_exit(-1): 0 on_shmem_exit callbacks to make\n2024-05-14 06:03:57.240 UTC [11736:23] [unknown] DEBUG: proc_exit(-1): 0 callbacks to make\n2024-05-14 06:03:57.243 UTC [4524:132] DEBUG:  forked new backend, pid=10536 socket=5540\n2024-05-14 06:03:57.243 UTC [4524:133] DEBUG:  server process (PID 11736) exited with exit code 0\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 14 May 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Alexander Lakhin <[email protected]> writes:\n> 13.05.2024 23:17, Tom Lane wrote:\n>> 3. We don't know exactly why hamerkop suddenly started seeing these\n>> failures, but a plausible theory emerges after noting that its\n>> reported time for the successful \"make check\" step dropped pretty\n>> substantially right when this started. In the v13 branch, \"make\n>> check\" was taking 2:18 or more in the several runs right before the\n>> first isolationcheck failure, but 1:40 or less just after. So it\n>> looks like the animal was moved onto faster hardware.\n\n> Yes, and one thing I can't explain yet, is why REL_14_STABLE+ timings\n> substantially differ from REL_13_STABLE-, say, for the check stage:\n\nAs I mentioned in our off-list discussion, I have a lingering feeling\nthat this v14 commit could be affecting the results somehow:\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL_14_BR [d5a9a661f] 2020-10-18 12:56:43 -0400\n\n Update the Winsock API version requested by libpq.\n \n According to Microsoft's documentation, 2.2 has been the current\n version since Windows 98 or so. Moreover, that's what the Postgres\n backend has been requesting since 2004 (cf commit 4cdf51e64).\n So there seems no reason for libpq to keep asking for 1.1.\n\nI didn't believe at the time that that'd have any noticeable effect,\nbut maybe it somehow made Winsock play a bit nicer with the GSS\nsupport?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 May 2024 10:38:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "14.05.2024 17:38, Tom Lane wrote:\n> As I mentioned in our off-list discussion, I have a lingering feeling\n> that this v14 commit could be affecting the results somehow:\n>\n> Author: Tom Lane <[email protected]>\n> Branch: master Release: REL_14_BR [d5a9a661f] 2020-10-18 12:56:43 -0400\n>\n> Update the Winsock API version requested by libpq.\n> \n> According to Microsoft's documentation, 2.2 has been the current\n> version since Windows 98 or so. 
Moreover, that's what the Postgres\n> backend has been requesting since 2004 (cf commit 4cdf51e64).\n> So there seems no reason for libpq to keep asking for 1.1.\n>\n> I didn't believe at the time that that'd have any noticeable effect,\n> but maybe it somehow made Winsock play a bit nicer with the GSS\n> support?\n\nYes, probably, but may be not nicer, as the test duration increased?\nStill I can't see the difference locally to check that commit.\nWill try other VMs/configurations, maybe I could find a missing factor...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 14 May 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Tue, May 14, 2024 at 9:00 PM Alexander Lakhin <[email protected]> wrote:\n> 14.05.2024 03:38, Thomas Munro wrote:\n> > I was beginning to suspect that lingering odour myself. I haven't\n> > look at the GSS code but I was imagining that what we have here is\n> > perhaps not unsent data dropped on the floor due to linger policy\n> > (unclean socket close on process exist), but rather that the server\n> > didn't see the socket as ready to read because it lost track of the\n> > FD_CLOSE somewhere because the client closed it gracefully, and our\n> > server-side FD_CLOSE handling has always been a bit suspect. I wonder\n> > if the GSS code is somehow more prone to brokenness. One thing we\n> > learned in earlier problems was that abortive/error disconnections\n> > generate FD_CLOSE repeatedly, while graceful ones give you only one.\n> > In other words, if the other end politely calls closesocket(), the\n> > server had better not miss the FD_CLOSE event, because it won't come\n> > again. That's what\n> >\n> > https://commitfest.postgresql.org/46/3523/\n> >\n> > is intended to fix. Does it help here? Unfortunately that's\n> > unpleasantly complicated and unbackpatchable (keeping a side-table of\n> > socket FDs and event handles, so we don't lose events between the\n> > cracks).\n>\n> Yes, that cure helps here too. I've tested it on b282fa88d~1 (the last\n> state when that patch set can be applied).\n\nThanks for checking, and generally for your infinite patience with all\nthese horrible Windows problems.\n\nOK, so we know what the problem is here. Here is the simplest\nsolution I know of for that problem. I have proposed this in the past\nand received negative feedback because it's a really gross hack. But\nI don't personally know what else to do about the back-branches (or\neven if that complex solution is the right way forward for master).\nThe attached kludge at least has the [de]merit of being a mirror image\nof the kludge that follows it for the \"opposite\" event. Does this fix\nit?", "msg_date": "Wed, 15 May 2024 10:26:43 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "15.05.2024 01:26, Thomas Munro wrote:\n> OK, so we know what the problem is here. Here is the simplest\n> solution I know of for that problem. I have proposed this in the past\n> and received negative feedback because it's a really gross hack. But\n> I don't personally know what else to do about the back-branches (or\n> even if that complex solution is the right way forward for master).\n> The attached kludge at least has the [de]merit of being a mirror image\n> of the kludge that follows it for the \"opposite\" event. 
Does this fix\n> it?\n\nYes, I see that abandoned GSS connections are closed immediately, as\nexpected. I have also confirmed that `meson test` with the basic\nconfiguration passes on REL_16_STABLE. So from the outside, the fix\nlooks good to me.\n\nThank you for working on this!\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 15 May 2024 09:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Wed, May 15, 2024 at 6:00 PM Alexander Lakhin <[email protected]> wrote:\n> 15.05.2024 01:26, Thomas Munro wrote:\n> > OK, so we know what the problem is here. Here is the simplest\n> > solution I know of for that problem. I have proposed this in the past\n> > and received negative feedback because it's a really gross hack. But\n> > I don't personally know what else to do about the back-branches (or\n> > even if that complex solution is the right way forward for master).\n> > The attached kludge at least has the [de]merit of being a mirror image\n> > of the kludge that follows it for the \"opposite\" event. Does this fix\n> > it?\n>\n> Yes, I see that abandoned GSS connections are closed immediately, as\n> expected. I have also confirmed that `meson test` with the basic\n> configuration passes on REL_16_STABLE. So from the outside, the fix\n> looks good to me.\n\nAlright, unless anyone has an objection or ideas for improvements, I'm\ngoing to go ahead and back-patch this slightly tidied up version\neverywhere.", "msg_date": "Thu, 16 May 2024 09:46:41 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Thu, May 16, 2024 at 9:46 AM Thomas Munro <[email protected]> wrote:\n> Alright, unless anyone has an objection or ideas for improvements, I'm\n> going to go ahead and back-patch this slightly tidied up version\n> everywhere.\n\nOf course as soon as I wrote that I thought of a useful improvement\nmyself: as far as I can tell, you only need to do the extra poll on\nthe first wait for WL_SOCKET_READABLE for any given WaitEventSet. I\ndon't think it's needed for later waits done by long-lived\nWaitEventSet objects, so we can track that with a flag. That avoids\nadding new overhead for regular backend socket waits after\nauthentication, it's just in these code paths that do a bunch of\nWaitLatchOrSocket() calls that we need to consider FD_CLOSE events\nlost between the cracks.\n\nI also don't know if the condition should include \"&& received == 0\".\nIt probably doesn't make much difference, but by leaving that off we\ndon't have to wonder how peeking interacts with events, ie if it's a\nproblem that we didn't do the \"reset\" step. Thinking about that, I\nrealised that I should probably set reset = true in this new return\npath, just like the \"normal\" WL_SOCKET_READABLE notification path,\njust to be paranoid. (Programming computers you don't have requires\nextra paranoia.)\n\nAny chance you could test this version please Alexander?", "msg_date": "Thu, 16 May 2024 10:43:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Thu, May 16, 2024 at 10:43 AM Thomas Munro <[email protected]> wrote:\n> Any chance you could test this version please Alexander?\n\nSorry, cancel that. v3 is not good. 
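For readers following along, the "peek" these workaround patches revolve around is small: a graceful closesocket() from the peer produces FD_CLOSE only once, so a Windows wait loop that has already consumed (or lost) that event has nothing left to wake it, and the workaround is to ask the socket directly before sleeping. A minimal sketch, assuming Winsock and a non-blocking socket; this is not the actual patch, which does the equivalent inside WaitEventSetWait:

#include <winsock2.h>
#include <stdbool.h>

/*
 * Peek for pending bytes or a graceful EOF before waiting for FD_CLOSE.
 * recv() == 0 means the peer closed politely; report that as "readable"
 * so the caller's own recv() sees the EOF instead of waiting forever.
 */
static bool
socket_has_input_or_eof(SOCKET sock)
{
    char    b;
    int     rc = recv(sock, &b, 1, MSG_PEEK);

    if (rc >= 0)
        return true;            /* data waiting, or EOF (rc == 0) */
    if (WSAGetLastError() == WSAEWOULDBLOCK)
        return false;           /* nothing yet: go ahead and wait */
    return true;                /* real error: let the caller's recv() see it */
}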
I assume it fixes the GSSAPI\nthing and is superficially better, but it doesn't handle code that\ncalls twice in a row and ignores the first result (I know that\nPostgreSQL does that occasionally in a few places), and it's also\nbroken if someone gets recv() = 0 (EOF), and then decides to wait\nanyway. The only ways I can think of to get full reliable poll()-like\nsemantics is to do that peek every time, OR the complicated patch\n(per-socket-workspace + intercepting recv etc). So I'm back to v2.\n\n\n", "msg_date": "Thu, 16 May 2024 13:32:08 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Hello Thomas,\n\n16.05.2024 04:32, Thomas Munro wrote:\n> On Thu, May 16, 2024 at 10:43 AM Thomas Munro <[email protected]> wrote:\n>> Any chance you could test this version please Alexander?\n> Sorry, cancel that. v3 is not good. I assume it fixes the GSSAPI\n> thing and is superficially better, but it doesn't handle code that\n> calls twice in a row and ignores the first result (I know that\n> PostgreSQL does that occasionally in a few places), and it's also\n> broken if someone gets recv() = 0 (EOF), and then decides to wait\n> anyway. The only ways I can think of to get full reliable poll()-like\n> semantics is to do that peek every time, OR the complicated patch\n> (per-socket-workspace + intercepting recv etc). So I'm back to v2.\n\nI've tested v2 and can confirm that it works as v1, `vcregress check`\npasses with no failures on REL_16_STABLE, `meson test` with the basic\nconfiguration too.\n\nBy the way, hamerkop is not configured to enable gssapi for HEAD [1] and\nI could not enable gss locally yet (just passing extra_lib_dirs,\nextra_include_dirs doesn't work for me).\n\nIt looks like we need to find a way to enable it for meson to continue\ntesting v17+ with GSS on Windows.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hamerkop&dt=2024-05-12%2011%3A00%3A28&stg=configure\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 16 May 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> For citext_utf8, I pushed cff4e5a3. Hamerkop runs infrequently, so\n> here's hoping for 100% green on master by Tuesday or so.\n\nMeanwhile, back at the ranch, it doesn't seem that changed anything:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-05-16%2011%3A00%3A32\n\n... and now that I look more closely, the reason why it didn't\nchange anything is that hamerkop is still building 0294df2\non HEAD. All its other branches are equally stuck at the\nend of March. So this is a flat-out-broken animal, and I\nplan to just ignore it until its owner un-sticks it.\n(In particular, I think we shouldn't be in a hurry to push\nthe patch discussed downthread.)\n\nAndrew: maybe the buildfarm server could be made to flag\nanimals building exceedingly old commits? This is the second\nproblem of this sort that I've noticed this month, and you\nreally have to look closely to realize it's happening.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 16:18:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" 
}, { "msg_contents": "\nOn 2024-05-16 Th 16:18, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n>> For citext_utf8, I pushed cff4e5a3. Hamerkop runs infrequently, so\n>> here's hoping for 100% green on master by Tuesday or so.\n> Meanwhile, back at the ranch, it doesn't seem that changed anything:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-05-16%2011%3A00%3A32\n>\n> ... and now that I look more closely, the reason why it didn't\n> change anything is that hamerkop is still building 0294df2\n> on HEAD. All its other branches are equally stuck at the\n> end of March. So this is a flat-out-broken animal, and I\n> plan to just ignore it until its owner un-sticks it.\n> (In particular, I think we shouldn't be in a hurry to push\n> the patch discussed downthread.)\n>\n> Andrew: maybe the buildfarm server could be made to flag\n> animals building exceedingly old commits? This is the second\n> problem of this sort that I've noticed this month, and you\n> really have to look closely to realize it's happening.\n>\n> \t\t\t\n\n\nYeah, that should be doable. Since we have the git ref these days we \nshould be able to mark it as old, or maybe just reject builds for very \nold commits (the latter would be easier).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:07:49 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-05-16 Th 16:18, Tom Lane wrote:\n>> Andrew: maybe the buildfarm server could be made to flag\n>> animals building exceedingly old commits? This is the second\n>> problem of this sort that I've noticed this month, and you\n>> really have to look closely to realize it's happening.\n\n> Yeah, that should be doable. Since we have the git ref these days we \n> should be able to mark it as old, or maybe just reject builds for very \n> old commits (the latter would be easier).\n\nI'd rather have some visible status on the BF dashboard. Invariably,\nwith a problem like this, the animal's owner is unaware there's a\nproblem. If it's just silently not reporting, then no one else will\nnotice either, and we effectively lose an animal (despite it still\nburning electricity to perform those rejected runs).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 17:15:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "\nOn 2024-05-16 Th 17:15, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> On 2024-05-16 Th 16:18, Tom Lane wrote:\n>>> Andrew: maybe the buildfarm server could be made to flag\n>>> animals building exceedingly old commits? This is the second\n>>> problem of this sort that I've noticed this month, and you\n>>> really have to look closely to realize it's happening.\n>> Yeah, that should be doable. Since we have the git ref these days we\n>> should be able to mark it as old, or maybe just reject builds for very\n>> old commits (the latter would be easier).\n> I'd rather have some visible status on the BF dashboard. Invariably,\n> with a problem like this, the animal's owner is unaware there's a\n> problem. 
If it's just silently not reporting, then no one else will\n> notice either, and we effectively lose an animal (despite it still\n> burning electricity to perform those rejected runs).\n>\n> \t\t\t\n\n\nFair enough. That will mean some database changes and other stuff, so it \nwill take a bit longer.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:27:55 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2024-05-16 Th 17:15, Tom Lane wrote:\n>> I'd rather have some visible status on the BF dashboard. Invariably,\n>> with a problem like this, the animal's owner is unaware there's a\n>> problem. If it's just silently not reporting, then no one else will\n>> notice either, and we effectively lose an animal (despite it still\n>> burning electricity to perform those rejected runs).\n\n> Fair enough. That will mean some database changes and other stuff, so it \n> will take a bit longer.\n\nSure, I don't think it's urgent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 17:34:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Hello,\n\nI'm a hamerkop maintainer.\nSorry I have missed the scm error for so long.\n\nToday I switched scmrepo from git.postgresql.org/git/postgresql.git \nto github.com/postgres/postgres.git and successfully modernized\nthe build target code.\n\nwith best regards, Haruka Takatsuka\n\n\nOn Thu, 16 May 2024 16:18:23 -0400\nTom Lane <[email protected]> wrote:\n\n> Thomas Munro <[email protected]> writes:\n> > For citext_utf8, I pushed cff4e5a3. Hamerkop runs infrequently, so\n> > here's hoping for 100% green on master by Tuesday or so.\n> \n> Meanwhile, back at the ranch, it doesn't seem that changed anything:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2024-05-16%2011%3A00%3A32\n> \n> ... and now that I look more closely, the reason why it didn't\n> change anything is that hamerkop is still building 0294df2\n> on HEAD. All its other branches are equally stuck at the\n> end of March. So this is a flat-out-broken animal, and I\n> plan to just ignore it until its owner un-sticks it.\n> (In particular, I think we shouldn't be in a hurry to push\n> the patch discussed downthread.)\n> \n> Andrew: maybe the buildfarm server could be made to flag\n> animals building exceedingly old commits? This is the second\n> problem of this sort that I've noticed this month, and you\n> really have to look closely to realize it's happening.\n> \n> \t\t\tregards, tom lane\n\n\n_____________________________________________________________________\n\n\n", "msg_date": "Fri, 17 May 2024 12:34:02 +0900", "msg_from": "TAKATSUKA Haruka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Buildfarm:63] Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "TAKATSUKA Haruka <[email protected]> writes:\n> I'm a hamerkop maintainer.\n> Sorry I have missed the scm error for so long.\n\n> Today I switched scmrepo from git.postgresql.org/git/postgresql.git \n> to github.com/postgres/postgres.git and successfully modernized\n> the build target code.\n\nThanks very much! 
I see hamerkop has gone green in HEAD.\n\nIt looks like it succeeded in v13 too but failed in v12,\nwhich suggests that the isolationcheck problem is intermittent,\nwhich is not too surprising given our current theory about\nwhat's causing that.\n\nAt this point I think we are too close to the 17beta1 release\nfreeze to mess with it, but I'd support pushing Thomas'\nproposed patch after the freeze is over.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 12:36:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "\nOn 2024-05-16 Th 17:34, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> On 2024-05-16 Th 17:15, Tom Lane wrote:\n>>> I'd rather have some visible status on the BF dashboard. Invariably,\n>>> with a problem like this, the animal's owner is unaware there's a\n>>> problem. If it's just silently not reporting, then no one else will\n>>> notice either, and we effectively lose an animal (despite it still\n>>> burning electricity to perform those rejected runs).\n>> Fair enough. That will mean some database changes and other stuff, so it\n>> will take a bit longer.\n> Sure, I don't think it's urgent.\n\n\nI've pushed a small change, that should just mark with an asterisk any \ngitref that is more than 2 days older than the tip of the branch at the \ntime of reporting.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 18 May 2024 17:31:49 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> I've pushed a small change, that should just mark with an asterisk any \n> gitref that is more than 2 days older than the tip of the branch at the \n> time of reporting.\n\nThanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 18:10:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Fri, May 17, 2024 at 12:00 AM Alexander Lakhin <[email protected]> wrote:\n> I've tested v2 and can confirm that it works as v1, `vcregress check`\n> passes with no failures on REL_16_STABLE, `meson test` with the basic\n> configuration too.\n\nPushed, including back-branches.\n\nThis is all not very nice code and I hope we can delete it all some\nday. Ideas include: (1) Thinking small: change over to the\nWAIT_USE_POLL implementation of latch.c on this OS (Windows has poll()\nthese days), using a socket pair for latch wakeup (i.e. give up trying\nto multiplex with native Windows event handles, even though they are a\ngreat fit for our latch abstraction, as the sockets are too different\nfrom Unix). (2) Thinking big: use native completion-based\nasynchronous socket APIs, as part of a much larger cross-platform AIO\nsocket reengineering project that would deliver higher performance\nnetworking on all OSes. The thought of (2) puts me off investing time\ninto (1), but on the other hand it would be nice if Windows could\nalmost completely share code with some Unixen. 
I may be more inclined\nto actually try it if/when we can rip out the fake signal support,\nbecause it is tangled up with this stuff and does not spark joy.\n\n\n", "msg_date": "Sat, 13 Jul 2024 16:22:12 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "Thomas Munro wrote 2024-05-12 06:31:\n> Hamerkop is already green on the 15 and 16 branches, apparently\n> because it's using the pre-meson test stuff that I guess just didn't\n> run the relevant test. In other words, nobody would notice the\n> difference anyway, and a master-only fix would be enough to end this\n> 44-day red streak.\n\nSorry for necroposting, but in our automated testing system we have\nfound some fails of this test. The most recent one was a couple of\ndays ago (see attached files) on PostgreSQL 15.7. Also I've reported\nthis bug some time ago [1], but provided an example only for\nPostgreSQL 17. Back then the bug was actually found on 15 or 16\nbranches (no logs remain from couple of months back), but i wanted\nto show that it was reproducible on 17.\n\nI would appreciate if you would backpatch this change to 15 and 16\nbranches.\n\n[1] \nhttps://www.postgresql.org/message-id/6885a0b52d06f7e5910d2b6276bbb4e8%40postgrespro.ru\n\nOleg Tselebrovskiy, Postgres Pro", "msg_date": "Thu, 01 Aug 2024 20:37:12 +0700", "msg_from": "Oleg Tselebrovskiy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Fri, Aug 2, 2024 at 1:37 AM Oleg Tselebrovskiy\n<[email protected]> wrote:\n> I would appreciate if you would backpatch this change to 15 and 16\n> branches.\n\nDone (e52a44b8, 91f498fd).\n\nAny elucidation on how and why Windows machines have started using\nUTF-8 would be welcome.\n\n\n", "msg_date": "Fri, 2 Aug 2024 10:54:02 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Aug 1, 2024, at 18:54, Thomas Munro <[email protected]> wrote:\n\n> Done (e52a44b8, 91f498fd).\n> \n> Any elucidation on how and why Windows machines have started using\n> UTF-8 would be welcome.\n\nHaven’t been following this thread, but this post reminded me of an issue I saw with locales on Windows[1]. Could it be that the introduction of Universal CRT[2] in Windows 10 has improved UTF-8 support?\n\nBit of a wild guess, but I assume worth bringing up at least.\n\nD\n\n\n[1]: https://github.com/shogo82148/actions-setup-perl/issues/1713 \n[2]: https://learn.microsoft.com/en-us/cpp/porting/upgrade-your-code-to-the-universal-crt?view=msvc-170\n\n", "msg_date": "Fri, 2 Aug 2024 10:11:10 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" }, { "msg_contents": "On Sat, Aug 3, 2024 at 2:11 AM David E. Wheeler <[email protected]> wrote:\n> Haven’t been following this thread, but this post reminded me of an issue I saw with locales on Windows[1]. Could it be that the introduction of Universal CRT[2] in Windows 10 has improved UTF-8 support?\n\nYeah. We have a few places that claim that Windows APIs can't do\nUTF-8 and they have to do extra wchar_t conversions, but that doesn't\nseem to be true on modern Windows. 
Example:\n\nhttps://github.com/postgres/postgres/blob/7926a9a80f6daf0fcc1feb1bee5c51fd001bc173/src/backend/utils/adt/pg_locale.c#L1814\n\nI suspect that at least when the locale name is \"en-US.UTF-8\", then\nthe regular POSIXoid strcoll_l() function should just work™ and we\ncould delete all that stuff and save Windows users a lot of wasted CPU\ncycles.\n\n\n", "msg_date": "Sat, 3 Aug 2024 10:02:48 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is citext/regress failing on hamerkop?" } ]
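Closing aside on the UTF-8 question: what "the system now does actually support UTF-8 in the runtime libraries" means in practice, per the Microsoft documentation cited earlier in the thread, is that a UCRT program on Windows 10+ can ask for a UTF-8 variant of a locale by name and then use the ordinary byte-oriented functions. A small sketch under that assumption (untested here, and the exact locale-name spellings accepted can vary by Windows release):

#include <locale.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    /* ".UTF-8"-suffixed names are the documented way to get UTF-8 semantics */
    if (setlocale(LC_ALL, "en-US.UTF-8") == NULL)
    {
        puts("UTF-8 locale not available on this system");
        return 1;
    }

    /* plain strcoll() on UTF-8 encoded bytes, no wchar_t round trip */
    printf("%d\n", strcoll("\xc3\xa9tude", "etude"));   /* "étude" vs "etude" */
    return 0;
}

If that holds up, the extra UTF-8-to-wide-char conversions in pg_locale.c pointed at in the last message really would be removable, but that is the suspicion being voiced above, not something this sketch establishes.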
[ { "msg_contents": "Hello Everyone!\n\nIs there any chance to get some kind of a result set sifting mechanism in\nPostgres?\n\nWhat I am looking for is a way to get for example: \"nulls last\" in a result\nset, without having to call \"order by\" or having to use UNION ALL, and if\npossible to get this in a single result set pass.\n\nSomething on this line: SELECT a, b, c FROM my_table WHERE a nulls last\nOFFSET 0 LIMIT 25\n\nI don't want to use order by or union all because these are time consuming\noperations, especially on large data sets and when comparations are done\non dynamic values (eg: geolocation distances in between a mobile and a\nstatic location)\n\nWhat I would expect from such a feature, will be speeds comparable with non\nsorted selects, while getting a very rudimentary ordering.\n\nA use case for such a mechanism will be the implementation of QUICK\nrelevant search results for a search engine.\n\nI'm not familiar with how Postgres logic handles simple select queries, but\nthe way I would envision a result set sifting logic, would be to collect\nthe result set, in 2 separate lists, based on the sifting condition, and\nthen concatenate these 2 lists and return the result, when the pagination\nrequests conditions are met.\n\nAny idea if such a functionality is feasible ?\n\nThank you.\n\n PS: if ever implemented, the sifting mechanism could be extended to\naccommodate any type of thresholds, not just null values.\n\nHello Everyone!Is there any chance to get some kind of a result set sifting mechanism in Postgres? What I am looking for is a way to get for example: \"nulls last\" in a result set, without having to call \"order by\" or having to use UNION ALL, and if possible to get this in a single result set pass.Something on this line: SELECT a, b, c FROM my_table WHERE a nulls last OFFSET 0 LIMIT 25I don't want to use order by or union all because these are time consuming operations, especially on  large data sets and when comparations are done on dynamic values (eg: geolocation distances in between a mobile and a static location) What I would expect from such a feature, will be speeds comparable with non sorted selects, while getting a very rudimentary ordering.A use case for such a mechanism will be the implementation of QUICK relevant search results for a search engine.I'm not familiar with how Postgres logic handles simple select queries, but the way I would envision a result set sifting logic, would be to collect the result set, in 2 separate lists, based on the sifting condition, and then concatenate these 2 lists and return the result, when the pagination requests conditions are met.Any idea if such a functionality is feasible ?Thank you.  PS: if ever implemented, the \n\nsifting mechanism could be extended to accommodate any type of thresholds, not just null values.", "msg_date": "Sat, 11 May 2024 08:19:49 -0400", "msg_from": "aa <[email protected]>", "msg_from_op": true, "msg_subject": "Is there any chance to get some kind of a result set sifting\n mechanism in Postgres?" }, { "msg_contents": "Hi,\ndo I interpret your idea correctly: You want some sort of ordering without ordering?\nKind regardsWW\n\n Am Montag, 13. Mai 2024 um 10:40:38 MESZ hat aa <[email protected]> Folgendes geschrieben: \n \n Hello Everyone!\nIs there any chance to get some kind of a result set sifting mechanism in Postgres? 
\nWhat I am looking for is a way to get for example: \"nulls last\" in a result set, without having to call \"order by\" or having to use UNION ALL, and if possible to get this in a single result set pass.\nSomething on this line: SELECT a, b, c FROM my_table WHERE a nulls last OFFSET 0 LIMIT 25\nI don't want to use order by or union all because these are time consuming operations, especially on  large data sets and when comparations are done on dynamic values (eg: geolocation distances in between a mobile and a static location) \nWhat I would expect from such a feature, will be speeds comparable with non sorted selects, while getting a very rudimentary ordering.\nA use case for such a mechanism will be the implementation of QUICK relevant search results for a search engine.\nI'm not familiar with how Postgres logic handles simple select queries, but the way I would envision a result set sifting logic, would be to collect the result set, in 2 separate lists, based on the sifting condition, and then concatenate these 2 lists and return the result, when the pagination requests conditions are met.\nAny idea if such a functionality is feasible ?\nThank you.\n  PS: if ever implemented, the sifting mechanism could be extended to accommodate any type of thresholds, not just null values.\n\n\n \n\nHi,do I interpret your idea correctly: You want some sort of ordering without ordering?Kind regardsWW\n\n\n\n Am Montag, 13. Mai 2024 um 10:40:38 MESZ hat aa <[email protected]> Folgendes geschrieben:\n \n\n\nHello Everyone!Is there any chance to get some kind of a result set sifting mechanism in Postgres? What I am looking for is a way to get for example: \"nulls last\" in a result set, without having to call \"order by\" or having to use UNION ALL, and if possible to get this in a single result set pass.Something on this line: SELECT a, b, c FROM my_table WHERE a nulls last OFFSET 0 LIMIT 25I don't want to use order by or union all because these are time consuming operations, especially on  large data sets and when comparations are done on dynamic values (eg: geolocation distances in between a mobile and a static location) What I would expect from such a feature, will be speeds comparable with non sorted selects, while getting a very rudimentary ordering.A use case for such a mechanism will be the implementation of QUICK relevant search results for a search engine.I'm not familiar with how Postgres logic handles simple select queries, but the way I would envision a result set sifting logic, would be to collect the result set, in 2 separate lists, based on the sifting condition, and then concatenate these 2 lists and return the result, when the pagination requests conditions are met.Any idea if such a functionality is feasible ?Thank you.  PS: if ever implemented, the \n\nsifting mechanism could be extended to accommodate any type of thresholds, not just null values.", "msg_date": "Mon, 13 May 2024 09:48:34 +0000 (UTC)", "msg_from": "Wolfgang Wilhelm <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any chance to get some kind of a result set sifting\n mechanism in Postgres?" }, { "msg_contents": "Hi,\nIf you call the action of \"sifting\" ordering, then yes. If you don't call\nit ordering, then no.\n\nIn essence, is the output of a filtering mechanism, done in a single result\nset pass. 
And this pass should be the same pass in charge of collecting the\nresult set in the first place.\n\nThanks\n\n\nOn Mon, May 13, 2024 at 5:48 AM Wolfgang Wilhelm <[email protected]>\nwrote:\n\n> Hi,\n>\n> do I interpret your idea correctly: You want some sort of ordering without\n> ordering?\n>\n> Kind regards\n> WW\n>\n> Am Montag, 13. Mai 2024 um 10:40:38 MESZ hat aa <[email protected]>\n> Folgendes geschrieben:\n>\n>\n> Hello Everyone!\n>\n> Is there any chance to get some kind of a result set sifting mechanism in\n> Postgres?\n>\n> What I am looking for is a way to get for example: \"nulls last\" in a\n> result set, without having to call \"order by\" or having to use UNION ALL,\n> and if possible to get this in a single result set pass.\n>\n> Something on this line: SELECT a, b, c FROM my_table WHERE a nulls last\n> OFFSET 0 LIMIT 25\n>\n> I don't want to use order by or union all because these are time consuming\n> operations, especially on large data sets and when comparations are done\n> on dynamic values (eg: geolocation distances in between a mobile and a\n> static location)\n>\n> What I would expect from such a feature, will be speeds comparable with\n> non sorted selects, while getting a very rudimentary ordering.\n>\n> A use case for such a mechanism will be the implementation of QUICK\n> relevant search results for a search engine.\n>\n> I'm not familiar with how Postgres logic handles simple select queries,\n> but the way I would envision a result set sifting logic, would be to\n> collect the result set, in 2 separate lists, based on the sifting\n> condition, and then concatenate these 2 lists and return the result, when\n> the pagination requests conditions are met.\n>\n> Any idea if such a functionality is feasible ?\n>\n> Thank you.\n>\n> PS: if ever implemented, the sifting mechanism could be extended to\n> accommodate any type of thresholds, not just null values.\n>\n>\n>\n>\n\nHi,If you call the action of \"sifting\" ordering, then yes. If you don't call it ordering, then no.In essence, is the output of a filtering mechanism, done in a single result set pass. And this pass should be the same pass in charge of collecting the result set in the first place.ThanksOn Mon, May 13, 2024 at 5:48 AM Wolfgang Wilhelm <[email protected]> wrote:\nHi,do I interpret your idea correctly: You want some sort of ordering without ordering?Kind regardsWW\n\n\n\n Am Montag, 13. Mai 2024 um 10:40:38 MESZ hat aa <[email protected]> Folgendes geschrieben:\n \n\n\nHello Everyone!Is there any chance to get some kind of a result set sifting mechanism in Postgres? 
What I am looking for is a way to get for example: \"nulls last\" in a result set, without having to call \"order by\" or having to use UNION ALL, and if possible to get this in a single result set pass.Something on this line: SELECT a, b, c FROM my_table WHERE a nulls last OFFSET 0 LIMIT 25I don't want to use order by or union all because these are time consuming operations, especially on  large data sets and when comparations are done on dynamic values (eg: geolocation distances in between a mobile and a static location) What I would expect from such a feature, will be speeds comparable with non sorted selects, while getting a very rudimentary ordering.A use case for such a mechanism will be the implementation of QUICK relevant search results for a search engine.I'm not familiar with how Postgres logic handles simple select queries, but the way I would envision a result set sifting logic, would be to collect the result set, in 2 separate lists, based on the sifting condition, and then concatenate these 2 lists and return the result, when the pagination requests conditions are met.Any idea if such a functionality is feasible ?Thank you.  PS: if ever implemented, the \n\nsifting mechanism could be extended to accommodate any type of thresholds, not just null values.", "msg_date": "Mon, 13 May 2024 09:35:22 -0400", "msg_from": "aa <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is there any chance to get some kind of a result set sifting\n mechanism in Postgres?" }, { "msg_contents": "On Mon, 13 May 2024 at 04:40, aa <[email protected]> wrote:\n\n> Hello Everyone!\n>\n> Is there any chance to get some kind of a result set sifting mechanism in\n> Postgres?\n>\n> What I am looking for is a way to get for example: \"nulls last\" in a\n> result set, without having to call \"order by\" or having to use UNION ALL,\n> and if possible to get this in a single result set pass.\n>\n> Something on this line: SELECT a, b, c FROM my_table WHERE a nulls last\n> OFFSET 0 LIMIT 25\n>\n> I don't want to use order by or union all because these are time consuming\n> operations, especially on large data sets and when comparations are done\n> on dynamic values (eg: geolocation distances in between a mobile and a\n> static location)\n>\n\nThis already exists: ORDER BY a IS NULL\n\nI've found it to be more useful than one might initially expect to order by\na boolean expression.\n\nOn Mon, 13 May 2024 at 04:40, aa <[email protected]> wrote:Hello Everyone!Is there any chance to get some kind of a result set sifting mechanism in Postgres? What I am looking for is a way to get for example: \"nulls last\" in a result set, without having to call \"order by\" or having to use UNION ALL, and if possible to get this in a single result set pass.Something on this line: SELECT a, b, c FROM my_table WHERE a nulls last OFFSET 0 LIMIT 25I don't want to use order by or union all because these are time consuming operations, especially on  large data sets and when comparations are done on dynamic values (eg: geolocation distances in between a mobile and a static location) This already exists: ORDER BY a IS NULLI've found it to be more useful than one might initially expect to order by a boolean expression.", "msg_date": "Mon, 13 May 2024 10:37:27 -0400", "msg_from": "Isaac Morland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any chance to get some kind of a result set sifting\n mechanism in Postgres?" 
}, { "msg_contents": "aa <[email protected]> writes:\n> If you call the action of \"sifting\" ordering, then yes. If you don't call\n> it ordering, then no.\n> In essence, is the output of a filtering mechanism, done in a single result\n> set pass. And this pass should be the same pass in charge of collecting the\n> result set in the first place.\n\nSounds a lot like a WHERE clause to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2024 14:22:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any chance to get some kind of a result set sifting\n mechanism in Postgres?" }, { "msg_contents": "On 05/13/24 09:35, aa wrote:\n> If you call the action of \"sifting\" ordering, then yes. If you don't call\n> it ordering, then no.\n\n\nOne thing seems intriguing about this idea: normally, an expected\nproperty of any ORDER BY is that no result row can be passed down\nthe pipe until all input rows have been seen.\n\nIn the case of ORDER BY <boolean expression>, or more generally\nORDER BY <expression type with small discrete value space>, a\npigeonhole sort could be used—and rows mapping to the ordered-first\npigeonhole could be passed down the pipe on sight. (Rows mapping to\nany later pigeonhole still have to be held to the end, unless some\nfurther analysis can identify when all rows for earlier pigeonholes\nmust have been seen).\n\nI don't know whether any such ORDER BY strategy is already implemented,\nor would be useful enough to be worth implementing, but it might be\nhandy in cases where a large number of rows are expected to map to\nthe first pigeonhole. Intermediate storage wouldn't be needed for those,\nand some follow-on processing could go on concurrently.\n\nThe usage example offered here (\"sift\" nulls last, followed by\na LIMIT) does look a lot like a job for a WHERE clause though.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 13 May 2024 14:59:01 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any chance to get some kind of a result set sifting\n mechanism in Postgres?" } ]
[ { "msg_contents": "I spent some time looking into the performance complaint at [1],\nwhich for the sake of self-containedness is\n\nCREATE TABLE t(a int, b int);\n\nINSERT INTO t(a, b)\nSELECT\n (random() * 123456)::int AS a,\n (random() * 123456)::int AS b\nFROM\n generate_series(1, 12345678);\n\nCREATE INDEX my_idx ON t USING BTREE (a, b);\n\nVACUUM ANALYZE t;\n\nexplain (analyze, buffers) select * from t\nwhere row(a, b) > row(123450, 123444) and a = 0\norder by a, b;\n\nThis produces something like\n\n Index Only Scan using my_idx on t (cost=0.43..8.46 rows=1 width=8) (actual time=475.713..475.713 rows=0 loops=1)\n Index Cond: ((ROW(a, b) > ROW(123450, 123444)) AND (a = 0))\n Heap Fetches: 0\n Buffers: shared hit=1 read=33731\n Planning:\n Buffers: shared read=4\n Planning Time: 0.247 ms\n Execution Time: 475.744 ms\n\nshowing that we are reading practically the whole index, which\nis pretty sad considering the index conditions are visibly\nmutually contradictory. What's going on? I find that:\n\n1. _bt_preprocess_keys, which is responsible for detecting\nmutually-contradictory index quals, fails to do so because it\nreally just punts on row-comparison quals: it shoves them\ndirectly from input to output scankey array without any\ncomparisons to other keys. (In particular, that causes the\nrow-comparison qual to appear before the a = 0 one in the\noutput scankey array.)\n\n2. The initial-positioning logic in _bt_first chooses \"a = 0\"\nas determining where to start the scan, because it always\nprefers equality over inequality keys. (This seems reasonable.)\n\n3. We really should stop the scan once we're past the last a = 0\nindex entry, which'd at least limit the damage. However, at both\na = 0 and later entries, the row-comparison qual fails causing\n_bt_check_compare to return immediately, without examining the\na = 0 key which is marked as SK_BT_REQFWD, and thus it does not\nclear \"continuescan\". Only when we finally reach an index entry\nfor which the row-comparison qual succeeds do we notice that\na = 0 is failing so we could stop the scan.\n\nSo this seems pretty horrid. It would be nice if _bt_preprocess_keys\nwere smart enough to notice the contradictory nature of these quals,\nbut I grant that (a) that routine is dauntingly complex already and\n(b) this doesn't seem like a common enough kind of query to be worth\nmoving heaven and earth to optimize.\n\nHowever, I do think we should do something about the unstated\nassumption that _bt_preprocess_keys can emit the quals (for a given\ncolumn) in any random order it feels like. This is evidently not so,\nand it's probably capable of pessimizing other examples besides this\none. Unless we want to slow down _bt_check_compare by making it\ncontinue to examine quals after the first failure, we need to insist\nthat required quals appear before non-required quals, thus ensuring\nthat a comparison failure will clear continuescan if possible.\n\nEven that looks a little nontrivial, because it seems like nbtree\nmay be making some assumptions about the order in which array keys\nappear. I see the bit about\n\n * ... 
Some reordering of the keys\n * within each attribute may be done as a byproduct of the processing here.\n * That process must leave array scan keys (within an attribute) in the same\n * order as corresponding entries from the scan's BTArrayKeyInfo array info.\n\nwhich I could cope with, but then there's this down around line 2967:\n\n * Note: We do things this way around so that our arrays are\n * always in the same order as their corresponding scan keys,\n * even with incomplete opfamilies. _bt_advance_array_keys\n * depends on this.\n\nHowever, despite the rather over-the-top verbosity of commenting in\n_bt_advance_array_keys, it's far from clear why or how it depends on\nthat. So I feel a little stuck about what needs to be done here.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAAdwFAxBjyrYUkH7u%2BEceTaztd1QxBtBY1Teux8K%3DvcGKe%3D%3D-A%40mail.gmail.com\n\n\n", "msg_date": "Sat, 11 May 2024 15:19:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Inefficient nbtree behavior with row-comparison quals" }, { "msg_contents": "On Sat, May 11, 2024 at 3:19 PM Tom Lane <[email protected]> wrote:\n> This produces something like\n>\n> Index Only Scan using my_idx on t (cost=0.43..8.46 rows=1 width=8) (actual time=475.713..475.713 rows=0 loops=1)\n> Index Cond: ((ROW(a, b) > ROW(123450, 123444)) AND (a = 0))\n> Heap Fetches: 0\n> Buffers: shared hit=1 read=33731\n> Planning:\n> Buffers: shared read=4\n> Planning Time: 0.247 ms\n> Execution Time: 475.744 ms\n>\n> showing that we are reading practically the whole index, which\n> is pretty sad considering the index conditions are visibly\n> mutually contradictory. What's going on?\n\nThere's another problem along these lines, that seems at least as bad:\nqueries involving contradictory >= and <= quals aren't recognized as\ncontradictory during preprocessing. There's no reason why\n_bt_preprocessing_keys couldn't detect that case; it just doesn't\nright now.\n\n> So this seems pretty horrid. It would be nice if _bt_preprocess_keys\n> were smart enough to notice the contradictory nature of these quals,\n> but I grant that (a) that routine is dauntingly complex already and\n> (b) this doesn't seem like a common enough kind of query to be worth\n> moving heaven and earth to optimize.\n\nI don't think that it would be all that hard.\n\n> However, I do think we should do something about the unstated\n> assumption that _bt_preprocess_keys can emit the quals (for a given\n> column) in any random order it feels like. This is evidently not so,\n> and it's probably capable of pessimizing other examples besides this\n> one. Unless we want to slow down _bt_check_compare by making it\n> continue to examine quals after the first failure, we need to insist\n> that required quals appear before non-required quals, thus ensuring\n> that a comparison failure will clear continuescan if possible.\n\nObviously that general principle is important, but I don't think that\nwe fail to do the right thing anywhere else -- this seems likely to be\nthe only one.\n\nRow comparisons are kind of a special case, both during preprocessing\nand during the scan itself. I find it natural to blame this problem on\nthe fact that preprocessing makes exactly zero effort to detect\ncontradictory conditions that happen to involve a RowCompare. 
Making\nnon-zero effort in that direction would already be a big improvement.\n\n> Even that looks a little nontrivial, because it seems like nbtree\n> may be making some assumptions about the order in which array keys\n> appear. I see the bit about\n\n> However, despite the rather over-the-top verbosity of commenting in\n> _bt_advance_array_keys, it's far from clear why or how it depends on\n> that. So I feel a little stuck about what needs to be done here.\n\nThe dependency is fairly simple. In the presence of multiple arrays on\nthe same column, which must be contradictory/redundant, but cannot be\nsimplified solely due to lack of suitable cross-type support, we have\nmultiple arrays on the same index column. _bt_advance_array_keys wants\nto deal with this by assuming that the scan key order matches the\narray key order. After all, there is no indirection to disambiguate\nwhich array belongs to which scan key. We make sure that\n_bt_advance_array_keys expectations are never violated by having\npreprocessing make sure that the arrays match input scan key order.\nPreprocessing must also make sure that the output scan keys are in the\nsame order as the input scan keys.\n\nI doubt that this detail makes the task of improving row compare\npreprocessing any harder. It only comes up in scenarios involving\nincomplete opfamilies, which is quite niche (obviously it's not a\nfactor in your test case, for example). But even if you assume that\nincomplete opfamilies are common, it still doesn't seem like this\ndetail matters.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 May 2024 16:12:18 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient nbtree behavior with row-comparison quals" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Sat, May 11, 2024 at 3:19 PM Tom Lane <[email protected]> wrote:\n>> However, despite the rather over-the-top verbosity of commenting in\n>> _bt_advance_array_keys, it's far from clear why or how it depends on\n>> that. So I feel a little stuck about what needs to be done here.\n\n> The dependency is fairly simple. In the presence of multiple arrays on\n> the same column, which must be contradictory/redundant, but cannot be\n> simplified solely due to lack of suitable cross-type support, we have\n> multiple arrays on the same index column. _bt_advance_array_keys wants\n> to deal with this by assuming that the scan key order matches the\n> array key order.\n\nI guess what is not clear to me is what you mean by \"array key order\".\nIs that simply the order of entries in BTArrayKeyInfo[], or are there\nadditional assumptions/restrictions?\n\n> There's another problem along these lines, that seems at least as bad:\n> queries involving contradictory >= and <= quals aren't recognized as\n> contradictory during preprocessing. There's no reason why\n> _bt_preprocessing_keys couldn't detect that case; it just doesn't\n> right now.\n\nUgh, how'd we miss that? I can take a look at this, unless you're\non it already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 May 2024 16:21:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient nbtree behavior with row-comparison quals" }, { "msg_contents": "On Sat, May 11, 2024 at 4:21 PM Tom Lane <[email protected]> wrote:\n> > The dependency is fairly simple. 
In the presence of multiple arrays on\n> > the same column, which must be contradictory/redundant, but cannot be\n> > simplified solely due to lack of suitable cross-type support, we have\n> > multiple arrays on the same index column. _bt_advance_array_keys wants\n> > to deal with this by assuming that the scan key order matches the\n> > array key order.\n>\n> I guess what is not clear to me is what you mean by \"array key order\".\n> Is that simply the order of entries in BTArrayKeyInfo[], or are there\n> additional assumptions/restrictions?\n\nI simply mean the order of the entries in BTArrayKeyInfo[].\n\n> > There's another problem along these lines, that seems at least as bad:\n> > queries involving contradictory >= and <= quals aren't recognized as\n> > contradictory during preprocessing. There's no reason why\n> > _bt_preprocessing_keys couldn't detect that case; it just doesn't\n> > right now.\n>\n> Ugh, how'd we miss that? I can take a look at this, unless you're\n> on it already.\n\nMy draft skip scan/MDAM patch already deals with this in passing. So\nyou could say that I was already working on this. But I'm not sure\nthat I would actually say so myself; what I'm doing is tied to far\nmore complicated work.\n\nI haven't attempted to write the kind of targeted fix that you're\nthinking of. It might still be worth writing such a fix now. I\ncertainly have no objections.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 May 2024 16:30:48 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient nbtree behavior with row-comparison quals" }, { "msg_contents": "On Sat, May 11, 2024 at 4:12 PM Peter Geoghegan <[email protected]> wrote:\n> Row comparisons are kind of a special case, both during preprocessing\n> and during the scan itself. I find it natural to blame this problem on\n> the fact that preprocessing makes exactly zero effort to detect\n> contradictory conditions that happen to involve a RowCompare. Making\n> non-zero effort in that direction would already be a big improvement.\n\nBTW, I'm playing with the idea of eliminating the special case logic\naround row comparisons scan keys through smarter preprocessing, of the\nkind that the MDAM paper contemplates for the SQL standard row\nconstructor syntax (under its \"Multi-Valued Predicates\" section). I'm\nnot sure if I'll get around to that anytime soon, but that sort of\napproach seems to have a lot to recommend it. Maybe nbtree shouldn't\neven have to think about row comparisons, except perhaps during\npreprocessing. (Actually, nbtree already doesn't have to deal with\nequality row comparisons -- this scheme would mean that it wouldn't\nhave to deal with row comparison inequalities.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 May 2024 16:45:57 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient nbtree behavior with row-comparison quals" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Sat, May 11, 2024 at 4:21 PM Tom Lane <[email protected]> wrote:\n>>> There's another problem along these lines, that seems at least as bad:\n>>> queries involving contradictory >= and <= quals aren't recognized as\n>>> contradictory during preprocessing. There's no reason why\n>>> _bt_preprocessing_keys couldn't detect that case; it just doesn't\n>>> right now.\n\n>> Ugh, how'd we miss that? 
I can take a look at this, unless you're\n>> on it already.\n\n> My draft skip scan/MDAM patch already deals with this in passing. So\n> you could say that I was already working on this. But I'm not sure\n> that I would actually say so myself; what I'm doing is tied to far\n> more complicated work.\n\nHmm, I'm generally in favor of a lot of small patches rather than one\nenormously complex one. Isn't this point something that could be\nbroken out?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 11 May 2024 17:05:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inefficient nbtree behavior with row-comparison quals" }, { "msg_contents": "On Sat, May 11, 2024 at 5:05 PM Tom Lane <[email protected]> wrote:\n> Hmm, I'm generally in favor of a lot of small patches rather than one\n> enormously complex one. Isn't this point something that could be\n> broken out?\n\nThat's not really possible here.\n\nSkip scan generally works by consing up a special \"skip\" array +\nequality scan key for attributes that lack input scan keys that use\nthe equality strategy (up to and including the least significant input\nscan key's index attribute). In the case of quals like \"WHERE sdate\nBETWEEN '2024-01-01' and '2024-01-31'\" (assume that there is no index\ncolumn before \"sdate\" here), we generate a skip scan key + skip array\nfor \"sdate\" early during preprocessing. This \"array\" works in mostly\nthe same way as arrays work in Postgres 17; the big difference is that\nit procedurally generates its values, on-demand. The values are\ngenerated from within given range of values -- often every possible\nvalue for the underlying type. Often, but not always -- there's also\nrange predicates to consider.\n\nLater preprocessing inside _bt_compare_array_scankey_args() will limit\nthe range of values that our magical skip array generates, when we try\nto compare it against inequalities. So for the BETWEEN example, both\nthe >= scan key and the <= scan key are \"eliminated\", though in a way\nthat leaves us with a skip array + scan key that generates the\nrequired range of values. It's very easy to make\n_bt_compare_array_scankey_args() detect the case where a skip array's\nupper and lower bounds are contradictory, which is how this is\nhandled.\n\nThat said, there'll likely be cases where this kind of transformation\nisn't possible. I hope to be able to always set the scan keys up this\nway, even in cases where skipping isn't expected to be useful (that\nshould be a problem for the index scan to deal with at runtime). But I\nthink I'll probably end up falling short of that ideal in some way or other.\nMaybe that creates a need to independently detect contradictory >= and\n<= scan keys (keys that don't go through this skip array preprocessing\npath).\n\nObviously this is rather up in the air right now. As I said, I think\nthat we could directly fix this case quite easily, if we had to. And\nI'm sympathetic; this is pretty horrible if you happen to run into it.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 May 2024 17:30:45 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient nbtree behavior with row-comparison quals" }, { "msg_contents": "On Sat, May 11, 2024 at 4:12 PM Peter Geoghegan <[email protected]> wrote:\n> The dependency is fairly simple. 
In the presence of multiple arrays on\n> the same column, which must be contradictory/redundant, but cannot be\n> simplified solely due to lack of suitable cross-type support, we have\n> multiple arrays on the same index column. _bt_advance_array_keys wants\n> to deal with this by assuming that the scan key order matches the\n> array key order. After all, there is no indirection to disambiguate\n> which array belongs to which scan key.\n\nMinor correction: there is an indirection. We can get from any\nBTArrayKeyInfo entry to its so->arrayData[] scan key using the\nBTArrayKeyInfo.scan_key offset. It'd just be inconvenient to do it\nthat way around within _bt_advance_array_keys, since\n_bt_advance_array_keys's loop iterates through so->arrayData[] in the\nusual order (just like in _bt_check_compare).\n\nThere is an assertion within _bt_advance_array_keys (and a couple of\nother similar assertions elsewhere) that verify that everybody got it\nright, though. The \"Assert(array->scan_key == ikey);\" assertion. So if\n_bt_preprocess_keys ever violated the expectations held by\n_bt_advance_array_keys, the problem would probably be detected before\nlong.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 11 May 2024 20:08:03 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient nbtree behavior with row-comparison quals" } ]
[ { "msg_contents": "I just joined the mailing list and I don't know how to respond to old messages. However, I have a few suggestions on the upcoming TLS and ALPN changes.\n\nTL;DR\n\nPrefer TLS over SSLRequest or plaintext (from the start)\n\n- ?sslmode=default # try tls, then sslrequest, then plaintext​\n- ?sslmode=tls|tlsv1.3 # require tls, no fallback​\n- ?sslmode=tls-noverify|tlsv1.3-noverify # require tls, ignore CA​\n- --tlsv1.3 # same as curl; require tls​\n- -k, --insecure # same as curl: don't require verification​\n\nAllow the user to specify ALPN (i.e. for privacy or advanced routing)\n\n- ?alpn=pg3|disable|<empty>​\n- --alpn 'pg3|disable|<arbitrary-string>' # same as curl, openssl\n(I don't have much to argue against the long form \"postgres/3\" other than that the trend is to keep it short and sweet and all mindshare (and SEO) for \"pg\" is pretty-well captured by Postgres already)\n\nRationales\n\nI don't really intend to sway anyone who has considered these things and decided against them. My intent is just to shed light for any of these aspects that haven't been carefully considered already.\n\nPrefer the Happy Path\n\n- We've more or less reached Peak Database, therefore Postgres will probably be around in another 20 years. There's probably not going to be a significant advance in storage and indexing technology that would make Postgres obsolete (re: the only NoSQL database left is Mongo, and in the next \"revolution\", that generation will likely come back around to the same conclusions people reached in the 1960s and 1970s: \"relational algebra wins\" and \"SQL syntax is good enough\").\n\n- We've more or less reached Peak Web, therefore TLS or a very close successor will probably be around in another 20 years as well. Even if a non-incremental, non-backwards-compatible protocol that is extraordinarily better were announced by a FAANG consortium tomorrow and immediately available by patches from them in every major product they touch, sunsetting TLS would probably take 20+ years.\n\n- Postgres versions (naturally) take years to make it into mainstream LTS server distros (without explicit user interaction anyway)\n\n- All of that is to say that I believe that optimizing for the Happy Path is going to be a big win. Optimizing for previous behavior will just make \"easy, secure defaults\" take (perhaps decades) longer to be fully adopted, and may not have any measurable benefit now or in the future.\n\nPrefer Standard TLS\n\n- As I experience it (and understand others to experience it), the one-time round trip isn't the main concern for switch to standard TLS, it's the ability to route and proxy.\n\n- Having an extra round trip (try TLS first, then SSLRequest) for increasingly older versions of Postgres will, definitionally, become even less and less important as time goes on.\n\n- Having postgres TLS/SNI/ALPN routable by default will just be more intuitive (it's what I assumed would have been the default anyway), and help increase adoption in cloud, enterprise, and other settings.\n\n- We live in the world of ACME / Let's Encrypt / ZeroSSL. Many proxies have that built in. As such optimizing for unverified TLS takes the user down a path that's just more difficult to begin with (it's easier​ to get a valid TLS cert than it is to get a self-signed cert these days), and more nuanced (upcoming implementors are accustomed to TLS being verified). It's easy to document how to use the letsencrypt client with postgres. 
It will also be increasingly easy to configure an ACME-enable proxy for postgres and not worry about it in the server at all.\n\n- With all that, there's still this issue of downgrade attacks that can't be solved without a breaking change (or unless the user is skilled enough to know to be explicit). I wish that could change with the next major version of postgres - for the client to have to opt-in to insecure connections (I assume that more and more TLS on the serverside will be handled by proxies).\n\nDon't add extra flags\n\n- sslnegotiation=xxxx​ seems to make sslmode=​ more confusing - which modes will be compatible? Will it come into conflict more if others are added in the future? How many conflict-checks will be needed in the client code that make reading that code more complicated? What all has to be duplicated now (i.e. require)? How about the future?\n\n- reusing sslmode=​ and adding new flags is simpler and less error prone\n\n- \"sslnegotiation\" is also prolonging the use of the term \"ssl\" for something that isn't actually \"ssl\"\n\nAllow the user to specify ALPN\n\n- I don't think this is particularly controversial or nuanced, so I don't have much to say here - most TLS tools allow the user to specify ALPN for the same reason they allow specifying the port number - either for privacy, security-by-obscurity, or navigating some form of application or user routing.\n\nRe:\n\n- https://www.postgresql.org/message-id/flat/[email protected]\n\n- https://www.postgresql.org/message-id/flat/[email protected]\n\n- https://www.postgresql.org/message-id/ECNyobMWPeoCd4yj_5J0RsDL1yKC9MbbBwOGCYHgcts7v0BW_-znGIoxcvfzUsf3yKvUB6Lef22OBMZnJyZ-0T2U1qaVflQqEGO0RFHp1PE%3D%40proton.me\n\n- https://www.postgresql.org/message-id/y3hCpl3ALJQPlIn8aKG19aiYbNM_HbchVTOqlwm2Y9OE-sWmtre-Cljlt9Jd_yYsv5S3mDNG-T5OXXfU8GgDrdu2MjTBEcWl23_NUesj8i8%3D%40proton.me\n\nAJ ONeal\nI just joined the mailing list and I don't know how to respond to old messages. However, I have a few suggestions on the upcoming TLS and ALPN changes.TL;DRPrefer TLS over SSLRequest or plaintext (from the start)?sslmode=default # try tls, then sslrequest, then plaintext​?sslmode=tls|tlsv1.3 # require tls, no fallback​?sslmode=tls-noverify|tlsv1.3-noverify # require tls, ignore CA​--tlsv1.3 # same as curl; require tls​-k, --insecure # same as curl: don't require verification​Allow the user to specify ALPN (i.e. for privacy or advanced routing)?alpn=pg3|disable|<empty>​--alpn 'pg3|disable|<arbitrary-string>' # same as curl, openssl(I don't have much to argue against the long form \"postgres/3\" other than that the trend is to keep it short and sweet and all mindshare (and SEO) for \"pg\" is pretty-well captured by Postgres already)RationalesI don't really intend to sway anyone who has considered these things and decided against them. My intent is just to shed light for any of these aspects that haven't been carefully considered already.Prefer the Happy PathWe've more or less reached Peak Database, therefore Postgres will probably be around in another 20 years. 
There's probably not going to be a significant advance in storage and indexing technology that would make Postgres obsolete (re: the only NoSQL database left is Mongo, and in the next \"revolution\", that generation will likely come back around to the same conclusions people reached in the 1960s and 1970s: \"relational algebra wins\" and \"SQL syntax is good enough\").We've more or less reached Peak Web, therefore TLS or a very close successor will probably be around in another 20 years as well. Even if a non-incremental, non-backwards-compatible protocol that is extraordinarily better were announced by a FAANG consortium tomorrow and immediately available by patches from them in every major product they touch, sunsetting TLS would probably take 20+ years.Postgres versions (naturally) take years to make it into mainstream LTS server distros (without explicit user interaction anyway)All of that is to say that I believe that optimizing for the Happy Path is going to be a big win. Optimizing for previous behavior will just make \"easy, secure defaults\" take (perhaps decades) longer to be fully adopted, and may not have any measurable benefit now or in the future.Prefer Standard TLSAs I experience it (and understand others to experience it), the one-time round trip isn't the main concern for switch to standard TLS, it's the ability to route and proxy.Having an extra round trip (try TLS first, then SSLRequest) for increasingly older versions of Postgres will, definitionally, become even less and less important as time goes on.Having postgres TLS/SNI/ALPN routable by default will just be more intuitive (it's what I assumed would have been the default anyway), and help increase adoption in cloud, enterprise, and other settings.We live in the world of ACME / Let's Encrypt / ZeroSSL. Many proxies have that built in. As such optimizing for unverified TLS takes the user down a path that's just more difficult to begin with (it's easier​ to get a valid TLS cert than it is to get a self-signed cert these days), and more nuanced (upcoming implementors are accustomed to TLS being verified). It's easy to document how to use the letsencrypt client with postgres. It will also be increasingly easy to configure an ACME-enable proxy for postgres and not worry about it in the server at all.With all that, there's still this issue of downgrade attacks that can't be solved without a breaking change (or unless the user is skilled enough to know to be explicit). I wish that could change with the next major version of postgres - for the client to have to opt-in to insecure connections (I assume that more and more TLS on the serverside will be handled by proxies).Don't add extra flagssslnegotiation=xxxx​ seems to make sslmode=​ more confusing - which modes will be compatible? Will it come into conflict more if others are added in the future? How many conflict-checks will be needed in the client code that make reading that code more complicated? What all has to be duplicated now (i.e. require)? 
How about the future?reusing sslmode=​ and adding new flags is simpler and less error prone\"sslnegotiation\" is also prolonging the use of the term \"ssl\" for something that isn't actually \"ssl\"Allow the user to specify ALPNI don't think this is particularly controversial or nuanced, so I don't have much to say here - most TLS tools allow the user to specify ALPN for the same reason they allow specifying the port number - either for privacy, security-by-obscurity, or navigating some form of application or user routing.Re:https://www.postgresql.org/message-id/flat/[email protected]://www.postgresql.org/message-id/flat/[email protected]://www.postgresql.org/message-id/ECNyobMWPeoCd4yj_5J0RsDL1yKC9MbbBwOGCYHgcts7v0BW_-znGIoxcvfzUsf3yKvUB6Lef22OBMZnJyZ-0T2U1qaVflQqEGO0RFHp1PE%3D%40proton.mehttps://www.postgresql.org/message-id/y3hCpl3ALJQPlIn8aKG19aiYbNM_HbchVTOqlwm2Y9OE-sWmtre-Cljlt9Jd_yYsv5S3mDNG-T5OXXfU8GgDrdu2MjTBEcWl23_NUesj8i8%3D%40proton.me\n\n\nAJ ONeal", "msg_date": "Sat, 11 May 2024 19:36:17 +0000", "msg_from": "AJ ONeal <[email protected]>", "msg_from_op": true, "msg_subject": "Comments about TLS (no SSLRequest) and ALPN" }, { "msg_contents": "On Sat, 11 May 2024 at 21:36, AJ ONeal <[email protected]> wrote:\n>\n> I just joined the mailing list and I don't know how to respond to old messages. However, I have a few suggestions on the upcoming TLS and ALPN changes.\n>\n> TL;DR\n>\n> Prefer TLS over SSLRequest or plaintext (from the start)\n>\n> ?sslmode=default # try tls, then sslrequest, then plaintext\n> ?sslmode=tls|tlsv1.3 # require tls, no fallback\n> ?sslmode=tls-noverify|tlsv1.3-noverify # require tls, ignore CA\n\nI'm against adding a separate mini configuration language within our options.\n\n> Allow the user to specify ALPN (i.e. for privacy or advanced routing)\n>\n> ?alpn=pg3|disable|<empty>\n> --alpn 'pg3|disable|<arbitrary-string>' # same as curl, openssl\n> (I don't have much to argue against the long form \"postgres/3\" other than that the trend is to keep it short and sweet and all mindshare (and SEO) for \"pg\" is pretty-well captured by Postgres already)\n\nThe \"postgresql\" alpn identifier has been registered, and I don't\nthink it's a good idea to further change this unless you have good\narguments as to why we'd need to change this.\n\nAdditionally, I don't think psql needs to request any protocol other\nthan Postgres' own protocol, so I don't see any need for an \"arbitrary\nstring\" option, as it'd incorrectly imply that we support arbitrary\nprotocols.\n\n> Rationales\n>\n> I don't really intend to sway anyone who has considered these things and decided against them. My intent is just to shed light for any of these aspects that haven't been carefully considered already.\n>\n> Prefer the Happy Path\n[...]\n> Postgres versions (naturally) take years to make it into mainstream LTS server distros (without explicit user interaction anyway)\n\nUsually, the latest version is picked up by the LTS distro on release.\nAdd a feature freeze window, and you're likely no more than 1 major\nversion behind on launch. 
Using an LTS release for its full support\nwindow would then indeed imply a long time of using that version, but\nthat's the user's choice for choosing to use the LTS distro.\n\n> Prefer Standard TLS\n>\n> As I experience it (and understand others to experience it), the one-time round trip isn't the main concern for switch to standard TLS, it's the ability to route and proxy.\n\nNo, the one RTT saved is one of the main benefits here for both the\nclients and servers. Server *owners* may benefit by the improved\nrouting capabilities, but we're not developing a database connection\nrouter, but database clients and servers.\n\n> Having an extra round trip (try TLS first, then SSLRequest) for increasingly older versions of Postgres will, definitionally, become even less and less important as time goes on.\n\nYes. But right now, there are approximately 0 servers that use the\nlatest (not even beta) version of PostgreSQL that supports direct\nSSL/TLS connections. So, for now, we need to support connecting to\nolder databases, and I don't think we can just decide to regress those\nusers' connections when they upgrade their client binaries.\n\n> Having postgres TLS/SNI/ALPN routable by default will just be more intuitive (it's what I assumed would have been the default anyway), and help increase adoption in cloud, enterprise, and other settings.\n\nAFAIK, there are very few companies that actually route PostgreSQL\nclient traffic without a bouncer that load-balances the contents of\nthose connections. While TLS/SNI/SLPN does bring benefits to these\ncompanies, I don't think the use of these features is widespread\nenough to default to a more expensive path for older server versions,\nand newer servers that can't or won't support direct ssl connections\nfor some reason.\n\n> We live in the world of ACME / Let's Encrypt / ZeroSSL. Many proxies have that built in. As such optimizing for unverified TLS takes the user down a path that's just more difficult to begin with (it's easier to get a valid TLS cert than it is to get a self-signed cert these days), and more nuanced (upcoming implementors are accustomed to TLS being verified). It's easy to document how to use the letsencrypt client with postgres. It will also be increasingly easy to configure an ACME-enable proxy for postgres and not worry about it in the server at all.\n\nI don't think we should build specifically to support decrypting\nconnection proxies, and thus I don't think that proxy argument holds\nvalue.\n\n> With all that, there's still this issue of downgrade attacks that can't be solved without a breaking change (or unless the user is skilled enough to know to be explicit). I wish that could change with the next major version of postgres - for the client to have to opt-in to insecure connections (I assume that more and more TLS on the serverside will be handled by proxies).\n\nAFAIK, --sslmode=require already prevents downgrade attacks (assuming\nyour ssl library does its job correctly). What more would PostgreSQL\nneed to do?\n\n> I assume that more and more TLS on the serverside will be handled by proxies\n\nI see only negative value there: We have TLS to ensure end-to-end\nconnection security. A proxy in between adds overhead and negates this\nsecurity principle.\n\n> Don't add extra flags\n\nWe can't add completely new configurable features without adding new\nconfiguration options (i.e. flags) for those configurable features. 
If\nyou don't want new options, you're free to stay at older versions.\n\n> Allow the user to specify ALPN\n>\n> I don't think this is particularly controversial or nuanced, so I don't have much to say here - most TLS tools allow the user to specify ALPN for the same reason they allow specifying the port number - either for privacy, security-by-obscurity, or navigating some form of application or user routing.\n\nAs I mentioned above, I don't see any value, and only demerits, in the\npsql client reporting support for anything other than the protocol\nthat PostgreSQL supports, i.e. the alpn identifier \"postgresql\".\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Sun, 12 May 2024 15:18:44 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comments about TLS (no SSLRequest) and ALPN" }, { "msg_contents": "On Sat, 11 May 2024 at 15:36, AJ ONeal <[email protected]> wrote:\n\n> Having postgres TLS/SNI/ALPN routable by default will just be more intuitive (it's what I assumed would have been the default anyway), and help increase adoption in cloud, enterprise, and other settings.\n\nRouting is primarily a feature for \"cloud-first\" deployments. I.e.\nthings like Kubernetes or equivalent. I don't think people deploying\nto their own metal or on their own network often need this kind of\nfeature today. Of course we don't know what the future holds and it\ncould well become more common.\n\nIn that context I think it's clear that user-oriented tools like psql\nshouldn't change their default behaviour. They need the maximum\nflexibility of being able to negotiate plain text and GSSAUTH\nconnections if possible. It's only applications deployed by the same\ncloud environment building tools that deploy the database and SSL\nproxies that will know where direct SSL connections are necessary.\n\n\n> We live in the world of ACME / Let's Encrypt / ZeroSSL. Many proxies have that built in. As such optimizing for unverified TLS takes the user down a path that's just more difficult to begin with (it's easier to get a valid TLS cert than it is to get a self-signed cert these days), and more nuanced (upcoming implementors are accustomed to TLS being verified). It's easy to document how to use the letsencrypt client with postgres. It will also be increasingly easy to configure an ACME-enable proxy for postgres and not worry about it in the server at all.\n\nI tend to agree that it would be good for our documentation and\ninstall scripts to assume letsencrypt certs can be requested. That\nsaid there are still a lot of database environments that are not on\nnetworks that can reach internet services directly without special\nfirewall or routing rules set up.\n\n\n\n> Allow the user to specify ALPN\n>\n> I don't think this is particularly controversial or nuanced, so I don't have much to say here - most TLS tools allow the user to specify ALPN for the same reason they allow specifying the port number - either for privacy, security-by-obscurity, or navigating some form of application or user routing.\n\nI think I need a citation before I believe this. I can't imagine it\nmakes sense for anything other than general purpose TLS testing tools\nto allow arbitrary protocol names. It seems like something that would\nbe mostly useful for pentesting or regression tests. 
But for actual\ndeployed applications it doesn't make any sense to me.\n\n\n--\ngreg\n\n\n", "msg_date": "Fri, 17 May 2024 13:56:19 -0400", "msg_from": "\"Greg Stark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comments about TLS (no SSLRequest) and ALPN" } ]
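On the downgrade worry raised in this thread, a server-side SQL sketch using the pg_stat_ssl view (one row per connected backend): joining it to pg_stat_activity shows which live client sessions actually negotiated TLS and with what protocol version, one way to spot clients that silently fell back to plaintext:

    SELECT a.pid, a.usename, a.client_addr, s.ssl, s.version, s.cipher
    FROM pg_stat_activity a
    JOIN pg_stat_ssl s USING (pid)
    WHERE a.backend_type = 'client backend';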
[ { "msg_contents": "After sending out my v18 patches:\nhttps://www.postgresql.org/message-id/20240511.162307.2246647987352188848.t-ishii%40sranhm.sra.co.jp\n\nCFbot complains that the patch was broken:\nhttp://cfbot.cputube.org/patch_48_4460.log\n\n=== Applying patches on top of PostgreSQL commit ID 31e8f4e619d9b5856fa2bd5713cb1e2e170a9c7d ===\n=== applying patch ./v18-0001-Row-pattern-recognition-patch-for-raw-parser.patch\ngpatch: **** Only garbage was found in the patch input.\n\nThe patch was generated by git-format-patch (same as previous\npatches). I failed to find any patch format problem in the\npatch. Does anybody know what's wrong here?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sun, 12 May 2024 15:50:06 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "CFbot does not recognize patch contents" }, { "msg_contents": "Hi,\n\nOn Sun, 12 May 2024 at 09:50, Tatsuo Ishii <[email protected]> wrote:\n>\n> After sending out my v18 patches:\n> https://www.postgresql.org/message-id/20240511.162307.2246647987352188848.t-ishii%40sranhm.sra.co.jp\n>\n> CFbot complains that the patch was broken:\n> http://cfbot.cputube.org/patch_48_4460.log\n>\n> === Applying patches on top of PostgreSQL commit ID 31e8f4e619d9b5856fa2bd5713cb1e2e170a9c7d ===\n> === applying patch ./v18-0001-Row-pattern-recognition-patch-for-raw-parser.patch\n> gpatch: **** Only garbage was found in the patch input.\n>\n> The patch was generated by git-format-patch (same as previous\n> patches). I failed to find any patch format problem in the\n> patch. Does anybody know what's wrong here?\n\nI am able to apply all your patches. I found that a similar thing\nhappened before [0] and I guess your case is similar. Adding Thomas to\nCC, he may be able to help more.\n\nNitpick: There is a trailing space warning while applying one of your patches:\nApplying: Row pattern recognition patch (docs).\n.git/rebase-apply/patch:81: trailing whitespace.\n company | tdate | price | first_value | max | count\n\n[0] postgr.es/m/CA%2BhUKGLiY1e%2B1%3DpB7hXJOyGj1dJOfgde%2BHmiSnv3gDKayUFJMA%40mail.gmail.com\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Sun, 12 May 2024 12:08:47 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CFbot does not recognize patch contents" }, { "msg_contents": "> I am able to apply all your patches. I found that a similar thing\n> happened before [0] and I guess your case is similar. Adding Thomas to\n> CC, he may be able to help more.\n\nOk. Thanks for the info.\n\n> Nitpick: There is a trailing space warning while applying one of your patches:\n> Applying: Row pattern recognition patch (docs).\n> .git/rebase-apply/patch:81: trailing whitespace.\n> company | tdate | price | first_value | max | count\n\nYes, I know. The reason why there's a trailing whitespace is, I copied\nthe psql output and pasted it into the docs. I wonder why psql adds\nthe whitespace. 
Unless there's a good reason to do that, I think it's\nbetter to fix psql so that it does not emit trailing spaces in its\noutput.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sun, 12 May 2024 19:11:11 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CFbot does not recognize patch contents" }, { "msg_contents": "On Sun, May 12, 2024 at 10:11 PM Tatsuo Ishii <[email protected]> wrote:\n> > I am able to apply all your patches. I found that a similar thing\n> > happened before [0] and I guess your case is similar. Adding Thomas to\n> > CC, he may be able to help more.\n>\n> Ok. Thanks for the info.\n\nThis obviously fixed itself automatically soon after this message, but\nI figured out what happened: I had not actually fixed that referenced\nbug in cfbot :-(. It was checking for HTTP error codes correctly in\nthe place that reads emails from the archives, but not the place that\ndownloads patches, so in this case I think when it tried to follow the\nlink[1] to download the patch, I guess it must have pulled down a\ntransient Varnish error message (I don't know what, I don't store it\nanywhere), and tried to apply that as a patch. Oops. Fixed[2].\n\n[1] https://www.postgresql.org/message-id/attachment/160138/v18-0001-Row-pattern-recognition-patch-for-raw-parser.patch\n[2] https://github.com/macdice/cfbot/commit/ec33a65a877a88befc29ea220e87b98c89b27dcc\n\n\n", "msg_date": "Wed, 15 May 2024 14:54:39 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CFbot does not recognize patch contents" }, { "msg_contents": "Hi Thomas,\n\n> This obviously fixed itself automatically soon after this message, but\n> I figured out what happened: I had not actually fixed that referenced\n> bug in cfbot :-(. It was checking for HTTP error codes correctly in\n> the place that reads emails from the archives, but not the place that\n> downloads patches, so in this case I think when it tried to follow the\n> link[1] to download the patch, I guess it must have pulled down a\n> transient Varnish error message (I don't know what, I don't store it\n> anywhere), and tried to apply that as a patch. Oops. Fixed[2].\n> \n> [1] https://www.postgresql.org/message-id/attachment/160138/v18-0001-Row-pattern-recognition-patch-for-raw-parser.patch\n> [2] https://github.com/macdice/cfbot/commit/ec33a65a877a88befc29ea220e87b98c89b27dcc\n\nThank you for looking into this. I understand the situation. BTW I\nhave just posted a v19 patch [1] and cfbot took care of it nicely.\n\n[1] https://www.postgresql.org/message-id/20240515.090203.2255390780622503596.t-ishii%40sranhm.sra.co.jp\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n\n", "msg_date": "Wed, 15 May 2024 13:52:42 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CFbot does not recognize patch contents" } ]
[ { "msg_contents": "XLogReadBufferForRedoExtended() precedes RestoreBlockImage() with\nRBM_ZERO_AND_LOCK. Per src/backend/storage/buffer/README:\n\n Once one has determined that a tuple is interesting (visible to the current\n transaction) one may drop the content lock, yet continue to access the\n tuple's data for as long as one holds the buffer pin.\n\nThe use of RBM_ZERO_AND_LOCK is incompatible with that. See a similar\nargument at https://postgr.es/m/flat/[email protected] that led me\nto the cause. Adding a 10ms sleep just after RBM_ZERO_AND_LOCK, I got 2\nfailures in 7 runs of 027_stream_regress.pl, at Assert(ItemIdIsNormal(lpp)) in\nheapgettup_pagemode(). In the core file, lpp pointed into an all-zeros page.\nRestoreBkpBlocks() had been doing RBM_ZERO years before hot standby existed,\nbut it wasn't a bug until queries could run concurrently.\n\nI suspect the fix is to add a ReadBufferMode specified as, \"If the block is\nalready in shared_buffers, do RBM_NORMAL and exclusive-lock the buffer.\nOtherwise, do RBM_ZERO_AND_LOCK.\" That avoids RBM_NORMAL for a block past the\ncurrent end of the file. Like RBM_ZERO_AND_LOCK, it avoids wasting disk reads\non data we discard. Are there other strategies to consider?\n\nI got here from a Windows CI failure,\nhttps://cirrus-ci.com/task/6247605141766144. That involved patched code, but\nadding the sleep suffices on Linux, with today's git master:\n\n--- a/src/backend/access/transam/xlogutils.c\n+++ b/src/backend/access/transam/xlogutils.c\n@@ -388,6 +388,8 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,\n \t\t*buf = XLogReadBufferExtended(rlocator, forknum, blkno,\n \t\t\t\t\t\t\t\t\t get_cleanup_lock ? RBM_ZERO_AND_CLEANUP_LOCK : RBM_ZERO_AND_LOCK,\n \t\t\t\t\t\t\t\t\t prefetch_buffer);\n+\t\tif (!get_cleanup_lock)\n+\t\t\tpg_usleep(10 * 1000);\n \t\tpage = BufferGetPage(*buf);\n \t\tif (!RestoreBlockImage(record, block_id, page))\n \t\t\tereport(ERROR,\n\n\n", "msg_date": "Sun, 12 May 2024 10:16:58 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Hot standby queries see transient all-zeros pages" } ]
[ { "msg_contents": "Hello!\nI have created a patch to allow additional commas between columns, and \nat the end of the SELECT clause.\n\nMotivation:\nCommas of this type are allowed in many programming languages, in some \nit is even recommended to use them at the ends of lists or objects. A \nnew generation of programmers expects a more forgiving language just as \nour generation enjoyed LIMIT and the ability to write `select` in lowercase.\n\nAccepted:\n     SELECT 1,;\n     SELECT 1,,,,,;\n     SELECT *, from information_schema.sql_features;\n     (...) RETURNING a,,b,c,;\n\nNot accepted:\n     SELECT ,;\n     SELECT ,1;\n     SELECT ,,,;\n\nAdvantages:\n- simplifies the creation and debugging of queries by reducing the most \ncommon syntax error,\n- eliminates the need to use the popular `1::int as dummy` at the end of \na SELECT list,\n- simplifies query generators,\n- the query is still deterministic,\n\nDisadvantages:\n- counting of returned columns can be difficult,\n- syntax checkers will still report errors,\n- probably not SQL standard compliant,\n- functionality can be controversial,\n\nI attach the patch along with the tests.\n\nWhat do you think?\n\nYour opinions are very much welcome!", "msg_date": "Mon, 13 May 2024 00:15:14 +0200", "msg_from": "Artur Formella <[email protected]>", "msg_from_op": true, "msg_subject": "Allowing additional commas between columns, and at the end of the\n SELECT clause" }, { "msg_contents": "On Mon, 13 May 2024 at 10:42, Artur Formella <[email protected]> wrote:\n> Motivation:\n> Commas of this type are allowed in many programming languages, in some\n> it is even recommended to use them at the ends of lists or objects.\n\nSingle trailing commas are a feature that's more and more common in\nlanguages, yes, but arbitrary excess commas is new to me. Could you\nprovide some examples of popular languages which have that, as I can't\nthink of any.\n\n> Accepted:\n> SELECT 1,;\n> SELECT 1,,,,,;\n> SELECT *, from information_schema.sql_features;\n> (...) RETURNING a,,b,c,;\n>\n> Not accepted:\n> SELECT ,;\n> SELECT ,1;\n> SELECT ,,,;\n>\n> Advantages:\n> - simplifies the creation and debugging of queries by reducing the most\n> common syntax error,\n> - eliminates the need to use the popular `1::int as dummy` at the end of\n> a SELECT list,\n\nThis is the first time I've heard of this `1 as dummy`.\n\n> - simplifies query generators,\n> - the query is still deterministic,\n\nWhat part of a query would (or would not) be deterministic? I don't\nthink I understand the potential concern here. Is it about whether the\nstatement can be parsed deterministically?\n\n> Disadvantages:\n> - counting of returned columns can be difficult,\n> - syntax checkers will still report errors,\n> - probably not SQL standard compliant,\n\nI'd argue you better raise this with the standard committee if this\nisn't compliant. I don't see enough added value to break standard\ncompliance here, especially when the standard may at some point allow\nonly a single trailing comma (and not arbitrarily many).\n\n> What do you think?\n\nDo you expect `SELECT 1,,,,,,,` to have an equivalent query identifier\nto `SELECT 1;` in pg_stat_statements? Why, or why not?\n\nOverall, I don't think unlimited commas is a good feature. 
A trailing\ncomma in the select list would be less problematic, but I'd still want\nto follow the standard first and foremost.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 13 May 2024 11:24:27 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing additional commas between columns, and at the end of the\n SELECT clause" }, { "msg_contents": "Hi,\n\nAs a developer, I love this feature.\n\nBut as a developer of an universal TDOP SQL parser[1], this can be a\npain. Please request it to the standard.\n\nRegards,\nÉtienne\n\n\n[1]: https://gitlab.com/dalibo/transqlate\n\n\n", "msg_date": "Mon, 13 May 2024 14:05:49 +0200", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing additional commas between columns, and at the end of\n the SELECT clause" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n\n> On Mon, 13 May 2024 at 10:42, Artur Formella <[email protected]> wrote:\n>> Motivation:\n>> Commas of this type are allowed in many programming languages, in some\n>> it is even recommended to use them at the ends of lists or objects.\n>\n> Single trailing commas are a feature that's more and more common in\n> languages, yes, but arbitrary excess commas is new to me. Could you\n> provide some examples of popular languages which have that, as I can't\n> think of any.\n\nThe only one I can think of is Perl, which I'm not sure counts as\npopular any more. JavaScript allows consecutive commas in array\nliterals, but they're not no-ops, they create empty array slots:\n\n ❯ js\n Welcome to Node.js v18.19.0.\n Type \".help\" for more information.\n > [1,,2,,]\n [ 1, <1 empty item>, 2, <1 empty item> ]\n\n- ilmari\n\n\n", "msg_date": "Mon, 13 May 2024 13:28:19 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing additional commas between columns, and at the end of\n the SELECT clause" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Matthias van de Meent <[email protected]> writes:\n>> Single trailing commas are a feature that's more and more common in\n>> languages, yes, but arbitrary excess commas is new to me. Could you\n>> provide some examples of popular languages which have that, as I can't\n>> think of any.\n\n> The only one I can think of is Perl, which I'm not sure counts as\n> popular any more. JavaScript allows consecutive commas in array\n> literals, but they're not no-ops, they create empty array slots:\n\nI'm fairly down on this idea for SQL, because I think it creates\nambiguity for the ROW() constructor syntax. That is:\n\n\t(x,y) is understood to be shorthand for ROW(x,y)\n\n\t(x) is not ROW(x), it's just x\n\n\t(x,) means what?\n\nI realize the original proposal intended to restrict the legality of\nexcess commas to only a couple of places, but to me that just flags\nit as a kluge. ROW(...) ought to work pretty much the same as a\nSELECT list.\n\nAs already mentioned, if you can get some variant of this through the\nSQL standards process, we'll probably adopt it. 
But I doubt that we\nwant to get out front of the committee in this area.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2024 10:11:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing additional commas between columns,\n and at the end of the SELECT clause" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n>> Matthias van de Meent <[email protected]> writes:\n>>> Single trailing commas are a feature that's more and more common in\n>>> languages, yes, but arbitrary excess commas is new to me. Could you\n>>> provide some examples of popular languages which have that, as I can't\n>>> think of any.\n>\n>> The only one I can think of is Perl, which I'm not sure counts as\n>> popular any more. JavaScript allows consecutive commas in array\n>> literals, but they're not no-ops, they create empty array slots:\n>\n> I'm fairly down on this idea for SQL, because I think it creates\n> ambiguity for the ROW() constructor syntax. That is:\n>\n> \t(x,y) is understood to be shorthand for ROW(x,y)\n>\n> \t(x) is not ROW(x), it's just x\n>\n> \t(x,) means what?\n\nPython has a similar issue: (x, y) is a tuple, but (x) is just x, and\nthey use the trailing comma to disambiguate, so (x,) creates a\nsingle-item tuple. AFAIK it's the only place where the trailing comma\nis significant.\n\n> I realize the original proposal intended to restrict the legality of\n> excess commas to only a couple of places, but to me that just flags\n> it as a kluge. ROW(...) ought to work pretty much the same as a\n> SELECT list.\n\nYeah, a more principled approach would be to not special-case target\nlists, but to allow one (and only one) trailing comma everywhere:\nselect, order by, group by, array constructors, row constructors,\neverything that looks like a function call, etc.\n\n> As already mentioned, if you can get some variant of this through the\n> SQL standards process, we'll probably adopt it. But I doubt that we\n> want to get out front of the committee in this area.\n\nAgreed.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n", "msg_date": "Mon, 13 May 2024 17:35:42 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing additional commas between columns, and at the end of\n the SELECT clause" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> I'm fairly down on this idea for SQL, because I think it creates\n>> ambiguity for the ROW() constructor syntax. That is:\n>> \t(x,y) is understood to be shorthand for ROW(x,y)\n>> \t(x) is not ROW(x), it's just x\n>> \t(x,) means what?\n\n> Python has a similar issue: (x, y) is a tuple, but (x) is just x, and\n> they use the trailing comma to disambiguate, so (x,) creates a\n> single-item tuple. AFAIK it's the only place where the trailing comma\n> is significant.\n\nUgh :-(. The semantic principle I'd prefer to have here is \"a trailing\ncomma is ignored\", but what they did breaks that. 
But then again,\nI'm not particularly a fan of anything about Python's syntax.\n\n> Yeah, a more principled approach would be to not special-case target\n> lists, but to allow one (and only one) trailing comma everywhere:\n> select, order by, group by, array constructors, row constructors,\n> everything that looks like a function call, etc.\n\nIf it can be made to work everywhere, that would get my vote.\nI'm not sure if any other ambiguities arise, though. SQL has\na lot of weird syntax corners (and the committee keeps adding\nmore :-().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2024 13:14:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing additional commas between columns,\n and at the end of the SELECT clause" }, { "msg_contents": "On 13.05.2024 11:24, Matthias van de Meent wrote:\n> On Mon, 13 May 2024 at 10:42, Artur Formella<[email protected]> wrote:\n>> Motivation:\n>> Commas of this type are allowed in many programming languages, in some\n>> it is even recommended to use them at the ends of lists or objects.\n> Single trailing commas are a feature that's more and more common in\n> languages, yes, but arbitrary excess commas is new to me. Could you\n> provide some examples of popular languages which have that, as I can't\n> think of any.\nThank for your comment.\nI meant commas are recommended at the end of the list. Sorry for the \nlack of precision.\nTypescript has a popular directive \"rules\": { \"trailing-comma\": false } \nin the tslint.json file, which forces trailing commas. Popular Airbnb \ncoding style require trailing commas by eslint \n(https://github.com/airbnb/javascript?tab=readme-ov-file#functions--signature-invocation-indentation).\n\n> This is the first time I've heard of this `1 as dummy`.\n\ndummy column is a popular way to end SELECT list on R&D phase to avoid \nthe most common syntax error. This way you don't have to pay attention \nto commas.\n\nSELECT <hacking /> , 1::int AS ignoreme FROM <hacking />\n\n>> - simplifies query generators,\n>> - the query is still deterministic,\n> What part of a query would (or would not) be deterministic? I don't\n> think I understand the potential concern here. Is it about whether the\n> statement can be parsed deterministically?\n\nBison doesn't report error or conflict.\n\n> I'd argue you better raise this with the standard committee if this\n> isn't compliant. I don't see enough added value to break standard\n> compliance here, especially when the standard may at some point allow\n> only a single trailing comma (and not arbitrarily many).\n>\n>\n> Do you expect `SELECT 1,,,,,,,` to have an equivalent query identifier\n> to `SELECT 1;` in pg_stat_statements? Why, or why not?\nI don't know, I have a feeling that the queries are equivalent, but I \ndon't know the mechanism.\n> Overall, I don't think unlimited commas is a good feature. 
A trailing\n> comma in the select list would be less problematic, but I'd still want\n> to follow the standard first and foremost.\n\nI will prepare a patch with trailing comma only tomorrow.\n\nThank you.\n\nArtur\n\n\n", "msg_date": "Tue, 14 May 2024 00:26:50 +0200", "msg_from": "Artur Formella <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allowing additional commas between columns, and at the end of the\n SELECT clause" } ]
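To make the ROW() constructor ambiguity raised in the comma discussion above concrete, here is a small SQL illustration. The first two statements are current PostgreSQL behavior; the commented-out third one is the case a trailing-comma rule would have to define, which is exactly the open question from the thread (nothing here assumes anything beyond the syntax already quoted there):

    -- a parenthesized list containing a comma is a row constructor
    SELECT (1, 2);   -- returns the composite value (1,2), i.e. ROW(1,2)

    -- parentheses around a single expression are plain grouping
    SELECT (1);      -- returns the integer 1, not ROW(1)

    -- currently a syntax error; if a trailing comma were accepted here,
    -- would it mean ROW(1) or just 1?
    -- SELECT (1,);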
[ { "msg_contents": "The sepgsql tests have not been integrated into the Meson build system \nyet. I propose to fix that here.\n\nOne problem there was that the tests use a very custom construction \nwhere a top-level shell script internally calls make. I have converted \nthis to a TAP script that does the preliminary checks and then calls \npg_regress directly, without make. This seems to get the job done. \nAlso, once you have your SELinux environment set up as required, the \ntest now works fully automatically; you don't have to do any manual prep \nwork. The whole thing is guarded by PG_TEST_EXTRA=sepgsql now.\n\nSome comments and questions:\n\n- Do we want to keep the old way to run the test? I don't know all the \ntesting scenarios that people might be interested in, but of course it \nwould also be good to cut down on the duplication in the test files.\n\n- Strangely, there was apparently so far no way to get to the build \ndirectory from a TAP script. They only ever want to read files from the \nsource directory. So I had to add that.\n\n- If you go through the pre-test checks in contrib/sepgsql/test_sepgsql, \nI have converted most of these checks to the Perl script. Some of the \nchecks are obsolete, because they check whether the database has been \ncorrectly initialized, which is now done by the TAP script anyway. One \ncheck that I wasn't sure about is the\n\n# 'psql' command must be executable from test domain\n\nThe old test was checking the installation tree, which I guess could be \nset up in random ways. But do we need this kind of check if we are \nusing a temporary installation?\n\nAs mentioned in the patch, the documentation needs to be updated. This \ndepends on the outcome of the question above whether we want to keep the \nold tests in some way.", "msg_date": "Mon, 13 May 2024 08:16:10 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Convert sepgsql tests to TAP" }, { "msg_contents": "I took a quick look at the patch and I like that we standardize things a \nbit. But one thing I am not a fan of are all the use of sed and awk in \nthe Perl script. I would prefer if that logic happened all in Perl, \nespecially since we have some of it in Perl (e.g. chomp). Also I wonder \nif we should not use IPC::Run to do the tests since we already depend on \nit for the other TAP tests.\n\nI have not yet set up an VM with selinux to try the patch out for real \nbut will do so later.\n\nOn 5/13/24 8:16 AM, Peter Eisentraut wrote:\n> - Do we want to keep the old way to run the test?  I don't know all the \n> testing scenarios that people might be interested in, but of course it \n> would also be good to cut down on the duplication in the test files.\n\nI cannot see why. Having two ways to run the tests seems only like a bad \nthing to me.\n\n> - If you go through the pre-test checks in contrib/sepgsql/test_sepgsql, \n> I have converted most of these checks to the Perl script.  Some of the \n> checks are obsolete, because they check whether the database has been \n> correctly initialized, which is now done by the TAP script anyway.  One \n> check that I wasn't sure about is the\n> \n> # 'psql' command must be executable from test domain\n> \n> The old test was checking the installation tree, which I guess could be \n> set up in random ways.  
But do we need this kind of check if we are \n> using a temporary installation?\n\nYeah, that does not seem necessary.\n\nAndreas\n\n\n", "msg_date": "Wed, 24 Jul 2024 16:31:32 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 7/24/24 4:31 PM, Andreas Karlsson wrote:\n> I have not yet set up an VM with selinux to try the patch out for real \n> but will do so later.\n\nI almost got the tests running but it required way too many manual steps \nto just get there and I gave up after just getting segfaults. I had to \nedit sepgsql-regtest.te because sepgsql-regtest.pp would not build \notherwise on Debian bookworm, but after I had done that instead of \ngetting test failures as I expected I just got segfaults. Maybe those \nare caused by an incorrect sepgsql-regtest.pp but this was not nice at \nall to try to get running for someone like me who does not know selinux \nwell.\n\nPeter, what did you do to get the tests running? And should we fix these \ntests to make them more user friendly?\n\nAndreas\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 18:29:30 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 24.07.24 18:29, Andreas Karlsson wrote:\n> On 7/24/24 4:31 PM, Andreas Karlsson wrote:\n>> I have not yet set up an VM with selinux to try the patch out for real \n>> but will do so later.\n> \n> I almost got the tests running but it required way too many manual steps \n> to just get there and I gave up after just getting segfaults. I had to \n> edit sepgsql-regtest.te because sepgsql-regtest.pp would not build \n> otherwise on Debian bookworm, but after I had done that instead of \n> getting test failures as I expected I just got segfaults. Maybe those \n> are caused by an incorrect sepgsql-regtest.pp but this was not nice at \n> all to try to get running for someone like me who does not know selinux \n> well.\n> \n> Peter, what did you do to get the tests running? And should we fix these \n> tests to make them more user friendly?\n\nIn my experience, the tests (both the old and the proposed new) only \nwork on Red Hat-like platforms. I had also tried on Debian but decided \nthat it won't work.\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 18:31:28 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 24.07.24 16:31, Andreas Karlsson wrote:\n> I took a quick look at the patch and I like that we standardize things a \n> bit. But one thing I am not a fan of are all the use of sed and awk in \n> the Perl script. I would prefer if that logic happened all in Perl, \n> especially since we have some of it in Perl (e.g. chomp). Also I wonder \n> if we should not use IPC::Run to do the tests since we already depend on \n> it for the other TAP tests.\n\nIn principle yes, but here I tried not rewriting the tests too much but \njust port them to a newer environment. I think the adjustments you \ndescribe could be done as a second step.\n\n(I don't really have any expertise in sepgsql or selinux, I'm just doing \nthis to reduce the dependency on makefiles for testing. 
So I'm trying \nto use as light a touch as possible.)\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 18:33:53 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 7/24/24 6:31 PM, Peter Eisentraut wrote:\n> On 24.07.24 18:29, Andreas Karlsson wrote:\n>> Peter, what did you do to get the tests running? And should we fix \n>> these tests to make them more user friendly?\n> \n> In my experience, the tests (both the old and the proposed new) only \n> work on Red Hat-like platforms.  I had also tried on Debian but decided \n> that it won't work.\n\nThanks, will try to run them on Rocky Linux when I have calmed down a \nbit. :)\n\nAndreas\n\n\n", "msg_date": "Wed, 24 Jul 2024 18:34:32 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> In my experience, the tests (both the old and the proposed new) only \n> work on Red Hat-like platforms. I had also tried on Debian but decided \n> that it won't work.\n\nYeah, Red Hat is pretty much the only vendor that has pushed SELinux\nfar enough to be usable by non-wizards. I'm not surprised if there\nare outright bugs in other distros' versions of it, as AFAIK\nnobody else turns it on by default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jul 2024 12:36:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 7/24/24 12:36, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> In my experience, the tests (both the old and the proposed new) only \n>> work on Red Hat-like platforms. I had also tried on Debian but decided \n>> that it won't work.\n> \n> Yeah, Red Hat is pretty much the only vendor that has pushed SELinux\n> far enough to be usable by non-wizards. I'm not surprised if there\n> are outright bugs in other distros' versions of it, as AFAIK\n> nobody else turns it on by default.\n\nI tried some years ago to get it working on my Debian-derived Linux Mint \ndesktop and gave up. I think SELinux is a really good tool on RHEL \nvariants, but I don't think many people use it on anything else. As Tom \nsays, perhaps there are a few wizards out there though...\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 24 Jul 2024 12:42:11 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 7/24/24 6:33 PM, Peter Eisentraut wrote:\n> On 24.07.24 16:31, Andreas Karlsson wrote:\n>> I took a quick look at the patch and I like that we standardize things \n>> a bit. But one thing I am not a fan of are all the use of sed and awk \n>> in the Perl script. I would prefer if that logic happened all in Perl, \n>> especially since we have some of it in Perl (e.g. chomp). Also I \n>> wonder if we should not use IPC::Run to do the tests since we already \n>> depend on it for the other TAP tests.\n> \n> In principle yes, but here I tried not rewriting the tests too much but \n> just port them to a newer environment.  I think the adjustments you \n> describe could be done as a second step.\n\nThat reasoning makes a lot of sense and I am in agreement. 
Cleaning that \nup is best for another patch.\n\nAnd managed to get the tests running on Rocky Linux 9 with both \nautotools and meson and everything work as it should.\n\nSo I have two comments:\n\n1) As I said earlier I think we should remove the old code.\n\n2) If we remove the old code I think the launcher script can be merged \ninto the TAP test instead of being a separate shell script. But I am \nfine if you think that is also something for a separate commit.\n\nI like this kind of clean up patch. Good work! :)\n\nAndreas\n\n\n", "msg_date": "Wed, 24 Jul 2024 21:54:37 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "Andreas Karlsson <[email protected]> writes:\n> 1) As I said earlier I think we should remove the old code.\n\nI agree that carrying two versions of the test doesn't seem great.\nHowever, a large part of the purpose of test_sepgsql is to help\npeople debug their sepgsql setup, which is why it goes to great\nlengths to print helpful error messages. I'm worried that making\nit into a TAP test will degrade the usefulness of that, simply\nbecause the TAP infrastructure is pretty damn unfriendly when it\ncomes to figuring out why a test failed. You have to know where\nto even look for the test logfile, and then you have to ignore\na bunch of useless-to-you chatter. I'm not sure if there is much\nwe can do to improve that. (Although if we could, it would\nyield benefits across the whole tree.)\n\nOTOH, I suspect there are so few people using sepgsql that this\ndoesn't matter too much. Probably most of them will be advanced\nhackers who won't blink at digging through a TAP log. We should\nupdate the docs to explain that though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Jul 2024 16:35:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 7/24/24 10:35 PM, Tom Lane wrote:\n> Andreas Karlsson <[email protected]> writes:\n>> 1) As I said earlier I think we should remove the old code.\n> \n> I agree that carrying two versions of the test doesn't seem great.\n> However, a large part of the purpose of test_sepgsql is to help\n> people debug their sepgsql setup, which is why it goes to great\n> lengths to print helpful error messages. I'm worried that making\n> it into a TAP test will degrade the usefulness of that, simply\n> because the TAP infrastructure is pretty damn unfriendly when it\n> comes to figuring out why a test failed. You have to know where\n> to even look for the test logfile, and then you have to ignore\n> a bunch of useless-to-you chatter. I'm not sure if there is much\n> we can do to improve that. (Although if we could, it would\n> yield benefits across the whole tree.)\n\nFor me personally the output from when running it with meson was good \nenough while the output when running with autotools was usable but \nannoying to work with. Meson's integration with TAP is pretty good. But \nwith that said I am a power user and developer used to both meson and \nautotools. 
Unclear what skill we should expect from the target audience \nof test_sepgsql.\n\nAndreas\n\n\n", "msg_date": "Wed, 24 Jul 2024 23:03:45 +0200", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert sepgsql tests to TAP" }, { "msg_contents": "On 24.07.24 23:03, Andreas Karlsson wrote:\n> On 7/24/24 10:35 PM, Tom Lane wrote:\n>> Andreas Karlsson <[email protected]> writes:\n>>> 1) As I said earlier I think we should remove the old code.\n>>\n>> I agree that carrying two versions of the test doesn't seem great.\n>> However, a large part of the purpose of test_sepgsql is to help\n>> people debug their sepgsql setup, which is why it goes to great\n>> lengths to print helpful error messages.  I'm worried that making\n>> it into a TAP test will degrade the usefulness of that, simply\n>> because the TAP infrastructure is pretty damn unfriendly when it\n>> comes to figuring out why a test failed.  You have to know where\n>> to even look for the test logfile, and then you have to ignore\n>> a bunch of useless-to-you chatter.  I'm not sure if there is much\n>> we can do to improve that.  (Although if we could, it would\n>> yield benefits across the whole tree.)\n> \n> For me personally the output from when running it with meson was good \n> enough while the output when running with autotools was usable but \n> annoying to work with. Meson's integration with TAP is pretty good. But \n> with that said I am a power user and developer used to both meson and \n> autotools. Unclear what skill we should expect from the target audience \n> of test_sepgsql.\n\nHere is a new patch version.\n\nI simplified the uses of sed and awk inside the Perl script. I also \nfixed \"make installcheck\". I noticed that meson installs sepgsql.sql \ninto the wrong directory, so that's fixed also. (Many of the \ncomplications in this patch set are because sepgsql is not an extension \nbut a loose SQL script, of which it is now the only one. Maybe \nsomething to address separately.)\n\nI did end up deciding to keep the old test_sepgsql script, because it \ndoes have the documented purpose of testing existing installations. I \ndid change it so that it calls pg_regress directly, without going via \nmake, so that the dependency on make is removed.\n\nThe documentation is also updated a little bit, but I kept it to a \nminimum, because I'm not really sure how up to date the existing \ndocumentation was. It lists several steps in the test procedure that I \ndidn't need to do. Someone who knows more about the whole picture would \nneed to look at that in more detail.", "msg_date": "Tue, 27 Aug 2024 10:12:24 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Convert sepgsql tests to TAP" } ]
[ { "msg_contents": "I noticed that the reference pages for initdb and pg_ctl claim in the \nEnvironment section that libpq variables are used, which does not seem \ncorrect to me. I think this was accidentally copied when this blurb was \nadded to other pages.\n\nWhile I was checking around that, I also noticed that pg_amcheck and \npg_upgrade don't have Environment sections on their reference pages, so \nI added them. For pg_amcheck I copied the standard text for client \nprograms. pg_upgrade has its own specific list of environment variables.\n\nPatches attached. I think the first one is a bug fix.", "msg_date": "Mon, 13 May 2024 10:48:53 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "doc: some fixes for environment sections in ref pages" }, { "msg_contents": "> On 13 May 2024, at 10:48, Peter Eisentraut <[email protected]> wrote:\n\n> Patches attached.\n\nAll patches look good.\n\n> I think the first one is a bug fix.\n\nAgreed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 13 May 2024 13:02:46 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: some fixes for environment sections in ref pages" }, { "msg_contents": "On 13.05.24 13:02, Daniel Gustafsson wrote:\n>> On 13 May 2024, at 10:48, Peter Eisentraut <[email protected]> wrote:\n> \n>> Patches attached.\n> \n> All patches look good.\n> \n>> I think the first one is a bug fix.\n> \n> Agreed.\n\nCommitted, thanks.\n\n\n\n", "msg_date": "Wed, 15 May 2024 13:24:42 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: some fixes for environment sections in ref pages" } ]
[ { "msg_contents": "hi.\n\nexplain(analyze, format json, serialize, memory, costs off, Timing\noff) select * from tenk1;\n QUERY PLAN\n---------------------------------\n [\n {\n \"Plan\": {\n \"Node Type\": \"Seq Scan\",\n \"Parallel Aware\": false,\n \"Async Capable\": false,\n \"Relation Name\": \"tenk1\",\n \"Alias\": \"tenk1\",\n \"Actual Rows\": 10000,\n \"Actual Loops\": 1\n },\n \"Planning\": {\n \"Memory Used\": 23432,\n \"Memory Allocated\": 65536\n },\n \"Planning Time\": 0.290,\n \"Triggers\": [\n ],\n \"Serialization\": {\n \"Output Volume\": 1143,\n \"Format\": \"text\"\n },\n \"Execution Time\": 58.814\n }\n ]\n\nexplain(analyze, format text, serialize, memory, costs off, Timing\noff) select * from tenk1;\n QUERY PLAN\n---------------------------------------------------\n Seq Scan on tenk1 (actual rows=10000 loops=1)\n Planning:\n Memory: used=23432 bytes allocated=65536 bytes\n Planning Time: 0.289 ms\n Serialization: output=1143kB format=text\n Execution Time: 58.904 ms\n\nunder format json, \"Output Volume\": 1143,\n1143 is kiB unit, and is not the same as \"Memory Used\" or \"Memory\nAllocated\" byte unit.\n\nDo we need to convert it to byte for the non-text format option for EXPLAIN?\n\n\n", "msg_date": "Mon, 13 May 2024 17:16:24 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "explain format json, unit for serialize and memory are different." }, { "msg_contents": "> On 13 May 2024, at 11:16, jian he <[email protected]> wrote:\n\n> under format json, \"Output Volume\": 1143,\n> 1143 is kiB unit, and is not the same as \"Memory Used\" or \"Memory\n> Allocated\" byte unit.\n\nNice catch.\n\n> Do we need to convert it to byte for the non-text format option for EXPLAIN?\n\nSince json (and yaml/xml) is intended to be machine-readable I think we use a\nsingle unit for all values, and document this fact.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 13 May 2024 11:22:08 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Mon, May 13, 2024 at 11:22:08AM +0200, Daniel Gustafsson wrote:\n> Since json (and yaml/xml) is intended to be machine-readable I think we use a\n> single unit for all values, and document this fact.\n\nAgreed with the documentation gap. Another thing that could be worth\nconsidering is to add the units aside with the numerical values, say:\n\"Memory Used\": {\"value\": 23432, \"units\": \"bytes\"}\n\nThat would require changing ExplainProperty() so as the units are\nshowed in some shape, while still being readable when parsed. I\nwouldn't recommend doing that in v17, but perhaps v18 could do better?\n\nUnits are also ignored for the XML and yaml outputs.\n--\nMichael", "msg_date": "Tue, 14 May 2024 14:39:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Tue, 14 May 2024 at 17:40, Michael Paquier <[email protected]> wrote:\n>\n> On Mon, May 13, 2024 at 11:22:08AM +0200, Daniel Gustafsson wrote:\n> > Since json (and yaml/xml) is intended to be machine-readable I think we use a\n> > single unit for all values, and document this fact.\n>\n> Agreed with the documentation gap. 
Another thing that could be worth\n> considering is to add the units aside with the numerical values, say:\n> \"Memory Used\": {\"value\": 23432, \"units\": \"bytes\"}\n>\n> That would require changing ExplainProperty() so as the units are\n> showed in some shape, while still being readable when parsed. I\n> wouldn't recommend doing that in v17, but perhaps v18 could do better?\n\nI think for v17, we should consider adding a macro to explain.c to\ncalculate the KB from bytes. There are other inconsistencies that it\nwould be good to address. We normally round up to the nearest kilobyte\nwith (bytes + 1023) / 1024, but if you look at what 06286709e did, it\nseems to be rounding to the nearest without any apparent justification\nas to why. It does (metrics->bytesSent + 512) / 1024.\n\nshow_memory_counters() could be modified to use the macro and show\nkilobytes rather than bytes.\n\nDavid\n\n\n", "msg_date": "Tue, 14 May 2024 18:16:26 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Tue, 14 May 2024 at 18:16, David Rowley <[email protected]> wrote:\n> I think for v17, we should consider adding a macro to explain.c to\n> calculate the KB from bytes. There are other inconsistencies that it\n> would be good to address. We normally round up to the nearest kilobyte\n> with (bytes + 1023) / 1024, but if you look at what 06286709e did, it\n> seems to be rounding to the nearest without any apparent justification\n> as to why. It does (metrics->bytesSent + 512) / 1024.\n>\n> show_memory_counters() could be modified to use the macro and show\n> kilobytes rather than bytes.\n\nHere's a patch for that.\n\nI checked the EXPLAIN SERIALIZE thread and didn't see any mention of\nthe + 512 thing. It seems Tom added it just before committing and no\npatch ever made it to the mailing list with + 512. The final patch on\nthe list is in [1].\n\nFor the EXPLAIN MEMORY part, the bytes vs kB wasn't discussed. The\nclosest the thread came to that was what Abhijit mentioned in [2].\n\nI also adjusted some inconsistencies around spaces between the digits\nand kB. In other places in EXPLAIN we write \"100kB\" not \"100 kB\". I\nsee we print times with a space (\"Execution Time: 1.719 ms\"), so we're\nnot very consistent overall, but since the EXPLAIN MEMORY is new, it\nmakes sense to change it now to be aligned to the other kB stuff in\nexplain.c\n\nThe patch does change a long into an int64 in show_hash_info(). I\nwondered if that part should be backpatched. It does not seem very\nrobust to me to divide a Size by 1024 and expect it to fit into a\nlong. With MSVC 64 bit, sizeof(Size) == 8 and sizeof(long) == 4. I\nunderstand work_mem is limited to 2GB on that platform, but it does\nnot seem like a good reason to use a long.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAEze2WhopAFRS4xdNtma6XtYCRqydPWAg83jx8HZTowpeXzOyg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/ZaF1fB_hMqycSq-S%40toroid.org", "msg_date": "Tue, 14 May 2024 22:33:15 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." 
}, { "msg_contents": "On Tue, May 14, 2024 at 6:33 PM David Rowley <[email protected]> wrote:\n>\n> On Tue, 14 May 2024 at 18:16, David Rowley <[email protected]> wrote:\n> > I think for v17, we should consider adding a macro to explain.c to\n> > calculate the KB from bytes. There are other inconsistencies that it\n> > would be good to address. We normally round up to the nearest kilobyte\n> > with (bytes + 1023) / 1024, but if you look at what 06286709e did, it\n> > seems to be rounding to the nearest without any apparent justification\n> > as to why. It does (metrics->bytesSent + 512) / 1024.\n> >\n> > show_memory_counters() could be modified to use the macro and show\n> > kilobytes rather than bytes.\n>\n> Here's a patch for that.\n>\n> I checked the EXPLAIN SERIALIZE thread and didn't see any mention of\n> the + 512 thing. It seems Tom added it just before committing and no\n> patch ever made it to the mailing list with + 512. The final patch on\n> the list is in [1].\n>\n> For the EXPLAIN MEMORY part, the bytes vs kB wasn't discussed. The\n> closest the thread came to that was what Abhijit mentioned in [2].\n\n\n\nstatic void\nshow_memory_counters(ExplainState *es, const MemoryContextCounters\n*mem_counters)\n{\nif (es->format == EXPLAIN_FORMAT_TEXT)\n{\nExplainIndentText(es);\nappendStringInfo(es->str,\n\"Memory: used=%zukB allocated=%zukB\",\nBYTES_TO_KILOBYTES(mem_counters->totalspace - mem_counters->freespace),\nBYTES_TO_KILOBYTES(mem_counters->totalspace));\nappendStringInfoChar(es->str, '\\n');\n}\nelse\n{\nExplainPropertyInteger(\"Memory Used\", \"bytes\",\n mem_counters->totalspace - mem_counters->freespace,\n es);\nExplainPropertyInteger(\"Memory Allocated\", \"bytes\",\n mem_counters->totalspace, es);\n}\n}\n\nthe \"else\" branch, also need to apply BYTES_TO_KILOBYTES marco?\notherwise, it's inconsistent?\n\n> I also adjusted some inconsistencies around spaces between the digits\n> and kB. In other places in EXPLAIN we write \"100kB\" not \"100 kB\". I\n> see we print times with a space (\"Execution Time: 1.719 ms\"), so we're\n> not very consistent overall, but since the EXPLAIN MEMORY is new, it\n> makes sense to change it now to be aligned to the other kB stuff in\n> explain.c\n>\n> The patch does change a long into an int64 in show_hash_info(). I\n> wondered if that part should be backpatched. It does not seem very\n> robust to me to divide a Size by 1024 and expect it to fit into a\n> long. With MSVC 64 bit, sizeof(Size) == 8 and sizeof(long) == 4. I\n> understand work_mem is limited to 2GB on that platform, but it does\n> not seem like a good reason to use a long.\n>\n\nI also checked output\nfrom function show_incremental_sort_group_info and show_sort_info,\nthe \"kB\" usage is consistent.\n\n\n", "msg_date": "Tue, 14 May 2024 21:17:51 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Wed, 15 May 2024 at 01:18, jian he <[email protected]> wrote:\n> else\n> {\n> ExplainPropertyInteger(\"Memory Used\", \"bytes\",\n> mem_counters->totalspace - mem_counters->freespace,\n> es);\n> ExplainPropertyInteger(\"Memory Allocated\", \"bytes\",\n> mem_counters->totalspace, es);\n> }\n> }\n>\n> the \"else\" branch, also need to apply BYTES_TO_KILOBYTES marco?\n\nYeah, I missed that. 
Here's another patch.\n\nThanks for looking.\n\nDavid", "msg_date": "Wed, 15 May 2024 02:01:21 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Wed, May 15, 2024 at 02:01:21AM +1200, David Rowley wrote:\n> Yeah, I missed that. Here's another patch.\n> \n> Thanks for looking.\n\nThanks for bringing in a patch that makes the whole picture more\nconsistent across the board. When it comes to MEMORY, I can get\nbehind your suggestion to use kB and call it a day, while SERIALIZE\nwould apply the same conversion at 1023b.\n\nIt would be nice to document the units implied in the non-text\nformats, rather than have the users guess what these are.\n\nPerhaps Alvaro and Tom would like to chime in, as committers of\nrespectively 5de890e3610d and 06286709ee06?\n--\nMichael", "msg_date": "Wed, 15 May 2024 09:57:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Perhaps Alvaro and Tom would like to chime in, as committers of\n> respectively 5de890e3610d and 06286709ee06?\n\nNo objection here. In a green field I might argue for\nround-to-nearest instead of round-up, but it looks like we\nhave several precedents for round-up, so let's avoid changing\nthat existing behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 May 2024 21:23:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "explain (format json, analyze, wal, buffers, memory, serialize) insert\ninto tenk1 select * from tenk1 limit 1;\n QUERY PLAN\n-----------------------------------------------\n [\n {\n \"Plan\": {\n \"Node Type\": \"ModifyTable\",\n \"Operation\": \"Insert\",\n \"Parallel Aware\": false,\n \"Async Capable\": false,\n \"Relation Name\": \"tenk1\",\n \"Alias\": \"tenk1\",\n \"Startup Cost\": 0.00,\n \"Total Cost\": 0.04,\n \"Plan Rows\": 0,\n \"Plan Width\": 0,\n \"Actual Startup Time\": 0.030,\n \"Actual Total Time\": 0.030,\n \"Actual Rows\": 0,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 3,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"WAL Records\": 1,\n \"WAL FPI\": 0,\n \"WAL Bytes\": 299,\n \"Plans\": [\n {\n \"Node Type\": \"Limit\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Async Capable\": false,\n \"Startup Cost\": 0.00,\n \"Total Cost\": 0.04,\n \"Plan Rows\": 1,\n \"Plan Width\": 244,\n \"Actual Startup Time\": 0.011,\n \"Actual Total Time\": 0.011,\n \"Actual Rows\": 1,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 2,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"WAL Records\": 0,\n \"WAL FPI\": 0,\n \"WAL Bytes\": 0,\n \"Plans\": [\n {\n \"Node Type\": \"Seq Scan\",\n \"Parent Relationship\": \"Outer\",\n \"Parallel Aware\": false,\n \"Async Capable\": false,\n \"Relation 
Name\": \"tenk1\",\n \"Alias\": \"tenk1_1\",\n \"Startup Cost\": 0.00,\n \"Total Cost\": 445.00,\n \"Plan Rows\": 10000,\n \"Plan Width\": 244,\n \"Actual Startup Time\": 0.009,\n \"Actual Total Time\": 0.009,\n \"Actual Rows\": 1,\n \"Actual Loops\": 1,\n \"Shared Hit Blocks\": 2,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"WAL Records\": 0,\n \"WAL FPI\": 0,\n \"WAL Bytes\": 0\n }\n ]\n }\n ]\n },\n \"Planning\": {\n \"Shared Hit Blocks\": 0,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0,\n \"Memory Used\": 68080,\n \"Memory Allocated\": 131072\n },\n \"Planning Time\": 0.659,\n \"Triggers\": [\n ],\n \"Serialization\": {\n \"Time\": 0.000,\n \"Output Volume\": 0,\n \"Format\": \"text\",\n \"Shared Hit Blocks\": 0,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0\n },\n \"Execution Time\": 0.065\n }\n ]\n\n-------\n \"Shared Hit Blocks\": 0,\n \"Shared Read Blocks\": 0,\n \"Shared Dirtied Blocks\": 0,\n \"Shared Written Blocks\": 0,\n \"Local Hit Blocks\": 0,\n \"Local Read Blocks\": 0,\n \"Local Dirtied Blocks\": 0,\n \"Local Written Blocks\": 0,\n \"Temp Read Blocks\": 0,\n \"Temp Written Blocks\": 0\n\nthese information duplicated for json key \"Serialization\" and json key\n\"Planning\"\ni am not sure this is intended?\n\n\n", "msg_date": "Wed, 15 May 2024 09:44:25 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Wed, 15 May 2024 at 13:44, jian he <[email protected]> wrote:\n> \"Shared Hit Blocks\": 0,\n> \"Shared Read Blocks\": 0,\n> \"Shared Dirtied Blocks\": 0,\n> \"Shared Written Blocks\": 0,\n> \"Local Hit Blocks\": 0,\n> \"Local Read Blocks\": 0,\n> \"Local Dirtied Blocks\": 0,\n> \"Local Written Blocks\": 0,\n> \"Temp Read Blocks\": 0,\n> \"Temp Written Blocks\": 0\n>\n> these information duplicated for json key \"Serialization\" and json key\n> \"Planning\"\n> i am not sure this is intended?\n\nLooks ok to me. Buffers used during planning are independent from the\nbuffers used when outputting rows to the client.\n\nDavid\n\n\n", "msg_date": "Wed, 15 May 2024 14:13:00 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." 
}, { "msg_contents": "On Wed, May 15, 2024 at 10:13 AM David Rowley <[email protected]> wrote:\n>\n> On Wed, 15 May 2024 at 13:44, jian he <[email protected]> wrote:\n> > \"Shared Hit Blocks\": 0,\n> > \"Shared Read Blocks\": 0,\n> > \"Shared Dirtied Blocks\": 0,\n> > \"Shared Written Blocks\": 0,\n> > \"Local Hit Blocks\": 0,\n> > \"Local Read Blocks\": 0,\n> > \"Local Dirtied Blocks\": 0,\n> > \"Local Written Blocks\": 0,\n> > \"Temp Read Blocks\": 0,\n> > \"Temp Written Blocks\": 0\n> >\n> > these information duplicated for json key \"Serialization\" and json key\n> > \"Planning\"\n> > i am not sure this is intended?\n>\n> Looks ok to me. Buffers used during planning are independent from the\n> buffers used when outputting rows to the client.\n>\n\nlooking at serializeAnalyzeReceive.\nI am not sure which part of serializeAnalyzeReceive will update pgBufferUsage.\n\nI am looking for an example where this information under json key\n\"Serialization\" is not zero.\nSo far I have tried:\n\ncreate table s(a text);\ninsert into s select repeat('a', 1024) from generate_series(1,1024);\nexplain (format json, analyze, wal, buffers, memory, serialize, timing\noff) select * from s;\n\n\n", "msg_date": "Wed, 15 May 2024 11:40:11 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Wed, 15 May 2024 at 15:40, jian he <[email protected]> wrote:\n> I am looking for an example where this information under json key\n> \"Serialization\" is not zero.\n> So far I have tried:\n\nSomething that requires detoasting.\n\n> create table s(a text);\n> insert into s select repeat('a', 1024) from generate_series(1,1024);\n> explain (format json, analyze, wal, buffers, memory, serialize, timing\n> off) select * from s;\n\nSomething bigger than 1024 bytes or use SET STORAGE EXTERNAL or EXTENDED.\n\ncreate table s(a text);\ninsert into s select repeat('a', 1024*1024) from generate_series(1,10);\nexplain (format text, analyze, buffers, serialize, timing off) select * from s;\n\n Serialization: output=10241kB format=text\n Buffers: shared hit=36\n\nDavid\n\n\n", "msg_date": "Wed, 15 May 2024 16:44:32 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." }, { "msg_contents": "On Wed, 15 May 2024 at 13:23, Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > Perhaps Alvaro and Tom would like to chime in, as committers of\n> > respectively 5de890e3610d and 06286709ee06?\n>\n> No objection here. In a green field I might argue for\n> round-to-nearest instead of round-up, but it looks like we\n> have several precedents for round-up, so let's avoid changing\n> that existing behavior.\n\nThanks. I've pushed the patch now.\n\nDavid\n\n\n", "msg_date": "Thu, 16 May 2024 12:51:15 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: explain format json, unit for serialize and memory are different." } ]
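For reference, the rounding-up helper discussed in the EXPLAIN thread above would presumably look something like the sketch below. The name BYTES_TO_KILOBYTES and the (bytes + 1023) / 1024 arithmetic are taken from the thread itself; the exact definition that ended up in explain.c may differ:

    /* Round a byte count up to the nearest whole kilobyte. */
    #define BYTES_TO_KILOBYTES(b) (((b) + 1023) / 1024)

With that rule the 23432-byte Memory Used figure from the earlier example comes out as 23 kB (22.88 kB rounded up), matching how the other kB values in EXPLAIN are rounded.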
[ { "msg_contents": "In master, if you look at ExecHashGetHashValue() in nodeHash.c, you\ncan see that it calls ExecEvalExpr() and then manually calls the hash\nfunction on the returned value. This process is repeated once for each\nhash key. This is inefficient for a few reasons:\n\n1) ExecEvalExpr() will only deform tuples up the max varattno that's\nmentioned in the hash key. That means we might have to deform\nattributes in multiple steps, once for each hash key.\n2) ExecHashGetHashValue() is very branchy and checks if hashStrict[]\nand keep_nulls on every loop. There's also a branch to check which\nhash functions to use.\n3) foreach isn't exactly the pinnacle of efficiency either.\n\nAll of the above points can be improved by making ExprState handle\nhashing. This means we'll deform all attributes that are needed for\nhashing once, rather than incrementally once per key. This also allows\nJIT compilation of hashing ExprStates, which will make things even\nfaster.\n\nThe attached patch implements this. Here are some performance numbers.\n\n## Test 1: rows=1000 jit=0\n\n1 hash key\nmaster = 4938.5 tps\npatched = 5126.7 tps (+3.81%)\n\n2 hash keys\nmaster = 4326.4 tps\npatched = 4520.2 tps (+4.48%)\n\n3 hash keys\nmaster = 4145.5 tps\npatched = 4559.7 tps (+9.99%)\n\n## Test 2: rows = 1000000 jit=1 (with opt and inline)\n\n1 hash key\nmaster = 3.663 tps\npatched = 3.816 tps (+4.16%)\n\n2 hash keys\nmaster = 3.392 tps\npatched = 3.550 tps (+4.67%)\n\n3 hash keys\nmaster = 3.086 tps\npatched = 3.411 tps (+10.55%)\n\nBenchmark script attached\n\nNotes:\nThe ExecBuildHash32Expr() function to build the ExprState isn't called\nfrom the same location as the previous ExecInitExprList() code. The\nreason for this is that it's not possible to build the ExprState for\nhashing in ExecInitHash() because we don't yet know the jointype and\nwe need to know that because the expression ExecBuildHash32Expr()\nneeds to allow NULLs for outer join types. I've put the\nExecBuildHash32Expr() call in ExecInitHashJoin() just after we set\nhj_NullOuterTupleSlot and hj_NullOuterTupleSlot fields. I tried\nhaving this code in ExecHashTableCreate(). but that's no good as we\nonly call that during executor run, which is too late as any SubPlans\nin the hash keys need to be attributed to the correct parent. Since\nEXPLAIN shows the subplans, this needs to be done before executor run.\n\nI've not hacked on llvmjit_expr.c much before, so I'd be happy for a\ndetailed review of that code.\n\nI manually checked hashvalues between JIT and non-JIT. They matched.\nIf we ever consider JITting more granularly, it might be worth always\napplying the same jit flags to the hash exprs on either side of the\njoin. I've slight concerns about compiler bugs producing different\nhash codes. Unsure if there are non-bug reasons for them to differ on\nthe same CPU architecture.\n\nI've not looked at applications of this beyond hash join. I'm\nconsidering other executor nodes to be follow-on material.\n\nThanks to Andres Freund for mentioning this idea to me.", "msg_date": "Mon, 13 May 2024 21:23:49 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Speed up Hash Join by teaching ExprState about hashing" }, { "msg_contents": "On Mon, 13 May 2024 at 21:23, David Rowley <[email protected]> wrote:\n> In master, if you look at ExecHashGetHashValue() in nodeHash.c, you\n> can see that it calls ExecEvalExpr() and then manually calls the hash\n> function on the returned value. 
This process is repeated once for each\n> hash key. This is inefficient for a few reasons:\n>\n> 1) ExecEvalExpr() will only deform tuples up the max varattno that's\n> mentioned in the hash key. That means we might have to deform\n> attributes in multiple steps, once for each hash key.\n> 2) ExecHashGetHashValue() is very branchy and checks if hashStrict[]\n> and keep_nulls on every loop. There's also a branch to check which\n> hash functions to use.\n> 3) foreach isn't exactly the pinnacle of efficiency either.\n>\n> All of the above points can be improved by making ExprState handle\n> hashing. This means we'll deform all attributes that are needed for\n> hashing once, rather than incrementally once per key. This also allows\n> JIT compilation of hashing ExprStates, which will make things even\n> faster.\n>\n> The attached patch implements this. Here are some performance numbers.\n\nI've been doing a bit more work on this to start to add support for\nfaster hashing for hashing needs other than Hash Join. In the\nattached, I've added support to give the hash value an initial value.\nSupport for that is required to allow Hash Aggregate to work. If you\nlook at what's being done now inside BuildTupleHashTableExt(), you'll\nsee that \"hash_iv\" exists there to allow an initial hash value. This\nseems to be getting used to allow some variation in hash values\ncalculated inside parallel workers, per hashtable->hash_iv =\nmurmurhash32(ParallelWorkerNumber). One of my aims for this patch is\nto always produce the same hash value before and after the patch, so\nI've gone and implemented the equivalent functionality which can be\nenabled or disabled as required depending on the use case.\n\nI've not added support for Hash Aggregate quite yet. I did look at\ndoing that, but it seems to need quite a bit of refactoring to do it\nnicely. The problem is that BuildTupleHashTableExt() receives\nkeyColIdx with the attribute numbers to hash. The new\nExecBuildHash32Expr() function requires a List of Exprs. It looks\nlike the keyColIdx array comes directly from the planner which is many\nlayers up and would need lots of code churn of function signatures to\nchange. While I could form Vars using the keyColIdx array to populate\nthe required List of Exprs, I so far can't decide where exactly that\nshould happen. I think probably the planner should form the Expr List.\nIt seems a bit strange to be doing makeVar() in the executor.\n\nI currently think that it's fine to speed up Hash Join as phase one\nfor this patch. I can work more on improving hash value generation in\nother locations later.\n\nI'd be happy if someone else were to give this patch a review and\ntest. One part I struggled a bit with was finding a way to cast the\nSize variable down to uint32 in LLVM. I tried to add a new supported\ntype for uint32 but just couldn't get it to work. Instead, I did:\n\nv_tmp1 = LLVMBuildAnd(b, v_tmp1,\n l_sizet_const(0xffffffff), \"\");\n\nwhich works and I imagine compiled to the same code as a cast. It\njust looks a bit strange.\n\nDavid", "msg_date": "Thu, 11 Jul 2024 16:47:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up Hash Join by teaching ExprState about hashing" }, { "msg_contents": "On Sun, 11 Aug 2024 at 22:09, Alexey Dvoichenkov <[email protected]> wrote:\n> I like the idea so I started looking at this patch. 
I ran some tests,\n> the query is an aggregation over a join of two tables with 5M rows,\n> where \"columns\" is the number of join conditions. (Mostly the same as\n> in your test.) The numbers are the average query run-time in seconds.\n\nThanks for running those tests.\n\nI wondered if the hash table has 5M items that the non-predictable\nmemory access pattern when probing that table might be drowning out\nsome of the gains of producing hash values faster. I wrote the\nattached script which creates a fairly small table but probes that\ntable much more than once per hash value. I tried to do that in a way\nthat didn't read or process lots of shared buffers so as not to put\nadditional pressure on the CPU caches, which could evict cache lines\nof the hash table. I am seeing much larger performance gains from\nthat test. Up to 26% faster. Please see the attached .png file for the\nresults. I've also attached the script I used to get those results.\nThis time I tried 1-6 join columns and also included the test results\nfor jit=off, jit=on, jit optimize, jit inline for each of the 6\nqueries. You can see that with 5 and 6 columns that jit inline was\n26% faster than master, but just 14% faster with 1 column. The\nsmallest improvement was with 1 col with jit=on at just 7% faster.\n\n> - ExecHashGetHashValue, and\n> - TupleHashTableHash_internal\n>\n> .. currently rotate the initial and previous hash values regardless of\n> the NULL check. So the rotation should probably be placed before the\n> NULL check in NEXT states if you want to preserve the existing\n> behavior.\n\nThat's my mistake. I think originally I didn't see the sense in\nrotating, but you're right. I think not doing that would have (1,\nNULL) and (NULL, 1) hash to the same value. Maybe that's ok, but I\nthink it's much better not to take the risk and keep the behaviour the\nsame as master. The attached v3 patch does that. I've left the\nclient_min_messages=debug1 output in the patch for now. I checked the\nhash values match with master using a FULL OUTER JOIN with a 3-column\njoin using 1000 random INTs, 10% of them NULL.\n\nDavid", "msg_date": "Thu, 15 Aug 2024 11:36:57 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up Hash Join by teaching ExprState about hashing" }, { "msg_contents": "On Thu, 15 Aug 2024 at 19:50, Alexey Dvoichenkov <[email protected]> wrote:\n> I gave v3 another look. One tiny thing I've noticed is that you\n> removed ExecHashGetHashValue() but not its forward declaration in\n> include/executor/nodeHash.h\n\nFixed\n\n> I also reviewed the JIT code this time, it looks reasonable to\n> me. I've added names to some variables to make the IR easier to\n> read. (Probably best to squash it into your patch, if you want to\n> apply this.)\n\nThanks. I've included that.\n\nI made another complete pass over this today and I noticed that there\nwere a few cases where I wasn't properly setting resnull and resvalue\nto (Datum) 0.\n\nI'm happy with the patch now. I am aware nothing currently uses\nEEOP_HASHDATUM_SET_INITVAL, but I want to get moving with the Hash\nAggregate usages of this code fairly quickly and I'd rather get the\nExprState step code done now and not have to change it again.\n\nv4 patch attached. 
If nobody else wants to look at this then I'm\nplanning on pushing it soon.\n\nDavid", "msg_date": "Sat, 17 Aug 2024 17:14:10 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up Hash Join by teaching ExprState about hashing" } ]
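To make the per-key combining scheme discussed in this thread concrete — rotate the running hash before every key, XOR in that key's hash, and leave the running hash untouched for NULLs so that (1, NULL) and (NULL, 1) still hash differently — here is a simplified, standalone C sketch. It is not the patched ExprState code: the hash_datum_fn pointer array and the combine_hash_keys() name are stand-ins for the per-type hash support functions and deformed tuple columns that the executor actually works with.

    #include <stdint.h>
    #include <stdbool.h>

    /* stand-in for the per-type hash support function resolved at setup time */
    typedef uint32_t (*hash_datum_fn) (uint32_t value);

    static inline uint32_t
    rotate_left32(uint32_t x, int n)
    {
        return (x << n) | (x >> (32 - n));
    }

    /*
     * Combine the hashes of nkeys join-key columns into one 32-bit hash.
     * Returns false when a NULL under a strict hash operator cannot match,
     * mirroring the hashStrict/keep_nulls behaviour described above.
     */
    static bool
    combine_hash_keys(const uint32_t *values, const bool *isnull,
                      const hash_datum_fn *hashfns, const bool *strict,
                      int nkeys, bool keep_nulls, uint32_t *result)
    {
        uint32_t hashkey = 0;

        for (int i = 0; i < nkeys; i++)
        {
            /* rotate before every key, NULL or not, so key order stays significant */
            hashkey = rotate_left32(hashkey, 1);

            if (isnull[i])
            {
                if (strict[i] && !keep_nulls)
                    return false;   /* tuple cannot match anything */
                /* otherwise leave hashkey unchanged, as if hashing 0 */
            }
            else
                hashkey ^= hashfns[i](values[i]);
        }

        *result = hashkey;
        return true;
    }

The thrust of the patch is that this per-key work — including the NULL handling and the optional initial value used to vary hashes across parallel workers — becomes ExprState steps that deform the tuple once and can be JIT-compiled, instead of being interpreted key by key as in the old ExecHashGetHashValue() loop.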
[ { "msg_contents": "Hi,\n\nBookworm versions of the Debian CI images are available now [0]. The\npatches to use these images are attached.\n\n'v1-0001-Upgrade-Debian-CI-images-to-Bookworm_REL_16+.patch' patch can\nbe applied to both upstream and REL_16 and all of the tasks finish\nsuccessfully.\n\n'v1-0001-Upgrade-Debian-CI-images-to-Bookworm_REL_15.patch' patch can\nbe applied to REL_15 but it gives a compiler warning. The fix for this\nwarning is proposed here [1]. After the fix is applied, all of the\ntasks finish successfully.\n\nAny kind of feedback would be appreciated.\n\n[0] https://github.com/anarazel/pg-vm-images/pull/91\n\n[1] postgr.es/m/CAN55FZ0o9wqVoMTh_gJCmj_%2B4XbX9VXzQF8OySPZ0R1saxV3bA%40mail.gmail.com\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 13 May 2024 13:57:08 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Upgrade Debian CI images to Bookworm" }, { "msg_contents": "On 13.05.24 12:57, Nazir Bilal Yavuz wrote:\n> Bookworm versions of the Debian CI images are available now [0]. The\n> patches to use these images are attached.\n> \n> 'v1-0001-Upgrade-Debian-CI-images-to-Bookworm_REL_16+.patch' patch can\n> be applied to both upstream and REL_16 and all of the tasks finish\n> successfully.\n> \n> 'v1-0001-Upgrade-Debian-CI-images-to-Bookworm_REL_15.patch' patch can\n> be applied to REL_15 but it gives a compiler warning. The fix for this\n> warning is proposed here [1]. After the fix is applied, all of the\n> tasks finish successfully.\n> \n> Any kind of feedback would be appreciated.\n\nThese updates are very welcome and look straightforward enough.\n\nI'm not sure what the backpatching expectations of this kind of thing \nis. The history of this CI setup is relatively short, so this hasn't \nbeen stressed too much. I see that we once backpatched the macOS \nupdate, but that might have been all.\n\nIf we start backpatching this kind of thing, then this will grow as a \njob over time. We'll have 5 or 6 branches to keep up to date, with \nseveral operating systems. And once in a while we'll have to make \nadditional changes like this warning fix you mention here. I'm not sure \nhow much we want to take this on. Is there ongoing value in the CI \nsetup in backbranches?\n\nWith these patches, we could do either of the following:\n\n1) We update only master and only after it branches for PG18. (The \nupdate is a \"new feature\".)\n\n2) We update only master but do it now. (This gives us the most amount \nof buffer time before the next release.)\n\n3) We update master and PG16 now. We ignore PG15.\n\n4) We update master and PG16 now. We update PG15 whenever that warning \nis fixed.\n\n5) We update master, PG16, and PG15, but we hold all of them until the \nwarning in PG15 is fixed.\n\n6) We update all of them now and let the warning in PG15 be fixed \nindependently.\n\n\n\n", "msg_date": "Fri, 24 May 2024 16:17:37 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade Debian CI images to Bookworm" }, { "msg_contents": "Hi,\n\nOn 2024-05-24 16:17:37 +0200, Peter Eisentraut wrote:\n> I'm not sure what the backpatching expectations of this kind of thing is.\n> The history of this CI setup is relatively short, so this hasn't been\n> stressed too much. 
I see that we once backpatched the macOS update, but\n> that might have been all.\n\nI've backpatched a few other changes too.\n\n\n> If we start backpatching this kind of thing, then this will grow as a job\n> over time. We'll have 5 or 6 branches to keep up to date, with several\n> operating systems. And once in a while we'll have to make additional\n> changes like this warning fix you mention here. I'm not sure how much we\n> want to take this on. Is there ongoing value in the CI setup in\n> backbranches?\n\nI find it extremely useful to run CI on backbranches before\nbatckpatching. Enough so that I've thought about proposing backpatching CI all\nthe way.\n\nI don't think it's that much work to fix this kind of thing in the\nbackbranches. We don't need to backpatch new tasks or such. Just enough stuff\nto keep e.g. the base image the same - otherwise we end up running CI on\nunsupported distros, which doesn't help anybody.\n\n\n> With these patches, we could do either of the following:\n> 5) We update master, PG16, and PG15, but we hold all of them until the\n> warning in PG15 is fixed.\n\nI think we should apply the fix in <= 15 - IMO it's a correct compiler\nwarning, what we do right now is wrong.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 24 May 2024 10:30:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade Debian CI images to Bookworm" }, { "msg_contents": "Hi,\n\nOn 2024-05-24 10:30:00 -0700, Andres Freund wrote:\n> On 2024-05-24 16:17:37 +0200, Peter Eisentraut wrote:\n> > I'm not sure what the backpatching expectations of this kind of thing is.\n> > The history of this CI setup is relatively short, so this hasn't been\n> > stressed too much. I see that we once backpatched the macOS update, but\n> > that might have been all.\n> \n> I've backpatched a few other changes too.\n> \n> \n> > If we start backpatching this kind of thing, then this will grow as a job\n> > over time. We'll have 5 or 6 branches to keep up to date, with several\n> > operating systems. And once in a while we'll have to make additional\n> > changes like this warning fix you mention here. I'm not sure how much we\n> > want to take this on. Is there ongoing value in the CI setup in\n> > backbranches?\n> \n> I find it extremely useful to run CI on backbranches before\n> batckpatching. Enough so that I've thought about proposing backpatching CI all\n> the way.\n> \n> I don't think it's that much work to fix this kind of thing in the\n> backbranches. We don't need to backpatch new tasks or such. Just enough stuff\n> to keep e.g. the base image the same - otherwise we end up running CI on\n> unsupported distros, which doesn't help anybody.\n> \n> \n> > With these patches, we could do either of the following:\n> > 5) We update master, PG16, and PG15, but we hold all of them until the\n> > warning in PG15 is fixed.\n> \n> I think we should apply the fix in <= 15 - IMO it's a correct compiler\n> warning, what we do right now is wrong.\n\nI've now applied the guc fix to all branches and the CI changes to 15+.\n\nThanks Bilal!\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Mon, 15 Jul 2024 09:43:26 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade Debian CI images to Bookworm" }, { "msg_contents": "Hi,\n\nOn 2024-07-15 09:43:26 -0700, Andres Freund wrote:\n> I've now applied the guc fix to all branches and the CI changes to 15+.\n\nUgh. 
I see that this fails on master, because of\n\ncommit 0c3930d0768\nAuthor: Peter Eisentraut <[email protected]>\nDate: 2024-07-01 07:30:38 +0200\n \n Apply COPT to CXXFLAGS as well\n\nI hadn't seen that because of an independent failure (the macos stuff, I'll\nsend an email about it in a bit).\n\nNot sure what the best real fix here is, this is outside of our code. I'm\ninclined to just disable llvm for the compiler warning task for now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Jul 2024 11:30:59 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade Debian CI images to Bookworm" }, { "msg_contents": "Hi,\n\nOn 2024-07-15 11:30:59 -0700, Andres Freund wrote:\n> On 2024-07-15 09:43:26 -0700, Andres Freund wrote:\n> > I've now applied the guc fix to all branches and the CI changes to 15+.\n> \n> Ugh. I see that this fails on master, because of\n> \n> commit 0c3930d0768\n> Author: Peter Eisentraut <[email protected]>\n> Date: 2024-07-01 07:30:38 +0200\n> \n> Apply COPT to CXXFLAGS as well\n> \n> I hadn't seen that because of an independent failure (the macos stuff, I'll\n> send an email about it in a bit).\n> \n> Not sure what the best real fix here is, this is outside of our code. I'm\n> inclined to just disable llvm for the compiler warning task for now.\n\nOh - there's a better fix: Turns out bookworm does have llvm 16, where the\nwarning has been fixed. Upgrading the CI image to install llvm 16 should fix\nthis. Any arguments against that approach?\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Mon, 15 Jul 2024 12:37:54 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade Debian CI images to Bookworm" }, { "msg_contents": "Hi,\n\nOn 2024-07-15 12:37:54 -0700, Andres Freund wrote:\n> On 2024-07-15 11:30:59 -0700, Andres Freund wrote:\n> > On 2024-07-15 09:43:26 -0700, Andres Freund wrote:\n> > > I've now applied the guc fix to all branches and the CI changes to 15+.\n> >\n> > Ugh. I see that this fails on master, because of\n> >\n> > commit 0c3930d0768\n> > Author: Peter Eisentraut <[email protected]>\n> > Date: 2024-07-01 07:30:38 +0200\n> >\n> > Apply COPT to CXXFLAGS as well\n> >\n> > I hadn't seen that because of an independent failure (the macos stuff, I'll\n> > send an email about it in a bit).\n> >\n> > Not sure what the best real fix here is, this is outside of our code. I'm\n> > inclined to just disable llvm for the compiler warning task for now.\n>\n> Oh - there's a better fix: Turns out bookworm does have llvm 16, where the\n> warning has been fixed. Upgrading the CI image to install llvm 16 should fix\n> this. Any arguments against that approach?\n\nSpecifically, something like the attached.\n\nDue to the CI failure this is causing, I'm planning to apply this soon...\n\nArguably we could backpatch this, the warning are present on older branches\ntoo. Except that they don't cause errors, as 0c3930d0768 is only on master.\n\nGreetings,\n\nAndres", "msg_date": "Mon, 15 Jul 2024 14:35:14 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade Debian CI images to Bookworm" }, { "msg_contents": "Hi,\n\nOn 2024-07-15 14:35:14 -0700, Andres Freund wrote:\n> Specifically, something like the attached.\n>\n> Due to the CI failure this is causing, I'm planning to apply this soon...\n>\n> Arguably we could backpatch this, the warning are present on older branches\n> too. 
Except that they don't cause errors, as 0c3930d0768 is only on master.\n\nHere's a v2, to address two things:\n- there was an error in the docs build, because LLVM_CONFIG changed\n- there were also deprecation warnings in headerscheck/cpluspluscheck\n\nSo I just made the change apply a bit more widely.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 15 Jul 2024 14:58:16 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade Debian CI images to Bookworm" } ]
[ { "msg_contents": "Hi,\n\nMy collegue Konstantin Knizhnik pointed out that we fail to mark pages\nwith a non-standard page layout with page_std=false in\nRelationCopyStorageUsingBuffer when we WAL-log them. This causes us to\ninterpret the registered buffer as a standard buffer, and omit the\nhole in the page, which for FSM/VM pages covers the whole page.\n\nThe immediate effect of this bug is that replicas and primaries in a\nphysical replication system won't have the same data in their VM- and\nFSM-forks until the first VACUUM on the new database has WAL-logged\nthese pages again. Whilst not actively harmful for the VM/FSM\nsubsystems, it's definitely suboptimal.\nSecondary unwanted effects are that AMs that use the buffercache- but\nwhich don't use or update the pageheader- also won't see the main data\nlogged in WAL, thus potentially losing user data in the physical\nreplication stream or with a system crash. I've not looked for any\nsuch AMs and am unaware of any that would have this issue, but it's\nbetter to fix this.\n\n\nPFA a patch that fixes this issue, by assuming that all pages in the\nsource database utilize a non-standard page layout.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Mon, 13 May 2024 14:31:41 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "WAL_LOG CREATE DATABASE strategy broken for non-standard page layouts" }, { "msg_contents": "Matthias van de Meent <[email protected]> writes:\n> PFA a patch that fixes this issue, by assuming that all pages in the\n> source database utilize a non-standard page layout.\n\nSurely that cure is worse than the disease?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2024 10:13:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL_LOG CREATE DATABASE strategy broken for non-standard page\n layouts" }, { "msg_contents": "On Mon, 13 May 2024 at 16:13, Tom Lane <[email protected]> wrote:\n>\n> Matthias van de Meent <[email protected]> writes:\n> > PFA a patch that fixes this issue, by assuming that all pages in the\n> > source database utilize a non-standard page layout.\n>\n> Surely that cure is worse than the disease?\n\nI don't know where we would get the information whether the selected\nrelation fork's pages are standard-compliant. We could base it off of\nthe fork number (that info is available locally) but that doesn't\nguarantee much.\nFor VM and FSM-pages we know they're essentially never\nstandard-compliant (hence this thread), but for the main fork it is\nanyone's guess once the user has installed an additional AM - which we\ndon't detect nor pass through to the offending\nRelationCopyStorageUsingBuffer.\n\nAs for \"worse\", the default template database is still much smaller\nthan the working set of most databases. This will indeed regress the\nworkload a bit, but only by the fraction of holes in the page + all\nFSM/VM data.\nI think the additional WAL volume during CREATE DATABASE is worth it\nwhen the alternative is losing that data with physical\nreplication/secondary instances. Note that this does not disable page\ncompression, it just stops the logging of holes in pages; holes which\ngenerally are only a fraction of the whole database.\n\nIt's not inconceivable that this will significantly increase WAL\nvolume, but I think we should go for correctness rather than fastest\ncopy. 
If we went with fastest copy, we'd better just skip logging the\nFSM and VM forks because we're already ignoring the data of the pages,\nso why not ignore the pages themselves, too? I don't think that holds\nwater when we want to be crash-proof in CREATE DATABASE, with a full\ndata copy of the template database.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 13 May 2024 16:52:49 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WAL_LOG CREATE DATABASE strategy broken for non-standard page\n layouts" }, { "msg_contents": "On Mon, May 13, 2024 at 10:53 AM Matthias van de Meent\n<[email protected]> wrote:\n> It's not inconceivable that this will significantly increase WAL\n> volume, but I think we should go for correctness rather than fastest\n> copy.\n\nI don't think we can afford to just do this blindly for the sake of a\nhypothetical non-core AM that uses nonstandard pages. There must be\nlots of cases where the holes are large, and where the WAL volume\nwould be a multiple of what it is currently. That's a *big*\nregression.\n\n> If we went with fastest copy, we'd better just skip logging the\n> FSM and VM forks because we're already ignoring the data of the pages,\n> so why not ignore the pages themselves, too? I don't think that holds\n> water when we want to be crash-proof in CREATE DATABASE, with a full\n> data copy of the template database.\n\nThis seems like a red herring. Either assuming standard pages is a\ngood idea or it isn't, and either logging the FSM and VM forks is a\ngood idea or it isn't, but those are two separate questions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 May 2024 15:43:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL_LOG CREATE DATABASE strategy broken for non-standard page\n layouts" }, { "msg_contents": "Hi,\r\n\r\nQuick question, are there any more revisions left to be done on this patch from the previous feedback?\r\nOr should I continue with reviewing the current patch?\r\n\r\nRegards,\r\nAkshat Jaimini", "msg_date": "Sat, 14 Sep 2024 18:57:21 +0000", "msg_from": "Akshat Jaimini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL_LOG CREATE DATABASE strategy broken for non-standard page\n layouts" } ]
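For readers trying to picture the alternatives being weighed here: the question is what to pass for the page_std argument when RelationCopyStorageUsingBuffer() WAL-logs each copied block. The patch as posted passes false unconditionally, and keying the flag off the fork number was floated upthread as a cheaper middle ground. The helper below only illustrates those two choices — it is not the actual bufmgr.c code, and the enum is a local stand-in for the backend's ForkNumber.

    #include <stdbool.h>

    /* local stand-in for the backend's ForkNumber, for illustration only */
    typedef enum
    {
        MAIN_FORKNUM = 0,
        FSM_FORKNUM,
        VISIBILITYMAP_FORKNUM,
        INIT_FORKNUM
    } ForkNumberSketch;

    /*
     * Decide the page_std flag for WAL-logging one block of a template
     * database copy.
     *
     * Option A (the posted patch): never assume a standard page, so the
     * "hole" is always logged and FSM/VM page contents survive replay.
     *
     * Option B (mentioned but not endorsed in the thread): assume standard
     * pages only for the main fork; this still misreports main-fork pages
     * of AMs that use a non-standard layout.
     */
    static bool
    copy_page_is_standard(ForkNumberSketch fork, bool assume_nothing)
    {
        if (assume_nothing)
            return false;               /* option A */
        return fork == MAIN_FORKNUM;    /* option B */
    }

At the call site this would amount to something like log_newpage_buffer(buf, copy_page_is_standard(forkNum, ...)), with the trade-off that option A logs the hole of every page and so inflates WAL volume — exactly the regression objected to above.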
[ { "msg_contents": "Hi team!\n\nFirst, i want to thank you for having your hands in this. You are doing a\nfantastic and blessing job. Bless to you all!\n\nI have a special need i want to comment to you. This is not a bug, is a\nneed i have and i write here for been redirected where needed.\n\nI have to make a daily backup. The database is growing a lot per day, and\nsometimes i've had the need to recover just a table. And would be easier\nmove a 200M file with only the needed table instead of moving a 5G file\nwith all the tables i don't need, just a matter of speed.\n\nI've created a script to export every table one by one, so in case i need\nto import a table again, don't have the need to use the very big\nexportation file, but the \"tablename.sql\" file created for every table.\n\nMy hosting provider truncated my script because is very large (more than\n200 lines, each line to export one table), so i think the way i do this is\nhurting the server performance.\n\nThen my question.\n\nDo you consider useful to add a parameter (for example, --separatetables)\nso when used the exporting file process can create a different\ntablename.sql file for each table in database automatically?\n\nExample...\n\nPGHOST=\"/tmp\" PGPASSWORD=\"mydbpass\" pg_dump -U dbusername --separatetables\n-Fp --inserts dbname > \"/route/dbname.sql\"\n\nAnd if this database has tables table1...table10, then 10 files are\ncreated...\n\ndbname_table1.sql\ndbname_table2.sql\ndbname_table3.sql\n...\ndbname_table8.sql\ndbname_table9.sql\ndbname_table10.sql\n\n\nIn each file, all main parameters will be generated again. For example the\nfile dbname_table1.sql...\n\n--\n-- PostgreSQL database dump\n--\n-- Dumped from database version 10.21\n-- Dumped by pg_dump version 15.6\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET client_encoding = 'UTF8';\n...\n...\nSET default_tablespace = '';\n--\n-- Name: table1; Type: TABLE; Schema: public; Owner: dbusername\n--\nCREATE TABLE public.table1 (\n code numeric(5,0),\n name character varying(20)\n)\n\n\nI dont know if many developers have same need as me. I hope this help in\nfuture.\n\nThanks for reading me and thanks for what you've done.. You are doing fine!\nCheers!\n\n\n______________\nJuan de Jesús\n\nHi team!\nFirst, i want to thank you for having\n your hands in this. You are doing a fantastic and blessing\n job. Bless to you all!\nI have a special need i want to comment\n to you. This is not a bug, is a need i have and i write here\n for been redirected where needed.\nI have to make a daily backup. The\n database is growing a lot per day, and sometimes i've had the\n need to recover just a table. 
And would be easier move a 200M\n file with only the needed table instead of moving a 5G file\n with all the tables i don't need, just a matter of speed.\n\nI've created a script to export\n every table one by one, so in case i need to import a table\n again, don't have the need to use the very big exportation\n file, but the \"tablename.sql\" file created for every table.\nMy hosting provider truncated my script\n because is very large (more than 200 lines, each line to\n export one table), so i think the way i do this is hurting the\n server performance.\nThen my question.\nDo you consider useful to add a\n parameter (for example, --separatetables) so when used the\n exporting file process can create a different tablename.sql\n file for each table in database automatically?\nExample...\nPGHOST=\"/tmp\"\n PGPASSWORD=\"mydbpass\" pg_dump -U dbusername --separatetables\n -Fp --inserts dbname > \"/route/dbname.sql\"\nAnd if this database has tables\n table1...table10, then 10 files are created...\ndbname_table1.sql\n dbname_table2.sql\n dbname_table3.sql\n ...\n dbname_table8.sql\n dbname_table9.sql\n dbname_table10.sql\n\n\nIn each file, all main parameters will\n be generated again. For example the file dbname_table1.sql...\n\n--\n --\n PostgreSQL database dump\n --\n --\n Dumped from database version 10.21\n --\n Dumped by pg_dump version 15.6\n SET\n statement_timeout = 0;\n SET\n lock_timeout = 0;\n SET\n client_encoding = 'UTF8';\n ...\n ...\n SET\n default_tablespace = '';\n --\n -- Name:\n table1; Type: TABLE; Schema: public; Owner: dbusername\n --\n CREATE\n TABLE public.table1 (\n     code\n numeric(5,0),\n     name\n character varying(20)\n )\n\n\nI dont know if many\n developers have same need as me. I hope this help in future.\nThanks for reading me\n and thanks for what you've done.. You are doing fine! Cheers!\n\n\n\n______________\n Juan de Jesús", "msg_date": "Mon, 13 May 2024 09:01:45 -0400", "msg_from": "=?UTF-8?Q?Juan_Hern=C3=A1ndez?= <[email protected]>", "msg_from_op": true, "msg_subject": "I have an exporting need..." }, { "msg_contents": "On Tue, 14 May 2024 at 06:18, Juan Hernández <[email protected]> wrote:\n> Do you consider useful to add a parameter (for example, --separatetables) so when used the exporting file process can create a different tablename.sql file for each table in database automatically?\n>\n> Example...\n>\n> PGHOST=\"/tmp\" PGPASSWORD=\"mydbpass\" pg_dump -U dbusername --separatetables -Fp --inserts dbname > \"/route/dbname.sql\"\n>\n> And if this database has tables table1...table10, then 10 files are created...\n\npg_dump has code to figure out the dependency of objects in the\ndatabase so that the dump file produced can be restored. If one file\nwas output per table, how would you know in which order to restore\nthem? For example, you have a table with a FOREIGN KEY to reference\nsome other table, you need to restore the referenced table first.\nThat's true for both restoring the schema and restoring the data.\n\nDavid\n\n\n", "msg_date": "Tue, 14 May 2024 12:13:11 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have an exporting need..." }, { "msg_contents": "On 13/05/2024 16:01, Juan Hernández wrote:\n> Hi team!\n> \n> First, i want to thank you for having your hands in this. You are doing \n> a fantastic and blessing job. Bless to you all!\n> \n> I have a special need i want to comment to you. 
This is not a bug, is a \n> need i have and i write here for been redirected where needed.\n> \n> I have to make a daily backup. The database is growing a lot per day, \n> and sometimes i've had the need to recover just a table. And would be \n> easier move a 200M file with only the needed table instead of moving a \n> 5G file with all the tables i don't need, just a matter of speed.\n> \n> I've created a script to export every table one by one, so in case i \n> need to import a table again, don't have the need to use the very big \n> exportation file, but the \"tablename.sql\" file created for every table.\n> \n> My hosting provider truncated my script because is very large (more than \n> 200 lines, each line to export one table), so i think the way i do this \n> is hurting the server performance.\n\nSome ideas for you to explore:\n\n- Use \"pg_dump -Fcustom\" format. That still creates one large file, but \nyou can then use \"pg_restore --table=foobar\" to extract a .sql file for \nsingle table from that when restoring.\n\n- \"pg_dump -Fdirectory\" format does actually create one file per table. \nIt's in pg_dump's internal format though, so you'll still need to use \npg_restore to make sense of it.\n\n- Use rsync to copy just the changed parts between two dump.\n\n> Then my question.\n> \n> Do you consider useful to add a parameter (for example, \n> --separatetables) so when used the exporting file process can create a \n> different tablename.sql file for each table in database automatically?\n\nIt'd be tricky to restore from, as you need to restore the tables in the \nright order. I think you'd still need a \"main\" sql file that includes \nall the other files in the right order. And using the table names as \nfilenames gets tricky if the table names contain any funny characters.\n\nFor manual operations, yeah, I can see it being useful nevertheless.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 14 May 2024 09:19:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have an exporting need..." }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> On 13/05/2024 16:01, Juan Hernández wrote:\n>> Do you consider useful to add a parameter (for example, \n>> --separatetables) so when used the exporting file process can create a \n>> different tablename.sql file for each table in database automatically?\n\n> It'd be tricky to restore from, as you need to restore the tables in the \n> right order. I think you'd still need a \"main\" sql file that includes \n> all the other files in the right order. And using the table names as \n> filenames gets tricky if the table names contain any funny characters.\n\nIt's a lot worse than that, as it's entirely possible to have circular\nFK dependencies, meaning there is no \"right order\" if you think of\neach table file as self-contained DDL plus data. Other sorts of\ncircularities are possible too.\n\npg_dump deals with that hazard by splitting things up: first create\nall the tables, then load all the data, then create all the indexes\nand foreign keys. You can tell it to just emit the parts relevant to\na particular table, but it's on your head whether that's actually\ngoing to be useful in your context. 
I doubt that it's widely enough\nuseful to justify creating a special mode beyond what we already\nhave.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 May 2024 10:54:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have an exporting need..." }, { "msg_contents": "Hi team!\n\nI read all your comments and this leads me to learn more.\n\nFor me and my case would be useful, even there are other ways to solve\nthis, but I may be wrong and just have to learn more about maintenance,\nbackup and recovery tasks.\n\nWhat if when --separatetables clause is used, table definition and data are\nexported. Indexes, foreign keys and relations declarations are exported\ntoo, but commented, with an advice. Just an idea.\n\nThank you all and best regards!\n\nJuan de Jesús\n\n\n\nEl mar, 14 may 2024 a las 10:54, Tom Lane (<[email protected]>) escribió:\n\n> Heikki Linnakangas <[email protected]> writes:\n> > On 13/05/2024 16:01, Juan Hernández wrote:\n> >> Do you consider useful to add a parameter (for example,\n> >> --separatetables) so when used the exporting file process can create a\n> >> different tablename.sql file for each table in database automatically?\n>\n> > It'd be tricky to restore from, as you need to restore the tables in the\n> > right order. I think you'd still need a \"main\" sql file that includes\n> > all the other files in the right order. And using the table names as\n> > filenames gets tricky if the table names contain any funny characters.\n>\n> It's a lot worse than that, as it's entirely possible to have circular\n> FK dependencies, meaning there is no \"right order\" if you think of\n> each table file as self-contained DDL plus data. Other sorts of\n> circularities are possible too.\n>\n> pg_dump deals with that hazard by splitting things up: first create\n> all the tables, then load all the data, then create all the indexes\n> and foreign keys. You can tell it to just emit the parts relevant to\n> a particular table, but it's on your head whether that's actually\n> going to be useful in your context. I doubt that it's widely enough\n> useful to justify creating a special mode beyond what we already\n> have.\n>\n> regards, tom lane\n>\n\nHi team!I read all your comments and this leads me to learn more.For me and my case would be useful, even there are other ways to solve this, but I may be wrong and just have to learn more about maintenance, backup and recovery tasks.What if when --separatetables clause is used, table definition and data are exported. Indexes, foreign keys and relations declarations are exported too, but commented, with an advice. Just an idea.Thank you all and best regards!Juan de JesúsEl mar, 14 may 2024 a las 10:54, Tom Lane (<[email protected]>) escribió:Heikki Linnakangas <[email protected]> writes:\n> On 13/05/2024 16:01, Juan Hernández wrote:\n>> Do you consider useful to add a parameter (for example, \n>> --separatetables) so when used the exporting file process can create a \n>> different tablename.sql file for each table in database automatically?\n\n> It'd be tricky to restore from, as you need to restore the tables in the \n> right order. I think you'd still need a \"main\" sql file that includes \n> all the other files in the right order. 
And using the table names as \n> filenames gets tricky if the table names contain any funny characters.\n\nIt's a lot worse than that, as it's entirely possible to have circular\nFK dependencies, meaning there is no \"right order\" if you think of\neach table file as self-contained DDL plus data.  Other sorts of\ncircularities are possible too.\n\npg_dump deals with that hazard by splitting things up: first create\nall the tables, then load all the data, then create all the indexes\nand foreign keys.  You can tell it to just emit the parts relevant to\na particular table, but it's on your head whether that's actually\ngoing to be useful in your context.  I doubt that it's widely enough\nuseful to justify creating a special mode beyond what we already\nhave.\n\n                        regards, tom lane", "msg_date": "Mon, 20 May 2024 19:46:43 -0400", "msg_from": "=?UTF-8?Q?Juan_Hern=C3=A1ndez?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I have an exporting need..." } ]
[ { "msg_contents": "Hello,\n\nIn light of multiple threads [1-6] discussing sorting improvements, I'd like to consolidate the old (+some new) ideas as a starting point.\nIt might make sense to brain storm on a few of these ideas and maybe even identify some that are worth implementing and testing.\n\n1. Simple algorithmic ideas:\n\t- Use single-assignment insertion-sort instead of swapping\n\t- Increase insertion-sort threshold to at least 8 (possibly 10+), to be determined empirically based on current hardware\n\t- Make insertion-sort threshold customizable via template based on sort element size\n\n2. More complex/speculative algorithmic ideas:\n\t- Try counting insertion-sort loop iterations and bail after a certain limit (include presorted check in insertion-sort loop and continue presorted check from last position in separate loop after bailout)\n\t- Try binary search for presorted check (outside of insertion-sort-code)\n\t- Try binary insertion sort (if comparison costs are high)\n\t- Try partial insertion sort (include presorted check)\n\t- Try presorted check only at top-level, not on every recursive step, or if on every level than at least only for n > some threshold\n\t- Try asymmetric quick-sort partitioning\n\t- Try dual pivot quick-sort\n\t- Try switching to heap-sort dependent on recursion depth (might allow ripping out median-of-median)\n\n3. TupleSort ideas:\n\t- Use separate sort partition for NULL values to avoid null check on every comparison and to make nulls first/last trivial\n\t- Pass down non-nullness info to avoid null check and/or null-partition creation (should ideally be determined by planner)\n\t- Skip comparison of first sort key on subsequent full tuple tie-breaker comparison (unless abbreviated key)\n\t- Encode NULL directly in abbreviated key (only if no null-partitioning)\n\n4. Planner ideas:\n\t- Use pg_stats.correlation to inform sort algorithm selection for sort keys that come from sequential-scans/bitmap-heap-scans\n\t- Use n_distinct to inform sort algorithm selection (many tie-breaker comparisons necessary on multi-key sort)\n\t- Improve costing of sorts in planner considering tuple size, distribution and n_distinct\n\n[1] https://www.postgresql.org/message-id/flat/ddc4e498740a8e411c59%40zeyos.com\n[2] https://www.postgresql.org/message-id/flat/CAFBsxsHanJTsX9DNJppXJxwg3bU%2BYQ6pnmSfPM0uvYUaFdwZdQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/flat/CAApHDvoTTtoQYfp3d0kTPF6y1pjexgLwquzKmjzvjC9NCw4RGw%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/flat/CAEYLb_Xn4-6f1ofsf2qduf24dDCVHbQidt7JPpdL_RiT1zBJ6A%40mail.gmail.com\n[5] https://www.postgresql.org/message-id/flat/CAEYLb_W%2B%2BUhrcWprzG9TyBVF7Sn-c1s9oLbABvAvPGdeP2DFSQ%40mail.gmail.com\n[6] https://www.postgresql.org/message-id/flat/683635b8-381b-5b08-6069-d6a45de19a12%40enterprisedb.com#12683b7a6c566eb5b926369b5fd2d41f\n\n-- \n\nBenjamin Coutu\nhttp://www.zeyos.com\n\n\n", "msg_date": "Mon, 13 May 2024 22:43:00 +0200", "msg_from": "Benjamin Coutu <[email protected]>", "msg_from_op": true, "msg_subject": "Summary of Sort Improvement Proposals" } ]
[ { "msg_contents": "Hi,\n\nIt can be very useful to look at the log messages emitted by a larger number\nof postgres instances to see if anything unusual is happening. E.g. checking\nwhether there are an increased number of internal, IO, corruption errors (and\nLOGs too, because we emit plenty bad things as LOG) . One difficulty is that\nextensions tend to not categorize their errors. But unfortunately errors in\nextensions are hard to distinguish from errors emitted by postgres.\n\nA related issue is that it'd be useful to be able to group log messages by\nextension, to e.g. see which extensions are emitting disproportionally many\nlog messages.\n\nTherefore I'd like to collect the extension name in elog/ereport and add a\nmatching log_line_prefix escape code.\n\n\nIt's not entirely trivial to provide errfinish() with a parameter indicating\nthe extension, but it's doable:\n\n1) Have PG_MODULE_MAGIC also define a new variable for the extension name,\n empty at that point\n\n2) In internal_load_library(), look up that new variable, and fill it with a,\n mangled, libname.\n\n4) in elog.h, define a new macro depending on BUILDING_DLL (if it is set,\n we're in the server, otherwise an extension). In the backend itself, define\n it to NULL, otherwise to the variable created by PG_MODULE_MAGIC.\n\n5) In elog/ereport/errsave/... pass this new variable to\n errfinish/errsave_finish.\n\n\nI've attached a *very rough* prototype of this idea. My goal at this stage was\njust to show that it's possible, not for the code to be in a reviewable state.\n\n\nHere's e.g. what this produces with log_line_prefix='%m [%E] '\n\n2024-05-13 13:50:17.518 PDT [postgres] LOG: database system is ready to accept connections\n2024-05-13 13:50:19.138 PDT [cube] ERROR: invalid input syntax for cube at character 13\n2024-05-13 13:50:19.138 PDT [cube] DETAIL: syntax error at or near \"f\"\n2024-05-13 13:50:19.138 PDT [cube] STATEMENT: SELECT cube('foo');\n\n2024-05-13 13:43:07.484 PDT [postgres] LOG: database system is ready to accept connections\n2024-05-13 13:43:11.699 PDT [hstore] ERROR: syntax error in hstore: unexpected end of string at character 15\n2024-05-13 13:43:11.699 PDT [hstore] STATEMENT: SELECT hstore('foo');\n\n\nIt's worth pointing out that this, quite fundamentally, can only work when the\nlog message is triggered directly by the extension. If the extension code\ncalls some postgres function and that function then errors out, it'll be seens\nas being part of postgres.\n\nBut I think that's ok - they're going to be properly errcode-ified etc.\n\n\nThoughts?\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 13 May 2024 13:51:33 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "On Mon, May 13, 2024 at 5:51 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> It can be very useful to look at the log messages emitted by a larger\nnumber\n> of postgres instances to see if anything unusual is happening. E.g.\nchecking\n> whether there are an increased number of internal, IO, corruption errors\n(and\n> LOGs too, because we emit plenty bad things as LOG) . One difficulty is\nthat\n> extensions tend to not categorize their errors. But unfortunately errors\nin\n> extensions are hard to distinguish from errors emitted by postgres.\n>\n> A related issue is that it'd be useful to be able to group log messages by\n> extension, to e.g. 
see which extensions are emitting disproportionally\nmany\n> log messages.\n>\n> Therefore I'd like to collect the extension name in elog/ereport and add a\n> matching log_line_prefix escape code.\n>\n\nI liked the idea ... It is very helpful for troubleshooting problems in\nproduction.\n\n\n> It's not entirely trivial to provide errfinish() with a parameter\nindicating\n> the extension, but it's doable:\n>\n> 1) Have PG_MODULE_MAGIC also define a new variable for the extension name,\n> empty at that point\n>\n> 2) In internal_load_library(), look up that new variable, and fill it\nwith a,\n> mangled, libname.\n>\n> 4) in elog.h, define a new macro depending on BUILDING_DLL (if it is set,\n> we're in the server, otherwise an extension). In the backend itself,\ndefine\n> it to NULL, otherwise to the variable created by PG_MODULE_MAGIC.\n>\n> 5) In elog/ereport/errsave/... pass this new variable to\n> errfinish/errsave_finish.\n>\n\nThen every extension should define their own Pg_extension_filename, right?\n\n\n> I've attached a *very rough* prototype of this idea. My goal at this\nstage was\n> just to show that it's possible, not for the code to be in a reviewable\nstate.\n>\n>\n> Here's e.g. what this produces with log_line_prefix='%m [%E] '\n>\n> 2024-05-13 13:50:17.518 PDT [postgres] LOG: database system is ready to\naccept connections\n> 2024-05-13 13:50:19.138 PDT [cube] ERROR: invalid input syntax for cube\nat character 13\n> 2024-05-13 13:50:19.138 PDT [cube] DETAIL: syntax error at or near \"f\"\n> 2024-05-13 13:50:19.138 PDT [cube] STATEMENT: SELECT cube('foo');\n>\n> 2024-05-13 13:43:07.484 PDT [postgres] LOG: database system is ready to\naccept connections\n> 2024-05-13 13:43:11.699 PDT [hstore] ERROR: syntax error in hstore:\nunexpected end of string at character 15\n> 2024-05-13 13:43:11.699 PDT [hstore] STATEMENT: SELECT hstore('foo');\n>\n>\n\nWas not able to build your patch by simply:\n\n./configure --prefix=/tmp/pg\n...\nmake -j\n...\n/usr/bin/ld: ../../src/port/libpgport_srv.a(path_srv.o): warning:\nrelocation against `Pg_extension_filename' in read-only section `.text'\n/usr/bin/ld: access/brin/brin.o: in function `brininsert':\n/data/src/pg/main/src/backend/access/brin/brin.c:403: undefined reference\nto `Pg_extension_filename'\n/usr/bin/ld: access/brin/brin.o: in function `brinbuild':\n/data/src/pg/main/src/backend/access/brin/brin.c:1107: undefined reference\nto `Pg_extension_filename'\n/usr/bin/ld: access/brin/brin.o: in function `brin_summarize_range':\n/data/src/pg/main/src/backend/access/brin/brin.c:1383: undefined reference\nto `Pg_extension_filename'\n/usr/bin/ld: /data/src/pg/main/src/backend/access/brin/brin.c:1389:\nundefined reference to `Pg_extension_filename'\n/usr/bin/ld: /data/src/pg/main/src/backend/access/brin/brin.c:1434:\nundefined reference to `Pg_extension_filename'\n/usr/bin/ld:\naccess/brin/brin.o:/data/src/pg/main/src/backend/access/brin/brin.c:1450:\nmore undefined references to `Pg_extension_filename' follow\n/usr/bin/ld: warning: creating DT_TEXTREL in a PIE\ncollect2: error: ld returned 1 exit status\nmake[2]: *** [Makefile:67: postgres] Error 1\nmake[2]: Leaving directory '/data/src/pg/main/src/backend'\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake[1]: Leaving directory '/data/src/pg/main/src'\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\n\n> It's worth pointing out that this, quite fundamentally, can only work\nwhen the\n> log message is triggered directly by the extension. 
If the extension code\n> calls some postgres function and that function then errors out, it'll be\nseens\n> as being part of postgres.\n>\n> But I think that's ok - they're going to be properly errcode-ified etc.\n>\n\nHmmm, depending on the extension it can extensively call/use postgres code\nso would be nice if we can differentiate if the code is called from\nPostgres itself or from an extension.\n\nRegards,\n\n--\nFabrízio de Royes Mello\n\nOn Mon, May 13, 2024 at 5:51 PM Andres Freund <[email protected]> wrote:>> Hi,>> It can be very useful to look at the log messages emitted by a larger number> of postgres instances to see if anything unusual is happening. E.g. checking> whether there are an increased number of internal, IO, corruption errors (and> LOGs too, because we emit plenty bad things as LOG) . One difficulty is that> extensions tend to not categorize their errors. But unfortunately errors in> extensions are hard to distinguish from errors emitted by postgres.>> A related issue is that it'd be useful to be able to group log messages by> extension, to e.g. see which extensions are emitting disproportionally many> log messages.>> Therefore I'd like to collect the extension name in elog/ereport and add a> matching log_line_prefix escape code.>I liked the idea ... It is very helpful for troubleshooting problems in production.> It's not entirely trivial to provide errfinish() with a parameter indicating> the extension, but it's doable:>> 1) Have PG_MODULE_MAGIC also define a new variable for the extension name,>    empty at that point>> 2) In internal_load_library(), look up that new variable, and fill it with a,>    mangled, libname.>> 4) in elog.h, define a new macro depending on BUILDING_DLL (if it is set,>    we're in the server, otherwise an extension). In the backend itself, define>    it to NULL, otherwise to the variable created by PG_MODULE_MAGIC.>> 5) In elog/ereport/errsave/... pass this new variable to>    errfinish/errsave_finish.>Then every extension should define their own Pg_extension_filename, right?> I've attached a *very rough* prototype of this idea. My goal at this stage was> just to show that it's possible, not for the code to be in a reviewable state.>>> Here's e.g. 
what this produces with log_line_prefix='%m [%E] '>> 2024-05-13 13:50:17.518 PDT [postgres] LOG:  database system is ready to accept connections> 2024-05-13 13:50:19.138 PDT [cube] ERROR:  invalid input syntax for cube at character 13> 2024-05-13 13:50:19.138 PDT [cube] DETAIL:  syntax error at or near \"f\"> 2024-05-13 13:50:19.138 PDT [cube] STATEMENT:  SELECT cube('foo');>> 2024-05-13 13:43:07.484 PDT [postgres] LOG:  database system is ready to accept connections> 2024-05-13 13:43:11.699 PDT [hstore] ERROR:  syntax error in hstore: unexpected end of string at character 15> 2024-05-13 13:43:11.699 PDT [hstore] STATEMENT:  SELECT hstore('foo');>>Was not able to build your patch by simply:./configure --prefix=/tmp/pg...make -j.../usr/bin/ld: ../../src/port/libpgport_srv.a(path_srv.o): warning: relocation against `Pg_extension_filename' in read-only section `.text'/usr/bin/ld: access/brin/brin.o: in function `brininsert':/data/src/pg/main/src/backend/access/brin/brin.c:403: undefined reference to `Pg_extension_filename'/usr/bin/ld: access/brin/brin.o: in function `brinbuild':/data/src/pg/main/src/backend/access/brin/brin.c:1107: undefined reference to `Pg_extension_filename'/usr/bin/ld: access/brin/brin.o: in function `brin_summarize_range':/data/src/pg/main/src/backend/access/brin/brin.c:1383: undefined reference to `Pg_extension_filename'/usr/bin/ld: /data/src/pg/main/src/backend/access/brin/brin.c:1389: undefined reference to `Pg_extension_filename'/usr/bin/ld: /data/src/pg/main/src/backend/access/brin/brin.c:1434: undefined reference to `Pg_extension_filename'/usr/bin/ld: access/brin/brin.o:/data/src/pg/main/src/backend/access/brin/brin.c:1450: more undefined references to `Pg_extension_filename' follow/usr/bin/ld: warning: creating DT_TEXTREL in a PIEcollect2: error: ld returned 1 exit statusmake[2]: *** [Makefile:67: postgres] Error 1make[2]: Leaving directory '/data/src/pg/main/src/backend'make[1]: *** [Makefile:42: all-backend-recurse] Error 2make[1]: Leaving directory '/data/src/pg/main/src'make: *** [GNUmakefile:11: all-src-recurse] Error 2> It's worth pointing out that this, quite fundamentally, can only work when the> log message is triggered directly by the extension. If the extension code> calls some postgres function and that function then errors out, it'll be seens> as being part of postgres.>> But I think that's ok - they're going to be properly errcode-ified etc.>Hmmm, depending on the extension it can extensively call/use postgres code so would be nice if we can differentiate if the code is called from Postgres itself or from an extension.Regards,--Fabrízio de Royes Mello", "msg_date": "Mon, 13 May 2024 19:25:11 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Hi,\n\nOn 2024-05-13 19:25:11 -0300, Fabrízio de Royes Mello wrote:\n> On Mon, May 13, 2024 at 5:51 PM Andres Freund <[email protected]> wrote:\n> > It's not entirely trivial to provide errfinish() with a parameter\n> indicating\n> > the extension, but it's doable:\n> >\n> > 1) Have PG_MODULE_MAGIC also define a new variable for the extension name,\n> > empty at that point\n> >\n> > 2) In internal_load_library(), look up that new variable, and fill it\n> with a,\n> > mangled, libname.\n> >\n> > 4) in elog.h, define a new macro depending on BUILDING_DLL (if it is set,\n> > we're in the server, otherwise an extension). 
In the backend itself,\n> define\n> > it to NULL, otherwise to the variable created by PG_MODULE_MAGIC.\n> >\n> > 5) In elog/ereport/errsave/... pass this new variable to\n> > errfinish/errsave_finish.\n> >\n> \n> Then every extension should define their own Pg_extension_filename, right?\n\nIt'd be automatically set by postgres when loading libraries.\n\n\n> > I've attached a *very rough* prototype of this idea. My goal at this\n> stage was\n> > just to show that it's possible, not for the code to be in a reviewable\n> state.\n> >\n> >\n> > Here's e.g. what this produces with log_line_prefix='%m [%E] '\n> >\n> > 2024-05-13 13:50:17.518 PDT [postgres] LOG: database system is ready to\n> accept connections\n> > 2024-05-13 13:50:19.138 PDT [cube] ERROR: invalid input syntax for cube\n> at character 13\n> > 2024-05-13 13:50:19.138 PDT [cube] DETAIL: syntax error at or near \"f\"\n> > 2024-05-13 13:50:19.138 PDT [cube] STATEMENT: SELECT cube('foo');\n> >\n> > 2024-05-13 13:43:07.484 PDT [postgres] LOG: database system is ready to\n> accept connections\n> > 2024-05-13 13:43:11.699 PDT [hstore] ERROR: syntax error in hstore:\n> unexpected end of string at character 15\n> > 2024-05-13 13:43:11.699 PDT [hstore] STATEMENT: SELECT hstore('foo');\n> >\n> >\n> \n> Was not able to build your patch by simply:\n\nOh, turns out it only builds with meson right now. I forgot that, with\nautoconf, for some unknown reason, we only set BUILDING_DLL on some OSs.\n\nI attached a crude patch changing that.\n\n\n> > It's worth pointing out that this, quite fundamentally, can only work\n> when the\n> > log message is triggered directly by the extension. If the extension code\n> > calls some postgres function and that function then errors out, it'll be\n> seens\n> > as being part of postgres.\n> >\n> > But I think that's ok - they're going to be properly errcode-ified etc.\n> >\n> \n> Hmmm, depending on the extension it can extensively call/use postgres code\n> so would be nice if we can differentiate if the code is called from\n> Postgres itself or from an extension.\n\nI think that's not realistically possible. It's also very fuzzy what that'd\nmean. If there's a planner hook and then the query executes normally, what do\nyou report for an execution time error? And even the simpler case - should use\nof pg_stat_statements cause everything within to be logged as a\npg_stat_statement message?\n\nI think the best we can do is to actually say where the error is directly\ntriggered from.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 13 May 2024 16:02:01 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-05-13 19:25:11 -0300, Fabrízio de Royes Mello wrote:\n>> Hmmm, depending on the extension it can extensively call/use postgres code\n>> so would be nice if we can differentiate if the code is called from\n>> Postgres itself or from an extension.\n\n> I think that's not realistically possible. It's also very fuzzy what that'd\n> mean. If there's a planner hook and then the query executes normally, what do\n> you report for an execution time error? And even the simpler case - should use\n> of pg_stat_statements cause everything within to be logged as a\n> pg_stat_statement message?\n\nNot to mention that there could be more than one extension on the call\nstack. 
I think tying this statically to the ereport call site is\nfine.\n\nThe mechanism that Andres describes for sourcing the name seems a bit\novercomplex though. Why not just allow/require each extension to\nspecify its name as a constant string? We could force the matter by\nredefining PG_MODULE_MAGIC as taking an argument:\n\n\tPG_MODULE_MAGIC(\"hstore\");\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2024 19:11:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Hi,\n\nOn 2024-05-13 19:11:53 -0400, Tom Lane wrote:\n> The mechanism that Andres describes for sourcing the name seems a bit\n> overcomplex though. Why not just allow/require each extension to\n> specify its name as a constant string? We could force the matter by\n> redefining PG_MODULE_MAGIC as taking an argument:\n> \tPG_MODULE_MAGIC(\"hstore\");\n\nMostly because it seemed somewhat sad to require every extension to have\nversion-specific ifdefs around that, particularly because it's not hard for us\nto infer.\n\nI think there might be other use cases for the backend to provide \"extension\nscoped\" information, FWIW. Even just providing the full path to the extension\nlibrary could be useful.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 May 2024 16:27:53 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "On Mon, May 13, 2024 at 07:11:53PM GMT, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-05-13 19:25:11 -0300, Fabr�zio de Royes Mello wrote:\n> >> Hmmm, depending on the extension it can extensively call/use postgres code\n> >> so would be nice if we can differentiate if the code is called from\n> >> Postgres itself or from an extension.\n> \n> > I think that's not realistically possible. It's also very fuzzy what that'd\n> > mean. If there's a planner hook and then the query executes normally, what do\n> > you report for an execution time error? And even the simpler case - should use\n> > of pg_stat_statements cause everything within to be logged as a\n> > pg_stat_statement message?\n> \n> Not to mention that there could be more than one extension on the call\n> stack. I think tying this statically to the ereport call site is\n> fine.\n> \n> The mechanism that Andres describes for sourcing the name seems a bit\n> overcomplex though. Why not just allow/require each extension to\n> specify its name as a constant string? We could force the matter by\n> redefining PG_MODULE_MAGIC as taking an argument:\n> \n> \tPG_MODULE_MAGIC(\"hstore\");\n\nFTR there was a proposal at [1] some time ago that could be used for that need\n(and others), I thought it could be good to mention it just in case. That\nwould obviously only work if all extensions uses that framework.\n\n[1] https://www.postgresql.org/message-id/flat/3207907.AWbSqkKDnR%40aivenronan\n\n\n", "msg_date": "Tue, 14 May 2024 07:28:03 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "On 14.05.24 01:11, Tom Lane wrote:\n> The mechanism that Andres describes for sourcing the name seems a bit\n> overcomplex though. Why not just allow/require each extension to\n> specify its name as a constant string? 
We could force the matter by\n> redefining PG_MODULE_MAGIC as taking an argument:\n> \n> \tPG_MODULE_MAGIC(\"hstore\");\n\nWe kind of already have something like this, for NLS. If you look for \npg_bindtextdomain(TEXTDOMAIN) and ereport_domain(), this information \nalready trickles into the vicinity of the error data. Maybe the same \nthing could just be used for this, by wiring up the macros a bit \ndifferently.\n\n\n\n", "msg_date": "Wed, 15 May 2024 17:34:06 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 14.05.24 01:11, Tom Lane wrote:\n>> The mechanism that Andres describes for sourcing the name seems a bit\n>> overcomplex though. Why not just allow/require each extension to\n>> specify its name as a constant string? We could force the matter by\n>> redefining PG_MODULE_MAGIC as taking an argument:\n>> PG_MODULE_MAGIC(\"hstore\");\n\n> We kind of already have something like this, for NLS. If you look for \n> pg_bindtextdomain(TEXTDOMAIN) and ereport_domain(), this information \n> already trickles into the vicinity of the error data. Maybe the same \n> thing could just be used for this, by wiring up the macros a bit \n> differently.\n\nHmm, cute idea, but it'd only help for extensions that are\nNLS-enabled. Which I bet is a tiny fraction of the population.\nSo far as I can find, we don't even document how to set up\nTEXTDOMAIN for an extension --- you have to cargo-cult the\nmacro definition from some in-core extension.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 May 2024 11:50:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "On 05/15/24 11:50, Tom Lane wrote:\n> Hmm, cute idea, but it'd only help for extensions that are\n> NLS-enabled. Which I bet is a tiny fraction of the population.\n> So far as I can find, we don't even document how to set up\n> TEXTDOMAIN for an extension --- you have to cargo-cult the\n\nBut I'd bet, within the fraction of the population that does use it,\nit is already a short string that looks a whole lot like the name\nof the extension. So maybe enhancing the documentation and making it\neasy to set up would achieve much of the objective here.\n\nCould PGXS be made to supply the extension name as TEXTDOMAIN when\nbuilding code that does not otherwise define it, and would that have\nany ill effect on the otherwise not-NLS-enabled code? Would the worst-\ncase effect be a failed search for a nonexistent .mo file, followed by\noutput of the untranslated message as before?\n\nAt first glance, it appears elog will apply PG_TEXTDOMAIN(\"postgres\")\nin an extension that does not otherwise define TEXTDOMAIN. 
But I assume\nthe usual effect of that is already a failed lookup followed by output of\nthe untranslated message, except in the case of the out-of-core extension\nusing a message matching a PG_TEXTDOMAIN(\"postgres\") translation.\n\nIf that case is considered unexpected, or actively discouraged, perhaps\ndefining TEXTDOMAIN in an otherwise not-NLS-enabled extension could be\nrelatively painless.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 15 May 2024 12:54:45 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Hi,\n\nOn 2024-05-15 12:54:45 -0400, Chapman Flack wrote:\n> On 05/15/24 11:50, Tom Lane wrote:\n> > Hmm, cute idea, but it'd only help for extensions that are\n> > NLS-enabled. Which I bet is a tiny fraction of the population.\n> > So far as I can find, we don't even document how to set up\n> > TEXTDOMAIN for an extension --- you have to cargo-cult the\n> \n> But I'd bet, within the fraction of the population that does use it,\n> it is already a short string that looks a whole lot like the name\n> of the extension. So maybe enhancing the documentation and making it\n> easy to set up would achieve much of the objective here.\n\nThe likely outcome would IMO be that some extensions will have the data,\nothers not. Whereas inferring the information from our side will give you\nsomething reliable.\n\nBut I also just don't think it's something that architecturally fits together\nthat well. If we either had TEXTDOMAIN reliably set across extensions or it'd\narchitecturally be pretty, I'd go for it, but imo it's neither.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 May 2024 10:07:58 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-05-15 12:54:45 -0400, Chapman Flack wrote:\n>> But I'd bet, within the fraction of the population that does use it,\n>> it is already a short string that looks a whole lot like the name\n>> of the extension. So maybe enhancing the documentation and making it\n>> easy to set up would achieve much of the objective here.\n\n> The likely outcome would IMO be that some extensions will have the data,\n> others not. Whereas inferring the information from our side will give you\n> something reliable.\n> But I also just don't think it's something that architecturally fits together\n> that well. 
If we either had TEXTDOMAIN reliably set across extensions or it'd\n> architecturally be pretty, I'd go for it, but imo it's neither.\n\nThere is one advantage over my suggestion of changing PG_MODULE_MAGIC:\nif we tell people to write\n\n PG_MODULE_MAGIC;\n #undef TEXTDOMAIN\n #define TEXTDOMAIN PG_TEXTDOMAIN(\"hstore\")\n\nthen that's 100% backwards compatible and they don't need any\nversion-testing ifdef's.\n\nI still think that the kind of infrastructure Andres proposes\nis way overkill compared to the value, plus it's almost certainly\ngoing to have a bunch of platform-specific problems to solve.\nSo I think Peter's thought is worth pursuing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 May 2024 13:45:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "On 05/15/24 13:45, Tom Lane wrote:\n> if we tell people to write\n> \n> PG_MODULE_MAGIC;\n> #undef TEXTDOMAIN\n> #define TEXTDOMAIN PG_TEXTDOMAIN(\"hstore\")\n> \n> then that's 100% backwards compatible and they don't need any\n> version-testing ifdef's.\n\nOT for this thread, but related: supposing out-of-core extensions\nparticipate increasingly in NLS, would they really want to use\nthe PG_TEXTDOMAIN macro?\n\nThat munges the supplied domain name with PG's major version and\n.so version numbers.\n\nWere such versioning wanted for an out-of-core extension's message\ncatalogs, wouldn't the extension's own versioning be better suited?\n\nRegards,\n-Chap\n\n\n\n", "msg_date": "Wed, 15 May 2024 13:58:54 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Hi,\n\nOn 2024-05-15 13:45:30 -0400, Tom Lane wrote:\n> There is one advantage over my suggestion of changing PG_MODULE_MAGIC:\n> if we tell people to write\n> \n> PG_MODULE_MAGIC;\n> #undef TEXTDOMAIN\n> #define TEXTDOMAIN PG_TEXTDOMAIN(\"hstore\")\n> \n> then that's 100% backwards compatible and they don't need any\n> version-testing ifdef's.\n> \n> I still think that the kind of infrastructure Andres proposes\n> is way overkill compared to the value, plus it's almost certainly\n> going to have a bunch of platform-specific problems to solve.\n\nMaybe I missing something here. Even adding those two lines to the extensions\nin core and contrib is going to end up being more lines than what I proposed?\n\nWhat portability issues do you forsee? We already look up the same symbol in\nall the shared libraries (\"Pg_magic_func\"), so we know that we can deal with\nduplicate function names. Are you thinking that somehow we'd end up with\nsymbol interposition or something?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 May 2024 14:14:18 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> What portability issues do you forsee? We already look up the same symbol in\n> all the shared libraries (\"Pg_magic_func\"), so we know that we can deal with\n> duplicate function names. Are you thinking that somehow we'd end up with\n> symbol interposition or something?\n\nNo, it's the dependence on the physical library file name that's\nbothering me. 
Maybe that won't be an issue, but I foresee requests\nlike \"would you please case-fold it\" or \"the extension-trimming rule\nisn't quite right\", etc.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 May 2024 17:24:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" }, { "msg_contents": "On 15.05.24 17:50, Tom Lane wrote:\n>> We kind of already have something like this, for NLS. If you look for\n>> pg_bindtextdomain(TEXTDOMAIN) and ereport_domain(), this information\n>> already trickles into the vicinity of the error data. Maybe the same\n>> thing could just be used for this, by wiring up the macros a bit\n>> differently.\n> Hmm, cute idea, but it'd only help for extensions that are\n> NLS-enabled. Which I bet is a tiny fraction of the population.\n> So far as I can find, we don't even document how to set up\n> TEXTDOMAIN for an extension --- you have to cargo-cult the\n> macro definition from some in-core extension.\n\nYeah, the whole thing is a bit mysterious, and we don't need to use the \nexact mechanism we have now.\n\nBut abstractly, we should only have to specify the, uh, domain of the \nlog messages once. Whether that is used for building a message catalog \nor tagging the server log, those are just two different downstream uses \nof the same piece of information.\n\n\n\n", "msg_date": "Thu, 16 May 2024 13:43:10 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding the extension name to EData / log_line_prefix" } ]
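For readers following along, the pattern Tom sketches above would look roughly like this in a complete extension source file. This is only an illustrative sketch of the #undef/#define TEXTDOMAIN idea, not a committed API: the function name extension_demo, the choice of errcode, and the message text are invented for the example, and "hstore" merely stands in for whatever the module is actually called.

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    /*
     * Tag every ereport()/elog() in this translation unit with the
     * extension's own domain instead of the backend default.
     */
    #undef TEXTDOMAIN
    #define TEXTDOMAIN PG_TEXTDOMAIN("hstore")

    PG_FUNCTION_INFO_V1(extension_demo);

    Datum
    extension_demo(PG_FUNCTION_ARGS)
    {
        /*
         * The domain is carried in the ErrorData for this report, so a
         * log_line_prefix escape (or an EData field) could surface the
         * module's identity from here.
         */
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("demonstration error raised from the extension")));

        PG_RETURN_NULL();        /* not reached */
    }

The attraction, as noted upthread, is that this compiles unchanged against existing releases and needs no version-testing ifdef's; the cost is that it only covers extensions whose authors remember to add the two lines, whereas deriving the name from the library file covers everything but drags in the file-naming questions Tom raises.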
[ { "msg_contents": "I noticed that we (kind of) accept underscores in positional parameters.\nFor example, this works:\n\n => PREPARE p1 AS SELECT $1_2;\n PREPARE\n => EXECUTE p1 (123);\n ?column?\n ----------\n 123\n (1 row)\n\nParameter $1_2 is taken as $1 because in rule {param} in scan.l we get\nthe parameter number with atol which stops at the underscore. That's a\nregression in faff8f8e47f. Before that commit, $1_2 resulted in\n\"ERROR: trailing junk after parameter\".\n\nI can't tell which fix is the way to go: (1) accept underscores without\nusing atol, or (2) just forbid underscores. Any ideas?\n\natol can be replaced with pg_strtoint32_safe to handle the underscores.\nThis also avoids atol's undefined behavior on overflows. AFAICT,\npositional parameters are not part of the SQL standard, so nothing\nprevents us from accepting underscores here as well. The attached patch\ndoes that and also adds a test case.\n\nBut reverting {param} to its old form to forbid underscores also makes\nsense. That is:\n\n param\t\t\t\\${decdigit}+\n param_junk\t\t\\${decdigit}+{ident_start}\n\nIt seems very unlikely that anybody uses that many parameters and still\ncares about readability to use underscores. But maybe users simply\nexpect that underscores are valid here as well.\n\n-- \nErik", "msg_date": "Tue, 14 May 2024 05:18:24 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Underscore in positional parameters?" }, { "msg_contents": "On Tue, May 14, 2024 at 05:18:24AM +0200, Erik Wienhold wrote:\n> Parameter $1_2 is taken as $1 because in rule {param} in scan.l we get\n> the parameter number with atol which stops at the underscore. That's a\n> regression in faff8f8e47f. Before that commit, $1_2 resulted in\n> \"ERROR: trailing junk after parameter\".\n\nIndeed, the behavior of HEAD is confusing. \"1_2\" means 12 as a\nconstant in a query, not 1, but HEAD implies 1 in the context of\nPREPARE here.\n\n> I can't tell which fix is the way to go: (1) accept underscores without\n> using atol, or (2) just forbid underscores. Any ideas?\n\nDoes the SQL specification tell anything about the way parameters\nshould be marked? Not everything out there uses dollar-marked\nparameters, so I guess that the answer to my question is no. My take\nis all these cases should be rejected for params, only apply to\nnumeric and integer constants in the queries.\n\n> atol can be replaced with pg_strtoint32_safe to handle the underscores.\n> This also avoids atol's undefined behavior on overflows. AFAICT,\n> positional parameters are not part of the SQL standard, so nothing\n> prevents us from accepting underscores here as well. The attached patch\n> does that and also adds a test case.\n> \n> But reverting {param} to its old form to forbid underscores also makes\n> sense. That is:\n> \n> param\t\t\t\\${decdigit}+\n> param_junk\t\t\\${decdigit}+{ident_start}\n> \n> It seems very unlikely that anybody uses that many parameters and still\n> cares about readability to use underscores. But maybe users simply\n> expect that underscores are valid here as well.\n\nAdding Dean in CC as the committer of faff8f8e47f, Peter E for the SQL\nspecification part, and an open item.\n--\nMichael", "msg_date": "Tue, 14 May 2024 15:43:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" 
}, { "msg_contents": "On Tue, 14 May 2024 at 07:43, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, May 14, 2024 at 05:18:24AM +0200, Erik Wienhold wrote:\n> > Parameter $1_2 is taken as $1 because in rule {param} in scan.l we get\n> > the parameter number with atol which stops at the underscore. That's a\n> > regression in faff8f8e47f. Before that commit, $1_2 resulted in\n> > \"ERROR: trailing junk after parameter\".\n>\n> Indeed, the behavior of HEAD is confusing. \"1_2\" means 12 as a\n> constant in a query, not 1, but HEAD implies 1 in the context of\n> PREPARE here.\n>\n> > I can't tell which fix is the way to go: (1) accept underscores without\n> > using atol, or (2) just forbid underscores. Any ideas?\n>\n> Does the SQL specification tell anything about the way parameters\n> should be marked? Not everything out there uses dollar-marked\n> parameters, so I guess that the answer to my question is no. My take\n> is all these cases should be rejected for params, only apply to\n> numeric and integer constants in the queries.\n>\n> Adding Dean in CC as the committer of faff8f8e47f, Peter E for the SQL\n> specification part, and an open item.\n\nI'm sure that this wasn't intentional -- I think we just failed to\nnotice that \"param\" also uses \"decinteger\" in the scanner. Taking a\nquick look, there don't appear to be any other uses of \"decinteger\",\nso at least it only affects params.\n\nUnless the spec explicitly says otherwise, I agree that we should\nreject this, as we used to do, and add a comment saying that it's\nintentionally not supported. I can't believe it would ever be useful,\nand the current behaviour is clearly broken.\n\nI've moved this to \"Older bugs affecting stable branches\", since it\ncame in with v16.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 14 May 2024 10:51:41 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Tue, 14 May 2024 at 07:43, Michael Paquier <[email protected]> wrote:\n>> On Tue, May 14, 2024 at 05:18:24AM +0200, Erik Wienhold wrote:\n>>> Parameter $1_2 is taken as $1 because in rule {param} in scan.l we get\n>>> the parameter number with atol which stops at the underscore. That's a\n>>> regression in faff8f8e47f. Before that commit, $1_2 resulted in\n>>> \"ERROR: trailing junk after parameter\".\n\n> I'm sure that this wasn't intentional -- I think we just failed to\n> notice that \"param\" also uses \"decinteger\" in the scanner. Taking a\n> quick look, there don't appear to be any other uses of \"decinteger\",\n> so at least it only affects params.\n\n> Unless the spec explicitly says otherwise, I agree that we should\n> reject this, as we used to do, and add a comment saying that it's\n> intentionally not supported. I can't believe it would ever be useful,\n> and the current behaviour is clearly broken.\n\n+1, let's put this back the way it was.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 May 2024 10:40:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" 
}, { "msg_contents": "On 2024-05-14 16:40 +0200, Tom Lane wrote:\n> Dean Rasheed <[email protected]> writes:\n> > On Tue, 14 May 2024 at 07:43, Michael Paquier <[email protected]> wrote:\n> >> On Tue, May 14, 2024 at 05:18:24AM +0200, Erik Wienhold wrote:\n> >>> Parameter $1_2 is taken as $1 because in rule {param} in scan.l we get\n> >>> the parameter number with atol which stops at the underscore. That's a\n> >>> regression in faff8f8e47f. Before that commit, $1_2 resulted in\n> >>> \"ERROR: trailing junk after parameter\".\n> \n> > I'm sure that this wasn't intentional -- I think we just failed to\n> > notice that \"param\" also uses \"decinteger\" in the scanner. Taking a\n> > quick look, there don't appear to be any other uses of \"decinteger\",\n> > so at least it only affects params.\n> \n> > Unless the spec explicitly says otherwise, I agree that we should\n> > reject this, as we used to do, and add a comment saying that it's\n> > intentionally not supported. I can't believe it would ever be useful,\n> > and the current behaviour is clearly broken.\n> \n> +1, let's put this back the way it was.\n\nI split the change in two independent patches:\n\nPatch 0001 changes rules param and param_junk to only accept digits 0-9.\n\nPatch 0002 replaces atol with pg_strtoint32_safe in the backend parser\nand strtoint in ECPG. This fixes overflows like:\n\n => PREPARE p1 AS SELECT $4294967297; -- same as $1\n PREPARE\n => EXECUTE p1 (123);\n ?column?\n ----------\n 123\n (1 row)\n\n => PREPARE p2 AS SELECT $2147483648;\n ERROR: there is no parameter $-2147483648\n LINE 1: PREPARE p2 AS SELECT $2147483648;\n\nIt now returns this error:\n\n => PREPARE p1 AS SELECT $4294967297;\n ERROR: parameter too large at or near $4294967297\n\n => PREPARE p2 AS SELECT $2147483648;\n ERROR: parameter too large at or near $2147483648\n\n-- \nErik", "msg_date": "Tue, 14 May 2024 18:07:51 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On Tue, May 14, 2024 at 10:51:41AM +0100, Dean Rasheed wrote:\n> I've moved this to \"Older bugs affecting stable branches\", since it\n> came in with v16.\n\nOops, thanks for fixing. I've somewhat missed that b2d47928908d was\nin REL_16_STABLE.\n--\nMichael", "msg_date": "Wed, 15 May 2024 09:46:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On Tue, May 14, 2024 at 06:07:51PM +0200, Erik Wienhold wrote:\n> I split the change in two independent patches:\n\nThe split makes sense to me.\n\n> Patch 0001 changes rules param and param_junk to only accept digits 0-9.\n\n-param\t\t\t\\${decinteger}\n-param_junk\t\t\\${decinteger}{ident_start}\n+/* Positional parameters don't accept underscores. */\n+param\t\t\t\\${decdigit}+\n+param_junk\t\t\\${decdigit}+{ident_start}\n\nscan.l, psqlscan.l and pgc.l are the three files impacted, so that's\ngood to me.\n\n> Patch 0002 replaces atol with pg_strtoint32_safe in the backend parser\n> and strtoint in ECPG. This fixes overflows like:\n> \n> => PREPARE p1 AS SELECT $4294967297; -- same as $1\n> PREPARE\n>\n> It now returns this error:\n> \n> => PREPARE p1 AS SELECT $4294967297;\n> ERROR: parameter too large at or near $4294967297\n\nThis one is a much older problem, though. 
What you are doing is an\nimprovement, still I don't see a huge point in backpatching that based\non the lack of complaints with these overflows in the yyac paths.\n\n+ if (errno == ERANGE)\n+ mmfatal(PARSE_ERROR, \"parameter too large\"); \n\nKnowong that this is working on decdigits, an ERANGE check should be\nenough, indeed.\n--\nMichael", "msg_date": "Wed, 15 May 2024 13:27:10 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 14.05.24 18:07, Erik Wienhold wrote:\n> Patch 0001 changes rules param and param_junk to only accept digits 0-9.\n\nI have committed this patch to PG16 and master.\n\nI was a little bit on the fence about what the behavior should be, but I \nchecked Perl for comparison:\n\nprint 1000; # ok\nprint 1_000; # ok\nprint $1000; # ok\nprint $1_000; # error\n\nSo this seems alright.\n\n> Patch 0002 replaces atol with pg_strtoint32_safe in the backend parser\n> and strtoint in ECPG. This fixes overflows like:\n\nSeems like a good idea, but as was said, this is an older issue, so \nlet's look at that separately.\n\n\n\n", "msg_date": "Wed, 15 May 2024 13:59:36 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On Wed, May 15, 2024 at 01:59:36PM +0200, Peter Eisentraut wrote:\n> On 14.05.24 18:07, Erik Wienhold wrote:\n>> Patch 0002 replaces atol with pg_strtoint32_safe in the backend parser\n>> and strtoint in ECPG. This fixes overflows like:\n> \n> Seems like a good idea, but as was said, this is an older issue, so let's\n> look at that separately.\n\nHmm, yeah. I would be really tempted to fix that now.\n\nNow, it has been this way for ages, and with my RMT hat on (aka I need\nto show the example), I'd suggest to wait for when the v18 branch\nopens as there is no urgency. I'm OK to apply it myself at the end,\nthe patch is a good idea.\n--\nMichael", "msg_date": "Thu, 16 May 2024 08:11:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 16.05.24 01:11, Michael Paquier wrote:\n> On Wed, May 15, 2024 at 01:59:36PM +0200, Peter Eisentraut wrote:\n>> On 14.05.24 18:07, Erik Wienhold wrote:\n>>> Patch 0002 replaces atol with pg_strtoint32_safe in the backend parser\n>>> and strtoint in ECPG. This fixes overflows like:\n>>\n>> Seems like a good idea, but as was said, this is an older issue, so let's\n>> look at that separately.\n> \n> Hmm, yeah. I would be really tempted to fix that now.\n> \n> Now, it has been this way for ages, and with my RMT hat on (aka I need\n> to show the example), I'd suggest to wait for when the v18 branch\n> opens as there is no urgency. I'm OK to apply it myself at the end,\n> the patch is a good idea.\n\nOn this specific patch, maybe reword \"parameter too large\" to \"parameter \nnumber too large\".\n\nAlso, I was bemused by the use of atol(), which is notoriously \nunportable (sizeof(long)). So I poked around and found more places that \nmight need fixing. I'm attaching a patch here with annotations too look \nat later.", "msg_date": "Thu, 16 May 2024 08:41:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" 
}, { "msg_contents": "On Thu, May 16, 2024 at 08:41:11AM +0200, Peter Eisentraut wrote:\n> On this specific patch, maybe reword \"parameter too large\" to \"parameter\n> number too large\".\n\nWFM here.\n\n> Also, I was bemused by the use of atol(), which is notoriously unportable\n> (sizeof(long)). So I poked around and found more places that might need\n> fixing. I'm attaching a patch here with annotations too look at later.\n\nYeah atoXX calls have been funky in the tree for some time. This\nreminds this thread, somewhat:\nhttps://www.postgresql.org/message-id/CALAY4q8be6Qw_2J%3DzOp_v1X-zfMBzvVMkAfmMYv%3DUkr%3D2hPcFQ%40mail.gmail.com\n\nThe issue is also that there is no \"safe\" parsing alternative for 64b\nintegers in the frontend (as you know long is 32b in Windows, which is\nwhy I'd encourage ripping it out as much as we can). This may be\nbetter as a complementary of strtoint() in src/common/string.c. Note\nas well strtoint64() in pgbench.c. I think I have a patch lying\naround, actually.. \n--\nMichael", "msg_date": "Fri, 17 May 2024 09:06:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 2024-05-17 02:06 +0200, Michael Paquier wrote:\n> On Thu, May 16, 2024 at 08:41:11AM +0200, Peter Eisentraut wrote:\n> > On this specific patch, maybe reword \"parameter too large\" to \"parameter\n> > number too large\".\n> \n> WFM here.\n\nDone in v3.\n\nI noticed this compiler warning with my previous patch:\n\n scan.l:997:41: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n 997 | ErrorSaveContext escontext = {T_ErrorSaveContext};\n | ^~~~~~~~~~~~~~~~\n\nI thought that I had to factor this out into a function similar to\nprocess_integer_literal (which also uses ErrorSaveContext). But moving\nthat declaration to the start of the {param} action was enough in the\nend.\n\nWhile trying out the refactoring, I noticed two small things that can be\nfixed as well in scan.l:\n\n* Prototype and definition of addunicode do not match. The prototype\n uses yyscan_t while the definition uses core_yyscan_t.\n\n* Parameter base of process_integer_literal is unused.\n\nBut those should be one a separate thread, right, even for minor fixes?\n\n-- \nErik", "msg_date": "Sat, 18 May 2024 03:31:49 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "Hello Erik,\n\n18.05.2024 04:31, Erik Wienhold wrote:\n> On 2024-05-17 02:06 +0200, Michael Paquier wrote:\n>> On Thu, May 16, 2024 at 08:41:11AM +0200, Peter Eisentraut wrote:\n>>> On this specific patch, maybe reword \"parameter too large\" to \"parameter\n>>> number too large\".\n>> WFM here.\n> Done in v3.\n\nThank you for working on this!\n\nI encountered anomalies that you address with this patch too.\nAnd I can confirm that it fixes most cases, but there is another one:\nSELECT $300000000 \\bind 'foo' \\g\nERROR:  invalid memory alloc request size 1200000000\n\nMaybe you would find this worth fixing as well.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 19 May 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" 
}, { "msg_contents": "On 2024-05-19 07:00 +0200, Alexander Lakhin wrote:\n> I encountered anomalies that you address with this patch too.\n> And I can confirm that it fixes most cases, but there is another one:\n> SELECT $300000000 \\bind 'foo' \\g\n> ERROR:  invalid memory alloc request size 1200000000\n> \n> Maybe you would find this worth fixing as well.\n\nYes, that error message is not great. In variable_paramref_hook we\ncheck paramno > INT_MAX/sizeof(Oid) when in fact MaxAllocSize/sizeof(Oid)\nis the more appropriate limit to avoid that unspecific alloc size error.\n\nFixed in v4 with a separate patch because it's unrelated to the param\nnumber parsing. But it fits nicely into the broader issue on the upper\nlimit for param numbers. Note that $268435455 is still the largest\npossible param number ((2^30-1)/4) and that we just return a more\nuser-friendly error message for params beyond that limit.\n\n-- \nErik", "msg_date": "Sun, 19 May 2024 16:43:39 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On Sun, May 19, 2024 at 10:43 PM Erik Wienhold <[email protected]> wrote:\n>\n> On 2024-05-19 07:00 +0200, Alexander Lakhin wrote:\n> > I encountered anomalies that you address with this patch too.\n> > And I can confirm that it fixes most cases, but there is another one:\n> > SELECT $300000000 \\bind 'foo' \\g\n> > ERROR: invalid memory alloc request size 1200000000\n> >\n> > Maybe you would find this worth fixing as well.\n>\n> Yes, that error message is not great. In variable_paramref_hook we\n> check paramno > INT_MAX/sizeof(Oid) when in fact MaxAllocSize/sizeof(Oid)\n> is the more appropriate limit to avoid that unspecific alloc size error.\n>\n> Fixed in v4 with a separate patch because it's unrelated to the param\n> number parsing. But it fits nicely into the broader issue on the upper\n> limit for param numbers. Note that $268435455 is still the largest\n> possible param number ((2^30-1)/4) and that we just return a more\n> user-friendly error message for params beyond that limit.\n>\n\nhi, one minor issue:\n\n/* Check parameter number is in range */\nif (paramno <= 0 || paramno > MaxAllocSize / sizeof(Oid))\nereport(ERROR,\n(errcode(ERRCODE_UNDEFINED_PARAMETER),\nerrmsg(\"there is no parameter $%d\", paramno),\nparser_errposition(pstate, pref->location)));\n\nif paramno <= 0 then \"there is no parameter $%d\" makes sense to me.\n\nbut, if paramno > 0 why not just say, we can only allow MaxAllocSize /\nsizeof(Oid) number of parameters.\n\n\n", "msg_date": "Mon, 20 May 2024 09:26:11 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 2024-05-20 03:26 +0200, jian he wrote:\n> On Sun, May 19, 2024 at 10:43 PM Erik Wienhold <[email protected]> wrote:\n> >\n> > On 2024-05-19 07:00 +0200, Alexander Lakhin wrote:\n> > > I encountered anomalies that you address with this patch too.\n> > > And I can confirm that it fixes most cases, but there is another one:\n> > > SELECT $300000000 \\bind 'foo' \\g\n> > > ERROR: invalid memory alloc request size 1200000000\n> > >\n> > > Maybe you would find this worth fixing as well.\n> >\n> > Yes, that error message is not great. 
In variable_paramref_hook we\n> > check paramno > INT_MAX/sizeof(Oid) when in fact MaxAllocSize/sizeof(Oid)\n> > is the more appropriate limit to avoid that unspecific alloc size error.\n> >\n> > Fixed in v4 with a separate patch because it's unrelated to the param\n> > number parsing. But it fits nicely into the broader issue on the upper\n> > limit for param numbers. Note that $268435455 is still the largest\n> > possible param number ((2^30-1)/4) and that we just return a more\n> > user-friendly error message for params beyond that limit.\n> >\n> \n> hi, one minor issue:\n> \n> /* Check parameter number is in range */\n> if (paramno <= 0 || paramno > MaxAllocSize / sizeof(Oid))\n> ereport(ERROR,\n> (errcode(ERRCODE_UNDEFINED_PARAMETER),\n> errmsg(\"there is no parameter $%d\", paramno),\n> parser_errposition(pstate, pref->location)));\n> \n> if paramno <= 0 then \"there is no parameter $%d\" makes sense to me.\n> \n> but, if paramno > 0 why not just say, we can only allow MaxAllocSize /\n> sizeof(Oid) number of parameters.\n\nYes, it makes sense to show the upper bound. How about a hint such as\n\"Valid parameters range from $%d to $%d.\"?\n\n-- \nErik\n\n\n", "msg_date": "Mon, 20 May 2024 04:55:38 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-05-20 03:26 +0200, jian he wrote:\n>> /* Check parameter number is in range */\n>> if (paramno <= 0 || paramno > MaxAllocSize / sizeof(Oid))\n>> ereport(ERROR, ...\n\n> Yes, it makes sense to show the upper bound. How about a hint such as\n> \"Valid parameters range from $%d to $%d.\"?\n\nI kind of feel like this upper bound is ridiculous. In what scenario\nis parameter 250000000 not a mistake, if not indeed somebody trying\nto break the system?\n\nThe \"Bind\" protocol message only allows an int16 parameter count,\nso rejecting parameter numbers above 32K would make sense to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 May 2024 23:02:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 2024-05-20 05:02 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > On 2024-05-20 03:26 +0200, jian he wrote:\n> >> /* Check parameter number is in range */\n> >> if (paramno <= 0 || paramno > MaxAllocSize / sizeof(Oid))\n> >> ereport(ERROR, ...\n> \n> > Yes, it makes sense to show the upper bound. How about a hint such as\n> > \"Valid parameters range from $%d to $%d.\"?\n> \n> I kind of feel like this upper bound is ridiculous. In what scenario\n> is parameter 250000000 not a mistake, if not indeed somebody trying\n> to break the system?\n> \n> The \"Bind\" protocol message only allows an int16 parameter count,\n> so rejecting parameter numbers above 32K would make sense to me.\n\nAgree. I was already wondering upthread why someone would use that many\nparameters.\n\nOut of curiosity, I checked if there might be an even lower limit. 
And\nindeed, max_stack_depth puts a limit due to some recursive evaluation:\n\n ERROR: stack depth limit exceeded\n HINT: Increase the configuration parameter \"max_stack_depth\" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.\n\nAttached is the stacktrace for EXECUTE on HEAD (I snipped most of the\nrecursive frames).\n\nRunning \\bind, PREPARE, and EXECUTE with following number of parameters\nworks as expected, although the number varies between releases which is\nnot ideal IMO. The commands hit the stack depth limit for #Params+1.\n\nVersion Command #Params\n----------------- ------- -------\nHEAD (18cbed13d5) \\bind 4365\nHEAD (18cbed13d5) PREPARE 8182\nHEAD (18cbed13d5) EXECUTE 4363\n16.2 \\bind 3968\n16.2 PREPARE 6889\n16.2 EXECUTE 3966\n\nThose are already pretty large numbers in my view (compared to the 100\nparameters that we accept at most for functions). And I guess nobody\ncomplained about those limits yet, or they just increased\nmax_stack_depth.\n\nThe Python script to generate the test scripts:\n\n import sys\n n_params = 1 << 16\n if len(sys.argv) > 1:\n n_params = min(n_params, int(sys.argv[1]))\n params = '+'.join(f'${i+1}::int' for i in range(n_params))\n bind_vals = ' '.join('1' for _ in range(n_params))\n exec_vals = ','.join('1' for _ in range(n_params))\n print(fr\"SELECT {params} \\bind {bind_vals} \\g\")\n print(f\"PREPARE p AS SELECT {params};\")\n print(f\"EXECUTE p ({exec_vals});\")\n\n-- \nErik", "msg_date": "Mon, 20 May 2024 15:59:30 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 19.05.24 16:43, Erik Wienhold wrote:\n> On 2024-05-19 07:00 +0200, Alexander Lakhin wrote:\n>> I encountered anomalies that you address with this patch too.\n>> And I can confirm that it fixes most cases, but there is another one:\n>> SELECT $300000000 \\bind 'foo' \\g\n>> ERROR:  invalid memory alloc request size 1200000000\n>>\n>> Maybe you would find this worth fixing as well.\n> \n> Yes, that error message is not great. In variable_paramref_hook we\n> check paramno > INT_MAX/sizeof(Oid) when in fact MaxAllocSize/sizeof(Oid)\n> is the more appropriate limit to avoid that unspecific alloc size error.\n> \n> Fixed in v4 with a separate patch because it's unrelated to the param\n> number parsing. But it fits nicely into the broader issue on the upper\n> limit for param numbers. Note that $268435455 is still the largest\n> possible param number ((2^30-1)/4) and that we just return a more\n> user-friendly error message for params beyond that limit.\n\nI have committed your two v4 patches.\n\nI made a small adjustment in 0001: I changed the ecpg part to also store \nthe result from strtoint() into a local variable before checking for \nerror, like you had done in the scan.l part. I think this is a bit \nbetter style. In 0002 you had a typo in the commit message: MAX_INT \ninstead of INT_MAX.\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 10:14:23 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" 
}, { "msg_contents": "On 2024-07-02 10:14 +0200, Peter Eisentraut wrote:\n> On 19.05.24 16:43, Erik Wienhold wrote:\n> > On 2024-05-19 07:00 +0200, Alexander Lakhin wrote:\n> > > I encountered anomalies that you address with this patch too.\n> > > And I can confirm that it fixes most cases, but there is another one:\n> > > SELECT $300000000 \\bind 'foo' \\g\n> > > ERROR:  invalid memory alloc request size 1200000000\n> > > \n> > > Maybe you would find this worth fixing as well.\n> > \n> > Yes, that error message is not great. In variable_paramref_hook we\n> > check paramno > INT_MAX/sizeof(Oid) when in fact MaxAllocSize/sizeof(Oid)\n> > is the more appropriate limit to avoid that unspecific alloc size error.\n> > \n> > Fixed in v4 with a separate patch because it's unrelated to the param\n> > number parsing. But it fits nicely into the broader issue on the upper\n> > limit for param numbers. Note that $268435455 is still the largest\n> > possible param number ((2^30-1)/4) and that we just return a more\n> > user-friendly error message for params beyond that limit.\n> \n> I have committed your two v4 patches.\n> \n> I made a small adjustment in 0001: I changed the ecpg part to also store the\n> result from strtoint() into a local variable before checking for error, like\n> you had done in the scan.l part. I think this is a bit better style. In\n> 0002 you had a typo in the commit message: MAX_INT instead of INT_MAX.\n\nThanks Peter!\n\n-- \nErik\n\n\n", "msg_date": "Tue, 2 Jul 2024 10:45:24 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 02.07.24 10:14, Peter Eisentraut wrote:\n> On 19.05.24 16:43, Erik Wienhold wrote:\n>> On 2024-05-19 07:00 +0200, Alexander Lakhin wrote:\n>>> I encountered anomalies that you address with this patch too.\n>>> And I can confirm that it fixes most cases, but there is another one:\n>>> SELECT $300000000 \\bind 'foo' \\g\n>>> ERROR:  invalid memory alloc request size 1200000000\n>>>\n>>> Maybe you would find this worth fixing as well.\n>>\n>> Yes, that error message is not great.  In variable_paramref_hook we\n>> check paramno > INT_MAX/sizeof(Oid) when in fact MaxAllocSize/sizeof(Oid)\n>> is the more appropriate limit to avoid that unspecific alloc size error.\n>>\n>> Fixed in v4 with a separate patch because it's unrelated to the param\n>> number parsing.  But it fits nicely into the broader issue on the upper\n>> limit for param numbers.  Note that $268435455 is still the largest\n>> possible param number ((2^30-1)/4) and that we just return a more\n>> user-friendly error message for params beyond that limit.\n> \n> I have committed your two v4 patches.\n> \n> I made a small adjustment in 0001: I changed the ecpg part to also store \n> the result from strtoint() into a local variable before checking for \n> error, like you had done in the scan.l part.  I think this is a bit \n> better style.  In 0002 you had a typo in the commit message: MAX_INT \n> instead of INT_MAX.\n\nI had to revert the test case from the 0002 patch. It ended up running \nsome build farm machines out of memory.\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 10:45:44 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" 
}, { "msg_contents": "On 2024-07-02 10:45 +0200, Peter Eisentraut wrote:\n> On 02.07.24 10:14, Peter Eisentraut wrote:\n> > I have committed your two v4 patches.\n> \n> I had to revert the test case from the 0002 patch. It ended up running some\n> build farm machines out of memory.\n\ndhole, morepork, and schnauzer. For example, schnauzer[1]:\n\n> diff -U3 /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/prepare.out /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/prepare.out\n> --- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/prepare.out\tTue Jul 2 10:31:34 2024\n> +++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/prepare.out\tTue Jul 2 10:33:15 2024\n> @@ -186,9 +186,8 @@\n> \n> -- max parameter number and one above\n> PREPARE q9 AS SELECT $268435455, $268435456;\n> -ERROR: there is no parameter $268435456\n> -LINE 1: PREPARE q9 AS SELECT $268435455, $268435456;\n> - ^\n> +ERROR: out of memory\n> +DETAIL: Failed on request of size 1073741820 in memory context \"PortalContext\".\n> -- test DEALLOCATE ALL;\n> DEALLOCATE ALL;\n> SELECT name, statement, parameter_types FROM pg_prepared_statements\n\nThat means paramno is less than MaxAllocSize/sizeof(Oid) if it tries to\nallocate memory. MaxAllocSize is always 0x3fffffff. Is sizeof(Oid)\nless than 4 on those machines?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=schnauzer&dt=2024-07-02%2008%3A31%3A34\n\n-- \nErik\n\n\n", "msg_date": "Tue, 2 Jul 2024 11:37:45 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I had to revert the test case from the 0002 patch. It ended up running \n> some build farm machines out of memory.\n\nThat ties into what I said upthread: why are we involving MaxAllocSize\nin this at all? The maximum parameter number you can actually use in\nextended queries is 65535 (because 16-bit fields), and I can't see a\ngood reason to permit more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 10:14:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-07-02 10:45 +0200, Peter Eisentraut wrote:\n>> I had to revert the test case from the 0002 patch. It ended up running some\n>> build farm machines out of memory.\n\n>> +ERROR: out of memory\n>> +DETAIL: Failed on request of size 1073741820 in memory context \"PortalContext\".\n\n> That means paramno is less than MaxAllocSize/sizeof(Oid) if it tries to\n> allocate memory. MaxAllocSize is always 0x3fffffff. Is sizeof(Oid)\n> less than 4 on those machines?\n\nNo. Y'know, it's not really *that* astonishing for a machine to not\nhave a spare 1GB of RAM available on-demand. This test would\ncertainly have failed on our 32-bit animals, although it doesn't\nlook like any of them had gotten to it yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jul 2024 10:21:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 02.07.24 16:14, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> I had to revert the test case from the 0002 patch. 
It ended up running\n>> some build farm machines out of memory.\n> \n> That ties into what I said upthread: why are we involving MaxAllocSize\n> in this at all? The maximum parameter number you can actually use in\n> extended queries is 65535 (because 16-bit fields), and I can't see a\n> good reason to permit more.\n\nThere are arguably a few things that could be done in this area of code \nto improve it, like consistently using int16 and strtoint16 and so on \nfor parameter numbers. But that's a different project.\n\nThe change here was merely to an existing check that apparently wanted \nto avoid some kind of excessive memory allocation but did so \nineffectively by checking against INT_MAX, which had nothing to do with \nhow the memory allocation checking actually works. The fixed code now \navoids the error for \"invalid memory alloc request size\", but of course \nit can still fail if the OS does not have enough memory.\n\n\n\n", "msg_date": "Tue, 2 Jul 2024 23:51:37 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Underscore in positional parameters?" }, { "msg_contents": "On 2024-07-02 16:21 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > On 2024-07-02 10:45 +0200, Peter Eisentraut wrote:\n> >> I had to revert the test case from the 0002 patch. It ended up running some\n> >> build farm machines out of memory.\n> \n> >> +ERROR: out of memory\n> >> +DETAIL: Failed on request of size 1073741820 in memory context \"PortalContext\".\n> \n> > That means paramno is less than MaxAllocSize/sizeof(Oid) if it tries to\n> > allocate memory. MaxAllocSize is always 0x3fffffff. Is sizeof(Oid)\n> > less than 4 on those machines?\n> \n> No. Y'know, it's not really *that* astonishing for a machine to not\n> have a spare 1GB of RAM available on-demand. This test would\n> certainly have failed on our 32-bit animals, although it doesn't\n> look like any of them had gotten to it yet.\n\nAh, sorry. I somehow missed that it allocates memory for each param,\ninstead of first checking *all* params. m(\n\n-- \nErik\n\n\n", "msg_date": "Thu, 4 Jul 2024 14:34:35 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Underscore in positional parameters?" } ]
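To make the combined effect of the two committed fixes concrete, the scanner action for a positional parameter ends up doing roughly the following. This is a simplified sketch reconstructed from the discussion above, not a verbatim copy of scan.l: pg_strtoint32_safe and the ErrorSaveContext initializer are quoted in the thread, but the surrounding glue (SET_YYLLOC, yyerror, the PARAM token and yylval->ival) is filled in from my reading of the scanner and may not match the committed code exactly.

    /* patterns: underscores are no longer accepted in parameter numbers */
    param           \${decdigit}+
    param_junk      \${decdigit}+{ident_start}

    {param}         {
                        ErrorSaveContext escontext = {T_ErrorSaveContext};
                        int32       val;

                        SET_YYLLOC();
                        /* skip the '$'; parse with overflow checking instead of atol() */
                        val = pg_strtoint32_safe(yytext + 1, (Node *) &escontext);
                        if (escontext.error_occurred)
                            yyerror("parameter number too large");
                        yylval->ival = val;
                        return PARAM;
                    }

With the narrowed pattern, $1_2 now falls into param_junk and raises "trailing junk after parameter" instead of silently binding $1, and an out-of-range number such as $4294967297 is rejected up front rather than wrapping around to a different parameter.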
[ { "msg_contents": "hi.\n\nwhile reading this[1],\n<< More information about partial indexes can be found in [ston89b],\n[olson93], and [seshadri95].\nI googled around, still cannot find [olson93] related pdf or html link.\n\n\nin [2],\nI found out\n[ong90] “A Unified Framework for Version Modeling Using Production\nRules in a Database System”. L. Ong and J. Goh. ERL Technical\nMemorandum M90/33. University of California. Berkeley, California.\nApril, 1990.\nrelated link is\nhttps://www2.eecs.berkeley.edu/Pubs/TechRpts/1990/1466.html\n\n\nan idea:\nIn case these external links in\n(https://www.postgresql.org/docs/current/biblio.html) become dead\nlinks,\nwe can send these external pdf or html files in the mailing list for\narchive purposes.\n\n[1] https://www.postgresql.org/docs/current/indexes-partial.html\n[2] https://www.postgresql.org/docs/current/biblio.html\n\n\n", "msg_date": "Tue, 14 May 2024 13:40:59 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Bibliography section, some references cannot be found" }, { "msg_contents": "> On 14 May 2024, at 07:40, jian he <[email protected]> wrote:\n\n> I googled around, still cannot find [olson93] related pdf or html link.\n\nJudging by the bibliography reference it's a technical report from UCB, and\nsearching for the T7 identifier in the UCB library reveals that it's a M.Sc\nthesis only available in physical form:\n\nhttps://search.library.berkeley.edu/permalink/01UCS_BER/iqob43/alma991082339239706532\n\nThe link might not be useful to many, but it may save a few minutes of googling\nfrom other readers.\n\n> in [2],\n> I found out\n> [ong90] “A Unified Framework for Version Modeling Using Production\n> Rules in a Database System”. L. Ong and J. Goh. ERL Technical\n> Memorandum M90/33. University of California. Berkeley, California.\n> April, 1990.\n> related link is\n> https://www2.eecs.berkeley.edu/Pubs/TechRpts/1990/1466.html\n\nLinking to that seems like a good idea.\n\n> an idea:\n> In case these external links in\n> (https://www.postgresql.org/docs/current/biblio.html) become dead\n> links,\n> we can send these external pdf or html files in the mailing list for\n> archive purposes.\n\nIf a link stops working, and no replacement is available, it's probably better\nto link to the archive.org entry instead.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 14 May 2024 08:51:24 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bibliography section, some references cannot be found" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile resuming the work on refilenode stats (mentioned in [1] but did not share\nthe patch yet), I realized that my current POC patch is buggy enough to produce\nthings like:\n\n024-05-14 09:51:14.783 UTC [1788714] FATAL: can only drop stats once\n\nWhile the CONTEXT provides the list of dropped stats:\n\n2024-05-14 09:51:14.783 UTC [1788714] CONTEXT: WAL redo at 0/D75F478 for Transaction/ABORT: 2024-05-14 09:51:14.782223+00; dropped stats: 2/16384/27512/0 2/16384/27515/0 2/16384/27516/0\n\nIt's not clear which one generates the error (don't pay attention to the actual\nvalues, the issue comes from the new refilenode stats that I removed from the\noutput).\n\nAttached a tiny patch to report the stat that generates the error. The patch uses\nerrdetail_internal() as the extra details don't seem to be useful to average\nusers.\n\n[1]: https://www.postgresql.org/message-id/ZbIdgTjR2QcFJ2mE%40ip-10-97-1-34.eu-west-3.compute.internal\n\nLooking forward to your feedback,\n\nRegards\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 14 May 2024 10:07:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Log details for stats dropped more than once" }, { "msg_contents": "On Tue, May 14, 2024 at 10:07:14AM +0000, Bertrand Drouvot wrote:\n> While resuming the work on refilenode stats (mentioned in [1] but did not share\n> the patch yet), I realized that my current POC patch is buggy enough to produce\n> things like:\n> \n> 024-05-14 09:51:14.783 UTC [1788714] FATAL: can only drop stats once\n> \n> While the CONTEXT provides the list of dropped stats:\n> \n> 2024-05-14 09:51:14.783 UTC [1788714] CONTEXT: WAL redo at 0/D75F478 for Transaction/ABORT: 2024-05-14 09:51:14.782223+00; dropped stats: 2/16384/27512/0 2/16384/27515/0 2/16384/27516/0\n\nCan refcount be useful to know in this errcontext?\n\n> Attached a tiny patch to report the stat that generates the error. The patch uses\n> errdetail_internal() as the extra details don't seem to be useful to average\n> users.\n\nI think that's fine. Overall that looks like useful information for\ndebugging, so no objections from here.\n--\nMichael", "msg_date": "Wed, 15 May 2024 14:47:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log details for stats dropped more than once" }, { "msg_contents": "Hi,\n\nOn Wed, May 15, 2024 at 02:47:29PM +0900, Michael Paquier wrote:\n> On Tue, May 14, 2024 at 10:07:14AM +0000, Bertrand Drouvot wrote:\n> > While resuming the work on refilenode stats (mentioned in [1] but did not share\n> > the patch yet), I realized that my current POC patch is buggy enough to produce\n> > things like:\n> > \n> > 024-05-14 09:51:14.783 UTC [1788714] FATAL: can only drop stats once\n> > \n> > While the CONTEXT provides the list of dropped stats:\n> > \n> > 2024-05-14 09:51:14.783 UTC [1788714] CONTEXT: WAL redo at 0/D75F478 for Transaction/ABORT: 2024-05-14 09:51:14.782223+00; dropped stats: 2/16384/27512/0 2/16384/27515/0 2/16384/27516/0\n> \n> Can refcount be useful to know in this errcontext?\n\nThanks for looking at it!\n\nDo you mean as part of the added errdetail_internal()? If so, yeah I think it's\na good idea (done that way in v2 attached).\n\n> > Attached a tiny patch to report the stat that generates the error. 
The patch uses\n> > errdetail_internal() as the extra details don't seem to be useful to average\n> > users.\n> \n> I think that's fine. Overall that looks like useful information for\n> debugging, so no objections from here.\n\nThanks! BTW, I just realized that adding more details for this error has already\nbeen mentioned in [1] (and Andres did propose a slightly different version).\n\n[1]: https://www.postgresql.org/message-id/20240505160915.6boysum4f34siqct%40awork3.anarazel.de\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 15 May 2024 08:04:48 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Log details for stats dropped more than once" }, { "msg_contents": "On Wed, May 15, 2024 at 08:04:48AM +0000, Bertrand Drouvot wrote:\n> Thanks! BTW, I just realized that adding more details for this error has already\n> been mentioned in [1] (and Andres did propose a slightly different version).\n> \n> [1]: https://www.postgresql.org/message-id/20240505160915.6boysum4f34siqct%40awork3.anarazel.de\n\nAh, it is not surprising. I'd agree with doing what is posted there\nfor simplicity's sake. Rather than putting that in a errdetail, let's\nkeep all the information in an errmsg() as that makes deparsing\neasier, and let's keep the elog().\n--\nMichael", "msg_date": "Thu, 16 May 2024 08:17:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Log details for stats dropped more than once" } ]
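To illustrate what that would look like, here is a sketch of the enriched report with everything packed into the elog() message itself, as suggested above, rather than into an errdetail. The structure and field names (shent, key.kind, key.dboid, key.objoid, refcount, pgstat_get_kind_info) reflect my understanding of the shared stats entry layout, and the wording is my own paraphrase; it is not a quote of any posted or committed patch.

    /* in the shared-memory drop path, replacing the bare elog() */
    if (shent->dropped)
        elog(ERROR,
             "trying to drop stats entry already dropped: kind=%s dboid=%u objoid=%u refcount=%u",
             pgstat_get_kind_info(shent->key.kind)->name,
             shent->key.dboid,
             shent->key.objoid,
             pg_atomic_read_u32(&shent->refcount));
    shent->dropped = true;

Keeping this an elog() fits the fact that it is an internal invariant rather than a user-facing condition, and having the key and refcount directly in the message makes the offending entry identifiable without having to line it up against the dropped-stats list in the redo CONTEXT.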
[ { "msg_contents": "Hello, hackers!\r\n\r\nRecently I've been building postgres with different cflags and cppflags.\r\nAnd suddenly on REL_15_STABLE, REL_16_STABLE and master\r\nI faced a failure of a src/test/subscription/t/029_on_error.pl test when\r\n      CPPFLAGS=\"-DWAL_DEBUG\"\r\nand\r\n      printf \"wal_debug = on\\n\" >> \"${TEMP_CONFIG}\"\r\n(or when both publisher and subscriber or only subscriber are run with wal_debug=on)\r\n\r\nSo I propose a little fix to the test.\r\n\r\n\r\nKind regards,\r\nIan Ilyasov.\r\n\r\nJunior Software Developer at Postgres Professional", "msg_date": "Tue, 14 May 2024 10:22:29 +0000", "msg_from": "Ilyasov Ian <[email protected]>", "msg_from_op": true, "msg_subject": "Fix src/test/subscription/t/029_on_error.pl test when wal_debug is\n enabled" }, { "msg_contents": "On Tue, May 14, 2024 at 10:22:29AM +0000, Ilyasov Ian wrote:\n> Hello, hackers!\n> \n> Recently I've been building postgres with different cflags and cppflags.\n> And suddenly on REL_15_STABLE, REL_16_STABLE and master\n> I faced a failure of a src/test/subscription/t/029_on_error.pl test when\n>       CPPFLAGS=\"-DWAL_DEBUG\"\n> and\n>       printf \"wal_debug = on\\n\" >> \"${TEMP_CONFIG}\"\n> (or when both publisher and subscriber or only subscriber are run with wal_debug=on)\n> \n> So I propose a little fix to the test.\n\nRather than assuming that the last line is the one to check, wouldn't\nit be better to grab the data from the CONTEXT line located just after\nthe ERROR reporting the primary key violation?\n\nA multi-line regexp, grabbing the LSN with more matching context based\non the ERROR and the DETAIL strings generating the CONTEXT we want\nseems like a more stable alternative to me than grabbing the last line\nof the logs.\n--\nMichael", "msg_date": "Wed, 15 May 2024 12:55:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "On Wed, May 15, 2024 at 9:26 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, May 14, 2024 at 10:22:29AM +0000, Ilyasov Ian wrote:\n> > Hello, hackers!\n> >\n> > Recently I've been building postgres with different cflags and cppflags.\n> > And suddenly on REL_15_STABLE, REL_16_STABLE and master\n> > I faced a failure of a src/test/subscription/t/029_on_error.pl test when\n> > CPPFLAGS=\"-DWAL_DEBUG\"\n> > and\n> > printf \"wal_debug = on\\n\" >> \"${TEMP_CONFIG}\"\n> > (or when both publisher and subscriber or only subscriber are run with wal_debug=on)\n> >\n> > So I propose a little fix to the test.\n>\n> Rather than assuming that the last line is the one to check, wouldn't\n> it be better to grab the data from the CONTEXT line located just after\n> the ERROR reporting the primary key violation?\n>\n\nI guess it could be more work if we want to enhance the test for\nERRORs other than the primary key violation. One simple fix is to\nupdate the log_offset to the location of the LOG after successful\nreplication of un-conflicted data. 
For example, the Log location after\nwe execute the below line in the test:\n\n# Check replicated data\n my $res =\n $node_subscriber->safe_psql('postgres', \"SELECT\ncount(*) FROM tbl\");\n is($res, $expected, $msg);\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 May 2024 17:58:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "Dear Amit, Ian,\r\n\r\n> I guess it could be more work if we want to enhance the test for\r\n> ERRORs other than the primary key violation. One simple fix is to\r\n> update the log_offset to the location of the LOG after successful\r\n> replication of un-conflicted data. For example, the Log location after\r\n> we execute the below line in the test:\r\n> \r\n> # Check replicated data\r\n> my $res =\r\n> $node_subscriber->safe_psql('postgres', \"SELECT\r\n> count(*) FROM tbl\");\r\n> is($res, $expected, $msg);\r\n\r\nI made a patch for confirmation purpose. This worked well on my environment.\r\nIan, how about you?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/", "msg_date": "Wed, 15 May 2024 13:35:48 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "Dear Hayato,\n\n> I made a patch for confirmation purpose. This worked well on my environment.\n> Ian, how about you?\n\nI checked this patch on my environment. It also works well.\nI like this change, but as I see it makes a different approach from Michael's advice.\nHonesly, I do not know what would be better for this test.\n\n\nKind regards,\nIan Ilyasov.\n\n\nJunior Software Developer at Postgres Professional\n\n\n\n\n\n\n\n\n\n\nDear Hayato,\n\n\n\n\n> I made a patch for confirmation purpose. This worked well on my environment.\n\n> Ian, how about you?\n\n\n\n\n\nI checked this patch on my environment. It also works well.\n\nI like this change, but as I see it makes a different approach from Michael's advice.\n\nHonesly, I do not know what would be better for this test.\n\n\n\nKind regards,\nIan Ilyasov.\n\n\nJunior Software Developer at Postgres Professional", "msg_date": "Wed, 15 May 2024 13:45:05 +0000", "msg_from": "Ilyasov Ian <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "On Wed, May 15, 2024 at 05:58:18PM +0530, Amit Kapila wrote:\n> I guess it could be more work if we want to enhance the test for\n> ERRORs other than the primary key violation.\n\nAnd? You could pass the ERROR string expected as argument of the\nfunction if more flexibility is wanted at some point, no? It happens\nthat you don't require that now, which is fine for the three scenarios\ncalling test_skip_lsn.\n\n> One simple fix is to\n> update the log_offset to the location of the LOG after successful\n> replication of un-conflicted data. 
For example, the Log location after\n> we execute the below line in the test:\n> \n> # Check replicated data\n> my $res =\n> $node_subscriber->safe_psql('postgres', \"SELECT\n> count(*) FROM tbl\");\n> is($res, $expected, $msg);\n\nThat still looks like a shortcut to me, weak to race conditions on\nslow machines where more log entries could be generated in-between.\nSo it seems to me that you'd still want to make sure that the CONTEXT\nfrom which the LSN is retrieved maps to the sole expected error. This\nis not going to be stable unless there are stronger checks to avoid\nlog entries that can parasite the output, and a stronger matching\nensures that.\n--\nMichael", "msg_date": "Thu, 16 May 2024 07:13:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "On Thu, May 16, 2024 at 3:43 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, May 15, 2024 at 05:58:18PM +0530, Amit Kapila wrote:\n> > I guess it could be more work if we want to enhance the test for\n> > ERRORs other than the primary key violation.\n>\n> And? You could pass the ERROR string expected as argument of the\n> function if more flexibility is wanted at some point, no?\n>\n\nNormally, we consider error_codes for comparison as they are less\nprone to change but here it may be okay to use error_string as we can\nchange it, if required. But let's discuss a bit more on the other\nsolution being discussed below.\n\n> It happens\n> that you don't require that now, which is fine for the three scenarios\n> calling test_skip_lsn.\n>\n> > One simple fix is to\n> > update the log_offset to the location of the LOG after successful\n> > replication of un-conflicted data. For example, the Log location after\n> > we execute the below line in the test:\n> >\n> > # Check replicated data\n> > my $res =\n> > $node_subscriber->safe_psql('postgres', \"SELECT\n> > count(*) FROM tbl\");\n> > is($res, $expected, $msg);\n>\n> That still looks like a shortcut to me, weak to race conditions on\n> slow machines where more log entries could be generated in-between.\n> So it seems to me that you'd still want to make sure that the CONTEXT\n> from which the LSN is retrieved maps to the sole expected error. This\n> is not going to be stable unless there are stronger checks to avoid\n> log entries that can parasite the output, and a stronger matching\n> ensures that.\n>\n\nThis can only be a problem if the apply worker generates more LOG\nentries with the required context but it won't do that unless there is\nan operation on the publisher to replicate. If we see any such danger\nthen I agree this can break on some slow machines but as of now, I\ndon't see any such race condition.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 May 2024 09:00:47 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "On Thu, May 16, 2024 at 09:00:47AM +0530, Amit Kapila wrote:\n> This can only be a problem if the apply worker generates more LOG\n> entries with the required context but it won't do that unless there is\n> an operation on the publisher to replicate. 
If we see any such danger\n> then I agree this can break on some slow machines but as of now, I\n> don't see any such race condition.\n\nPerhaps, still I'm not completely sure if this assumption is going to\nalways stand for all the possible configurations we support. So,\nrather than coming back to this test again, I would choose to make the\ntest as stable as possible from the start. That's what mapping with\nthe error message would offer when grabbing the LSN from the CONTEXT\npart of it, because there's a one-one mapping between the expected\nERROR and its CONTEXT from which the information used by the test is\nretrieved.\n--\nMichael", "msg_date": "Fri, 17 May 2024 08:55:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "On Fri, May 17, 2024 at 5:25 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, May 16, 2024 at 09:00:47AM +0530, Amit Kapila wrote:\n> > This can only be a problem if the apply worker generates more LOG\n> > entries with the required context but it won't do that unless there is\n> > an operation on the publisher to replicate. If we see any such danger\n> > then I agree this can break on some slow machines but as of now, I\n> > don't see any such race condition.\n>\n> Perhaps, still I'm not completely sure if this assumption is going to\n> always stand for all the possible configurations we support.\n>\n\nAs per my understanding, this will primarily rely on the apply worker\ndesign, not the other configurations, whether we see additional LOG.\n\n> So,\n> rather than coming back to this test again, I would choose to make the\n> test as stable as possible from the start. That's what mapping with\n> the error message would offer when grabbing the LSN from the CONTEXT\n> part of it, because there's a one-one mapping between the expected\n> ERROR and its CONTEXT from which the information used by the test is\n> retrieved.\n>\n\nI was slightly hesitant to do an ERROR string-based check because the\nerror string can change and it doesn't seem to bring additional\nstability for this particular test. But if you and others are still\nnot convinced with the simple fix suggested by me then feel free to\nproceed with an error-string based check.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 20 May 2024 16:39:13 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "Dear Michael, Amit, Hayato\n\nI corrected my patch according to what I think\nMichael wanted. I attached the new patch to the letter.\n\n--\nKind regards,\nIan Ilyasov.\n\nJunior Software Developer at Postgres Professional", "msg_date": "Wed, 22 May 2024 14:24:37 +0000", "msg_from": "Ilyasov Ian <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "On Wed, May 22, 2024 at 02:24:37PM +0000, Ilyasov Ian wrote:\n> I corrected my patch according to what I think\n> Michael wanted. I attached the new patch to the letter.\n\nThanks for compiling this patch. 
Yes, that's the idea.\n\n-\t qr/processing remote data for replication origin \\\"pg_\\d+\\\" during message type \"INSERT\" for replication target relation \"public.tbl\" in transaction \\d+, finished at ([[:xdigit:]]+\\/[[:xdigit:]]+)/\n+\t qr/ERROR: duplicate key.*\\n.*DETAIL:.*\\n.*CONTEXT:.* finished at ([[:xdigit:]]+\\/[[:xdigit:]]+)/m\n\nThere are three CONTEXT strings that could map to this context. It\nseems to me that we should keep the 'for replication target relation\n\"public.tbl\" in transaction \\d+,', before the \"finished at\" so as it\nis possible to make a difference with the context that has a column\nname and the context where there is no target relation. That makes\nthe regexp longer, but it is not that bad.\n--\nMichael", "msg_date": "Thu, 23 May 2024 14:26:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "> It seems to me that we should keep the 'for replication target relation\n\"public.tbl\" in transaction \\d+,', before the \"finished at\" so as it\nis possible to make a difference with the context that has a column\nname and the context where there is no target relation.\n\nI agree. Attached the updated patch.\n\n--\nKind regards,\nIan Ilyasov.\n\nJunior Software Developer at Postgres Professional", "msg_date": "Thu, 23 May 2024 08:12:07 +0000", "msg_from": "Ilyasov Ian <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" }, { "msg_contents": "On Thu, May 23, 2024 at 08:12:07AM +0000, Ilyasov Ian wrote:\n> > It seems to me that we should keep the 'for replication target relation\n> \"public.tbl\" in transaction \\d+,', before the \"finished at\" so as it\n> is possible to make a difference with the context that has a column\n> name and the context where there is no target relation.\n> \n> I agree. Attached the updated patch.\n\nOne issue that you have here is that the regexp detection would still\nfail when setting log_error_verbosity = verbose because of the extra\nerror code added between the ERROR and the string. This caused the\nLSN to not be fetched properly.\n\nAt the end, I've come up with a regexp that checks for a match of the\nerror string after the ERROR to not confuse the last part getting the\nxdigits, and applied that down to 15. Perhaps I would have added the\nfirst \"ERROR\" part in the check, but could not come down to it for the\nreadability of the thing.\n--\nMichael", "msg_date": "Fri, 24 May 2024 11:24:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix src/test/subscription/t/029_on_error.pl test when wal_debug\n is enabled" } ]
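The LSN captured from that CONTEXT line is the same value the test (and a DBA resolving the conflict by hand) feeds to ALTER SUBSCRIPTION ... SKIP. A minimal sketch of the workflow that test_skip_lsn automates; the subscription name and LSN below are illustrative, not taken from the test:

    -- subscriber log reports the failed apply, e.g.:
    --   ERROR:  duplicate key value violates unique constraint "tbl_pkey"
    --   CONTEXT:  processing remote data for replication origin "pg_16389" during
    --             message type "INSERT" for replication target relation "public.tbl"
    --             in transaction 731, finished at 0/14C0378
    -- skip exactly that remote transaction so the apply worker can resume:
    ALTER SUBSCRIPTION sub SKIP (lsn = '0/14C0378');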
[ { "msg_contents": "plpgsql fails to parse 1_000..1_000 as 1000..1000 in FOR loops:\n\n DO $$\n DECLARE\n i int;\n BEGIN\n FOR i IN 1_000..1_000 LOOP\n END LOOP;\n END $$;\n\n ERROR: syntax error at or near \"1_000.\"\n LINE 5: FOR i IN 1_000..1_000 LOOP\n\nThe scan.l defines rule \"numericfail\" to handle this ambiguity without\nrequiring extra whitespace or parenthesis around the integer literals.\nBut the rule only accepts digits 0-9. Again, an oversight in\nfaff8f8e47. Fixed in the attached patch.\n\n-- \nErik", "msg_date": "Wed, 15 May 2024 03:14:36 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "plpgsql: fix parsing of integer range with underscores" }, { "msg_contents": "On Wed, 15 May 2024 at 02:14, Erik Wienhold <[email protected]> wrote:\n>\n> plpgsql fails to parse 1_000..1_000 as 1000..1000 in FOR loops:\n>\n> Fixed in the attached patch.\n>\n\nNice catch! The patch looks good to me on a quick read-through.\n\nI'll take a closer look next week, after the beta release, since it's\na v16+ bug.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 17 May 2024 09:22:43 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql: fix parsing of integer range with underscores" }, { "msg_contents": "On Fri, 17 May 2024 at 09:22, Dean Rasheed <[email protected]> wrote:\n>\n> On Wed, 15 May 2024 at 02:14, Erik Wienhold <[email protected]> wrote:\n> >\n> > plpgsql fails to parse 1_000..1_000 as 1000..1000 in FOR loops:\n> >\n> > Fixed in the attached patch.\n> >\n>\n> Nice catch! The patch looks good to me on a quick read-through.\n>\n> I'll take a closer look next week, after the beta release, since it's\n> a v16+ bug.\n>\n\n(finally got back to this)\n\nCommitted and back-patched to v16. Thanks for the report and patch.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 4 Jun 2024 12:21:21 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql: fix parsing of integer range with underscores" }, { "msg_contents": "On 2024-06-04 13:21 +0200, Dean Rasheed wrote:\n> Committed and back-patched to v16. Thanks for the report and patch.\n\nThanks for the review and push Dean.\n\n-- \nErik\n\n\n", "msg_date": "Tue, 4 Jun 2024 20:08:05 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql: fix parsing of integer range with underscores" } ]
[ { "msg_contents": "This page has 3 items that are between parentheses, there is an explanation\nwhy they are used this way ?\n\nhttps://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-TABLE\n\n\nEach syntax element is described below in more detail.\n*context_item*, *path_expression* [ AS *json_path_name* ] [ PASSING {\n*value* AS *varname* } [, ...]]\n\nThe input data to query (*context_item*), the JSON path expression defining\nthe query (*path_expression*) with an optional name (*json_path_name*), and\nan optional PASSING clause, which can provide data\n\nWhy (*context_item*), (*path_expression*) and (*json_path_name*) are inside\na parentheses ? This is not usual when explaining any other feature.\n\nregards\nMarcos\n\nThis page has 3 items that are between parentheses, there is an explanation why they are used this way ?https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-TABLE Each syntax element is described below in more detail.context_item, path_expression [ AS json_path_name ] [ PASSING { value AS varname } [, ...]]The input data to query (context_item), the JSON path expression defining the query (path_expression) with an optional name (json_path_name), and an optional PASSING clause, which can provide data Why (context_item), (path_expression) and (json_path_name) are inside a parentheses ? This is not usual when explaining any other feature. regardsMarcos", "msg_date": "Wed, 15 May 2024 09:04:36 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": true, "msg_subject": "</replaceable> in parentesis is not usual on DOCs" }, { "msg_contents": "> On 15 May 2024, at 14:04, Marcos Pegoraro <[email protected]> wrote:\n\n> Why (context_item), (path_expression) and (json_path_name) are inside a parentheses ? This is not usual when explaining any other feature. \n\nAgreed, that's inconsisent with how for example json_table_column is documented\nin the next list item under COLUMNS. Unless objected to I will remove these\nparenthesis.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 15 May 2024 14:34:21 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: </replaceable> in parentesis is not usual on DOCs" }, { "msg_contents": "On Wed, May 15, 2024 at 8:34 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 15 May 2024, at 14:04, Marcos Pegoraro <[email protected]> wrote:\n>\n> > Why (context_item), (path_expression) and (json_path_name) are inside a parentheses ? This is not usual when explaining any other feature.\n>\n> Agreed, that's inconsisent with how for example json_table_column is documented\n> in the next list item under COLUMNS. Unless objected to I will remove these\n> parenthesis.\n>\n\n>> The input data to query (context_item), the JSON path expression defining the query (path_expression) with an optional name (json_path_name)\n\ni think the parentheses is for explaining that\ncontext_item refers \"The input data to query\";\npath_expression refers \"the JSON path expression defining the query\";\njson_path_name refers to \"an optional name\";\n\n\n\nremoving parentheses means we need to rephrase this sentence?\nSo I come up with the following rephrase:\n\nThe context_item specifies the input data to query, the\npath_expression is a JSON path expression defining the query,\njson_path_name is an optional name for the path_expression. 
The\noptional PASSING clause can provide data values to the\npath_expression.\n\n\n", "msg_date": "Thu, 16 May 2024 12:14:56 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: </replaceable> in parentesis is not usual on DOCs" }, { "msg_contents": "On Thu, May 16, 2024 at 12:14 PM jian he <[email protected]> wrote:\n>\n> On Wed, May 15, 2024 at 8:34 PM Daniel Gustafsson <[email protected]> wrote:\n> >\n> > > On 15 May 2024, at 14:04, Marcos Pegoraro <[email protected]> wrote:\n> >\n> > > Why (context_item), (path_expression) and (json_path_name) are inside a parentheses ? This is not usual when explaining any other feature.\n> >\n> > Agreed, that's inconsisent with how for example json_table_column is documented\n> > in the next list item under COLUMNS. Unless objected to I will remove these\n> > parenthesis.\n> >\n>\n> >> The input data to query (context_item), the JSON path expression defining the query (path_expression) with an optional name (json_path_name)\n>\n> i think the parentheses is for explaining that\n> context_item refers \"The input data to query\";\n> path_expression refers \"the JSON path expression defining the query\";\n> json_path_name refers to \"an optional name\";\n>\n>\n\n\n> removing parentheses means we need to rephrase this sentence?\n> So I come up with the following rephrase:\n>\n> The context_item specifies the input data to query, the\n> path_expression is a JSON path expression defining the query,\n> json_path_name is an optional name for the path_expression. The\n> optional PASSING clause can provide data values to the\n> path_expression.\n\nBased on this, write a simple patch.", "msg_date": "Mon, 20 May 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: </replaceable> in parentesis is not usual on DOCs" }, { "msg_contents": "On 20.05.24 02:00, jian he wrote:\n>> removing parentheses means we need to rephrase this sentence?\n>> So I come up with the following rephrase:\n>>\n>> The context_item specifies the input data to query, the\n>> path_expression is a JSON path expression defining the query,\n>> json_path_name is an optional name for the path_expression. The\n>> optional PASSING clause can provide data values to the\n>> path_expression.\n> \n> Based on this, write a simple patch.\n\nYour patch kind of messes up the indentation of the text you are \nchanging. Please check that.\n\n\n", "msg_date": "Wed, 22 May 2024 13:14:42 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: </replaceable> in parentesis is not usual on DOCs" }, { "msg_contents": "On Wed, May 22, 2024 at 7:14 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 20.05.24 02:00, jian he wrote:\n> >> removing parentheses means we need to rephrase this sentence?\n> >> So I come up with the following rephrase:\n> >>\n> >> The context_item specifies the input data to query, the\n> >> path_expression is a JSON path expression defining the query,\n> >> json_path_name is an optional name for the path_expression. The\n> >> optional PASSING clause can provide data values to the\n> >> path_expression.\n> >\n> > Based on this, write a simple patch.\n>\n> Your patch kind of messes up the indentation of the text you are\n> changing. 
Please check that.\n\nplease check attached.", "msg_date": "Wed, 22 May 2024 19:22:26 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: </replaceable> in parentesis is not usual on DOCs" }, { "msg_contents": "On Wed, May 22, 2024 at 8:22 PM jian he <[email protected]> wrote:\n> On Wed, May 22, 2024 at 7:14 PM Peter Eisentraut <[email protected]> wrote:\n> >\n> > On 20.05.24 02:00, jian he wrote:\n> > >> removing parentheses means we need to rephrase this sentence?\n> > >> So I come up with the following rephrase:\n> > >>\n> > >> The context_item specifies the input data to query, the\n> > >> path_expression is a JSON path expression defining the query,\n> > >> json_path_name is an optional name for the path_expression. The\n> > >> optional PASSING clause can provide data values to the\n> > >> path_expression.\n> > >\n> > > Based on this, write a simple patch.\n> >\n> > Your patch kind of messes up the indentation of the text you are\n> > changing. Please check that.\n>\n> please check attached.\n\nSorry about not noticing this earlier.\n\nThanks for the patch and the reviews. I've pushed it now after minor changes.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Tue, 16 Jul 2024 14:14:01 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: </replaceable> in parentesis is not usual on DOCs" } ]
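For readers following the wording discussion, a small JSON_TABLE call may help map each term in the rephrased sentence onto the piece of syntax it describes (the data and names below are invented for illustration):

    SELECT jt.*
    FROM JSON_TABLE(
           '[{"a": 10}, {"a": 25}]',      -- context_item: the input data to query
           '$[*] ? (@.a > $min)'          -- path_expression: the JSON path query
             AS rows_over_min             -- json_path_name: optional name for it
             PASSING 15 AS min            -- PASSING: values referenced as $min above
           COLUMNS (a int PATH '$.a')
         ) AS jt;

On PostgreSQL 17 this returns the single row where a = 25.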
[ { "msg_contents": "While looking at pg_dump performance today I noticed that pg_dump fails to\nclear query results in binary_upgrade_set_pg_class_oids during binary upgrade\nmode. 9a974cbcba00 moved the query to the outer block, but left the PQclear\nand query buffer destruction in the is_index conditional, making it not always\nbe executed. 353708e1fb2d fixed the leak of the query buffer but left the\nPGresult leak. The attached fixes the PGresult leak which when upgrading large\nschemas can be non-trivial.\n\nThis needs to be backpatched down to v15.\n\n--\nDaniel Gustafsson", "msg_date": "Wed, 15 May 2024 20:40:43 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Fix PGresult leak in pg_dump during binary upgrade" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> While looking at pg_dump performance today I noticed that pg_dump fails to\n> clear query results in binary_upgrade_set_pg_class_oids during binary upgrade\n> mode. 9a974cbcba00 moved the query to the outer block, but left the PQclear\n> and query buffer destruction in the is_index conditional, making it not always\n> be executed. 353708e1fb2d fixed the leak of the query buffer but left the\n> PGresult leak. The attached fixes the PGresult leak which when upgrading large\n> schemas can be non-trivial.\n\n+1 --- in 353708e1f I was just fixing what Coverity complained about.\nI wonder why it missed this; it does seem to understand that PGresult\nleaks are a thing. But anyway, I missed it too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 May 2024 14:46:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix PGresult leak in pg_dump during binary upgrade" }, { "msg_contents": "> On 15 May 2024, at 20:46, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> While looking at pg_dump performance today I noticed that pg_dump fails to\n>> clear query results in binary_upgrade_set_pg_class_oids during binary upgrade\n>> mode. 9a974cbcba00 moved the query to the outer block, but left the PQclear\n>> and query buffer destruction in the is_index conditional, making it not always\n>> be executed. 353708e1fb2d fixed the leak of the query buffer but left the\n>> PGresult leak. The attached fixes the PGresult leak which when upgrading large\n>> schemas can be non-trivial.\n> \n> +1 --- in 353708e1f I was just fixing what Coverity complained about.\n> I wonder why it missed this; it does seem to understand that PGresult\n> leaks are a thing. But anyway, I missed it too.\n\nDone, backpatched to v15. Thanks for review!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 15 May 2024 23:04:59 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix PGresult leak in pg_dump during binary upgrade" } ]
[ { "msg_contents": "While seeing changes and new features of\nhttps://www.postgresql.org/docs/devel/release-17.html\nI see that there are too little links to other DOC pages, which would be\nuseful.\n\nThere are links to\n\"logical-replication\", \"sql-merge\", \"plpgsql\", \"libpq\" and\n\"pgstatstatements\"\n\nBut no one link is available to\nCOPY \"ON_ERROR ignore\", pg_dump, JSON_TABLE(), xmltext(), pg_basetype() ,\nand a lot of other important features. So, wouldn't it be good to have\ntheir own links, so the reader doesn't need to manually search for that\nfeature ?\n\nregards\nMarcos\n\nWhile seeing changes and new features ofhttps://www.postgresql.org/docs/devel/release-17.htmlI see that there are too little links to other DOC pages, which would be useful.There are links to\"logical-replication\", \"sql-merge\", \"plpgsql\", \"libpq\" and \"pgstatstatements\"But no one link is available to COPY \"ON_ERROR ignore\", pg_dump, JSON_TABLE(), xmltext(), pg_basetype() , and a lot of other important features. So, wouldn't it be good to have their own links, so the reader doesn't need to manually search for that feature ?regardsMarcos", "msg_date": "Wed, 15 May 2024 16:50:47 -0300", "msg_from": "Marcos Pegoraro <[email protected]>", "msg_from_op": true, "msg_subject": "More links on release-17.html" }, { "msg_contents": "On Wed, May 15, 2024 at 04:50:47PM -0300, Marcos Pegoraro wrote:\n> While seeing changes and new features of\n> https://www.postgresql.org/docs/devel/release-17.html\n> I see that there are too little links to other DOC pages, which would be\n> useful.\n> \n> There are links to\n> \"logical-replication\", \"sql-merge\", \"plpgsql\", \"libpq\" and \"pgstatstatements\"\n> \n> But no one link is available to \n> COPY \"ON_ERROR ignore\", pg_dump, JSON_TABLE(), xmltext(), pg_basetype() , and a\n> lot of other important features. So, wouldn't it be good to have their own\n> links, so the reader doesn't need to manually search for that feature ?\n\nYes, it would be nice to have them. I will be looking for them in the\ncoming weeks. I usually choose the closest link.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 15 May 2024 23:14:48 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More links on release-17.html" } ]
[ { "msg_contents": "Prompted by an off-list bugreport of pg_upgrade hanging (which turned out to be\nslow enough to be perceived to hang) for large schemas I had a look at pg_dump\nperformance during --binary-upgrade mode today. My initial take was to write\nmore or less exactly what Nathan did in [0], only to realize that it was a)\nalready proposed and b) I had even reviewed it. Doh.\n\nThe next attempt was to reduce more per-object queries from binary upgrade, and\nthe typarray lookup binary_upgrade_set_type_oids_by_type_oid seemed like a good\ncandidate for a cache lookup. Since already cache type TypeInfo objects, if we\nadd typarray to TypeInfo we can use the existing lookup code.\n\nAs a baseline, pg_dump dumps a synthetic workload of 10,000 (empty) relations\nwith a width of 1-10 columns:\n\n$ time ./bin/pg_dump --schema-only --quote-all-identifiers --format=custom \\\n --file a postgres > /dev/null\n\nreal\t0m1.256s\nuser\t0m0.273s\nsys\t0m0.059s\n\nThe same dump in binary upgrade mode runs significantly slower:\n\n$ time ./bin/pg_dump --schema-only --quote-all-identifiers --binary-upgrade \\\n --format=custom --file a postgres > /dev/null\n\nreal\t1m9.921s\nuser\t0m0.782s\nsys\t0m0.436s\n\nWith the typarray caching from the patch attached here added:\n\n$ time ./bin/pg_dump --schema-only --quote-all-identifiers --binary-upgrade \\\n --format=custom --file b postgres > /dev/null\n\nreal\t0m45.210s\nuser\t0m0.655s\nsys\t0m0.299s\n\nWith the typarray caching from the patch attached here added *and* Nathan's\npatch from [0] added:\n\n$ time ./bin/pg_dump --schema-only --quote-all-identifiers --binary-upgrade \\\n --format=custom --file a postgres > /dev/null\n\nreal\t0m1.566s\nuser\t0m0.309s\nsys\t0m0.080s\n\nThe combination of these patches thus puts binary uphrade mode almost on par\nwith a plain dump, which has the potential to make upgrades of large schemas\nfaster. Parallel-parking this patch with Nathan's in the July CF, just wanted\nto type it up while it was fresh in my mind.\n\n--\nDaniel Gustafsson\n\n[0] https://commitfest.postgresql.org/48/4936/", "msg_date": "Wed, 15 May 2024 22:15:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "More performance improvements for pg_dump in binary upgrade mode" }, { "msg_contents": "On Wed, May 15, 2024 at 10:15:13PM +0200, Daniel Gustafsson wrote:\n> With the typarray caching from the patch attached here added *and* Nathan's\n> patch from [0] added:\n> \n> $ time ./bin/pg_dump --schema-only --quote-all-identifiers --binary-upgrade \\\n> --format=custom --file a postgres > /dev/null\n> \n> real\t0m1.566s\n> user\t0m0.309s\n> sys\t0m0.080s\n> \n> The combination of these patches thus puts binary uphrade mode almost on par\n> with a plain dump, which has the potential to make upgrades of large schemas\n> faster. Parallel-parking this patch with Nathan's in the July CF, just wanted\n> to type it up while it was fresh in my mind.\n\nNice! I'll plan on taking a closer look at this one. I have a couple\nother ideas in-flight (e.g., parallelizing the once-in-each-database\noperations with libpq's asynchronous APIs) that I'm hoping to post soon,\ntoo. 
v18 should have a lot of good stuff for pg_upgrade...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 May 2024 15:21:36 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More performance improvements for pg_dump in binary upgrade mode" }, { "msg_contents": "On Wed, May 15, 2024 at 03:21:36PM -0500, Nathan Bossart wrote:\n> Nice! I'll plan on taking a closer look at this one.\n\nLGTM. I've marked the commitfest entry as ready-for-committer.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 4 Jun 2024 21:39:24 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More performance improvements for pg_dump in binary upgrade mode" }, { "msg_contents": "> On 5 Jun 2024, at 04:39, Nathan Bossart <[email protected]> wrote:\n> \n> On Wed, May 15, 2024 at 03:21:36PM -0500, Nathan Bossart wrote:\n>> Nice! I'll plan on taking a closer look at this one.\n> \n> LGTM. I've marked the commitfest entry as ready-for-committer.\n\nThanks for review, committed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 2 Sep 2024 10:59:11 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More performance improvements for pg_dump in binary upgrade mode" } ]
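The typarray change follows the same recipe as the other binary-upgrade speedups discussed here: fold a per-object catalog lookup into a query pg_dump already runs once. Roughly (these are simplified shapes, not the literal queries pg_dump issues, and 16412 stands in for whatever type OID is being dumped):

    -- before: one round trip per dumped type while writing binary-upgrade commands
    SELECT typarray FROM pg_catalog.pg_type WHERE oid = 16412;

    -- after: typarray rides along in the single pass that already loads all types
    SELECT oid, typname, typnamespace, typarray
    FROM pg_catalog.pg_type;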
[ { "msg_contents": "Hi hackers,\n\nThere are lots of subscription options listed on the CREATE\nSUBSCRIPTION page [1].\n\nAlthough these boolean options are capable of accepting different\nvalues like \"1|0\", \"on|off\", \"true|false\", here they are all described\nonly using values \"true|false\".\n\n~\n\nIMO this consistent way of describing the boolean option values ought\nto be followed also on the ALTER SUBSCRIPTION page [2]. Specifically,\nthe ALTER SUBSCRIPTION page has one mention of \"SET (failover =\non|off)\" which I think should be changed to say \"SET (failover =\ntrue|false)\"\n\nNow this little change hardly seems important, but actually, it is\nmotivated by another thread [3] under development where this ALTER\nSUBSCRIPTION \"(failover = on|off)\" was copied again, thereby making\nthe consistency between the CREATE SUBSCRIPTION and ALTER SUBSCRIPTION\npages grow further apart, so I think it is best to nip that in the bud\nand simply use \"true|false\" values everywhere.\n\nPSA a patch to make this proposed change.\n\n======\n[1] https://www.postgresql.org/docs/devel/sql-createsubscription.html\n[2] https://www.postgresql.org/docs/devel/sql-altersubscription.html\n[3] https://www.postgresql.org/message-id/flat/8fab8-65d74c80-1-2f28e880%4039088166\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 16 May 2024 10:28:49 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Docs: Always use true|false instead of sometimes on|off for the\n subscription options" }, { "msg_contents": "On Thu, 16 May 2024 at 12:29, Peter Smith <[email protected]> wrote:\n> There are lots of subscription options listed on the CREATE\n> SUBSCRIPTION page [1].\n>\n> Although these boolean options are capable of accepting different\n> values like \"1|0\", \"on|off\", \"true|false\", here they are all described\n> only using values \"true|false\".\n\nIf you want to do this, what's the reason to limit it to just this one\npage of the docs?\n\nIf the following is anything to go by, it doesn't seem we're very\nconsistent about this over the entire documentation.\n\ndoc$ git grep \"<literal>on</literal>\" | wc -l\n122\n\ndoc$ git grep \"<literal>true</literal>\" | wc -l\n222\n\nAnd:\n\ndoc$ git grep \"<literal>off</literal>\" | wc -l\n102\n\ndoc$ git grep \"<literal>false</literal>\" | wc -l\n162\n\nI think unless we're going to standardise on something then there's\nnot much point in adjusting individual cases. IMO, there could be an\nendless stream of follow-on patches as a result of accepting this.\n\nDavid\n\n\n", "msg_date": "Thu, 16 May 2024 17:11:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs: Always use true|false instead of sometimes on|off for the\n subscription options" }, { "msg_contents": "On Thu, May 16, 2024 at 3:11 PM David Rowley <[email protected]> wrote:\n>\n> On Thu, 16 May 2024 at 12:29, Peter Smith <[email protected]> wrote:\n> > There are lots of subscription options listed on the CREATE\n> > SUBSCRIPTION page [1].\n> >\n> > Although these boolean options are capable of accepting different\n> > values like \"1|0\", \"on|off\", \"true|false\", here they are all described\n> > only using values \"true|false\".\n>\n> If you want to do this, what's the reason to limit it to just this one\n> page of the docs?\n\nYeah, I had a vested interest in this one place because I've been\nreviewing the other thread [1] that was mentioned above. 
If that other\nthread chooses \"true|false\" then putting \"true|false\" adjacent to\nanother \"on|off\" will look silly. But if that other thread adopts the\nexisting 'failover=on|off' values then it will diverge even further\nfrom being consistent with the CREATE SUBSCRIPTION page.\nUnfortunately, that other thread cannot take it upon itself to make\nthis change because it has nothing to do with the 'failover' option,\nSo I saw no choice but to post an independent patch for this.\n\nI checked all the PUBLICATION/SUBSCRIPTION reference pages and this\nwas the only inconsistent value that I found. But I might be mistaken.\n\n>\n> If the following is anything to go by, it doesn't seem we're very\n> consistent about this over the entire documentation.\n>\n> doc$ git grep \"<literal>on</literal>\" | wc -l\n> 122\n>\n> doc$ git grep \"<literal>true</literal>\" | wc -l\n> 222\n>\n> And:\n>\n> doc$ git grep \"<literal>off</literal>\" | wc -l\n> 102\n>\n> doc$ git grep \"<literal>false</literal>\" | wc -l\n> 162\n>\n\nHmm. I'm not entirely sure if those stats are meaningful because I'm\nnot saying option values should avoid \"on|off\" -- the point was I just\nsuggesting they should be used consistent with where they are\ndescribed. For example, the CREATE SUBSCRIPTION page describes every\nboolean option value as \"true|false\", so let's use \"true|false\" in\nevery other docs place where those are mentioned. OTOH, other options\non other pages may be described as \"on|off\" which is fine by me, but\nthen those should similarly be using \"on|off\" again wherever they are\nmentioned.\n\n> I think unless we're going to standardise on something then there's\n> not much point in adjusting individual cases. IMO, there could be an\n> endless stream of follow-on patches as a result of accepting this.\n>\n\nStandardisation might be ideal, but certainly, I'm not going to\nattempt to make a giant patch that impacts the entire documentation\njust for this one small improvement.\n\nIt seems a shame if \"perfect\" becomes the enemy of \"good\"; How else do\nyou suggest I can make the ALTER SUBSCRIPTION page better? If this\none-line change is rejected then the most likely outcome is nothing\nwill ever happen to change it.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 16 May 2024 17:04:38 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Docs: Always use true|false instead of sometimes on|off for the\n subscription options" }, { "msg_contents": "On Thu, 16 May 2024 at 19:05, Peter Smith <[email protected]> wrote:\n>\n> On Thu, May 16, 2024 at 3:11 PM David Rowley <[email protected]> wrote:\n> > If you want to do this, what's the reason to limit it to just this one\n> > page of the docs?\n>\n> Yeah, I had a vested interest in this one place because I've been\n> reviewing the other thread [1] that was mentioned above. If that other\n> thread chooses \"true|false\" then putting \"true|false\" adjacent to\n> another \"on|off\" will look silly.\n\nOK, looking a bit further I see this option is new to v17. 
After a\nbit more study of the sgml, I agree that it's worth changing this.\n\nI've pushed your patch.\n\nDavid\n\n\n", "msg_date": "Fri, 17 May 2024 00:42:13 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs: Always use true|false instead of sometimes on|off for the\n subscription options" }, { "msg_contents": "Peter Smith <[email protected]> writes:\n> Yeah, I had a vested interest in this one place because I've been\n> reviewing the other thread [1] that was mentioned above. If that other\n> thread chooses \"true|false\" then putting \"true|false\" adjacent to\n> another \"on|off\" will look silly. But if that other thread adopts the\n> existing 'failover=on|off' values then it will diverge even further\n> from being consistent with the CREATE SUBSCRIPTION page.\n\nIt's intentional that we allow more than one spelling of boolean\nvalues. I can see some value in being consistent within nearby\nexamples, but I don't agree at all that it should be uniform\nacross all our docs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 12:40:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs: Always use true|false instead of sometimes on|off for the\n subscription options" }, { "msg_contents": "On Thu, May 16, 2024 at 10:42 PM David Rowley <[email protected]> wrote:\n>\n> On Thu, 16 May 2024 at 19:05, Peter Smith <[email protected]> wrote:\n> >\n> > On Thu, May 16, 2024 at 3:11 PM David Rowley <[email protected]> wrote:\n> > > If you want to do this, what's the reason to limit it to just this one\n> > > page of the docs?\n> >\n> > Yeah, I had a vested interest in this one place because I've been\n> > reviewing the other thread [1] that was mentioned above. If that other\n> > thread chooses \"true|false\" then putting \"true|false\" adjacent to\n> > another \"on|off\" will look silly.\n>\n> OK, looking a bit further I see this option is new to v17. After a\n> bit more study of the sgml, I agree that it's worth changing this.\n>\n> I've pushed your patch.\n>\n\nThanks very much for pushing. It was just a one-time thing -- I won't\ngo looking for more examples like it.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 17 May 2024 09:07:31 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Docs: Always use true|false instead of sometimes on|off for the\n subscription options" } ]
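As noted in the thread, the server deliberately accepts several spellings for boolean options; the patch only makes the documentation use one of them consistently. All of the following are equivalent (the subscription name is illustrative, and depending on the option and version the subscription may need to be disabled first):

    ALTER SUBSCRIPTION sub SET (failover = true);
    ALTER SUBSCRIPTION sub SET (failover = on);
    ALTER SUBSCRIPTION sub SET (failover = 1);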
[ { "msg_contents": "Hi,\r\n\r\nAttached is a copy of the PostgreSQL 17 Beta 1 release announcement \r\ndraft. This contains a user-facing summary of some of the features that \r\nwill be available in the Beta, as well as a call to test. I've made an \r\neffort to group them logically around the different workflows they affect.\r\n\r\nA few notes:\r\n\r\n* The section with the features is not 80-char delimited. I will do that \r\nbefore the final copy\r\n\r\n* There is an explicit callout that we've added in the SQL/JSON features \r\nthat were previously reverted in PG15. I want to ensure we're \r\ntransparent about that, but also use it as a hook to get people testing.\r\n\r\nWhen reviewing:\r\n\r\n* Please check for correctness of feature descriptions, keeping in mind \r\nthis is targeted for a general audience\r\n\r\n* Please indicate if you believe there's a notable omission, or if we \r\nshould omit any of these callouts\r\n\r\n* Please indicate if a description is confusing - I'm happy to rewrite \r\nto ensure it's clearer.\r\n\r\nPlease provide feedback no later than Wed 2024-05-22 18:00 UTC. As the \r\nbeta release takes some extra effort, I want to ensure all changes are \r\nin with time to spare before release day.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 15 May 2024 21:45:35 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Thanks for writing that up. It's always exciting to see this each year.\n\nOn Thu, 16 May 2024 at 13:45, Jonathan S. Katz <[email protected]> wrote:\n> * Please indicate if you believe there's a notable omission, or if we\n> should omit any of these callouts\n\nI'd say the streaming read stuff added in b5a9b18cd and subsequent\ncommits like b7b0f3f27 and 041b96802 are worth a mention. I'd be happy\nto see this over the IS NOT NULL qual stuff that I worked on in there\nor even the AVX512 bit counting. Speeding up a backwater aggregate\nfunction is nice, but IMO, not compatible with reducing the number\nreads.\n\nThere's some benchmarking in a youtube video:\nhttps://youtu.be/QAYzWAlxCYc?si=L0UT6Lrf067ZBv46&t=237\n\n> * Please indicate if a description is confusing - I'm happy to rewrite\n> to ensure it's clearer.\n>\n> Please provide feedback no later than Wed 2024-05-22 18:00 UTC.\n\nThe only other thing I saw from a quick read was a stray \"the\" in \"the\ncopy proceed even if the there is an error inserting a row.\"\n\nDavid\n\n\n", "msg_date": "Thu, 16 May 2024 14:42:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Hi Jonathan\n\nDid the review and did not find any issues.\n\nRegards\nKashif Zeeshan\nBitnine Global\n\nOn Thu, May 16, 2024 at 6:45 AM Jonathan S. Katz <[email protected]>\nwrote:\n\n> Hi,\n>\n> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement\n> draft. This contains a user-facing summary of some of the features that\n> will be available in the Beta, as well as a call to test. I've made an\n> effort to group them logically around the different workflows they affect.\n>\n> A few notes:\n>\n> * The section with the features is not 80-char delimited. I will do that\n> before the final copy\n>\n> * There is an explicit callout that we've added in the SQL/JSON features\n> that were previously reverted in PG15. 
I want to ensure we're\n> transparent about that, but also use it as a hook to get people testing.\n>\n> When reviewing:\n>\n> * Please check for correctness of feature descriptions, keeping in mind\n> this is targeted for a general audience\n>\n> * Please indicate if you believe there's a notable omission, or if we\n> should omit any of these callouts\n>\n> * Please indicate if a description is confusing - I'm happy to rewrite\n> to ensure it's clearer.\n>\n> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. As the\n> beta release takes some extra effort, I want to ensure all changes are\n> in with time to spare before release day.\n>\n> Thanks,\n>\n> Jonathan\n>\n\nHi JonathanDid the review and did not find any issues.RegardsKashif ZeeshanBitnine GlobalOn Thu, May 16, 2024 at 6:45 AM Jonathan S. Katz <[email protected]> wrote:Hi,\n\nAttached is a copy of the PostgreSQL 17 Beta 1 release announcement \ndraft. This contains a user-facing summary of some of the features that \nwill be available in the Beta, as well as a call to test. I've made an \neffort to group them logically around the different workflows they affect.\n\nA few notes:\n\n* The section with the features is not 80-char delimited. I will do that \nbefore the final copy\n\n* There is an explicit callout that we've added in the SQL/JSON features \nthat were previously reverted in PG15. I want to ensure we're \ntransparent about that, but also use it as a hook to get people testing.\n\nWhen reviewing:\n\n* Please check for correctness of feature descriptions, keeping in mind \nthis is targeted for a general audience\n\n* Please indicate if you believe there's a notable omission, or if we \nshould omit any of these callouts\n\n* Please indicate if a description is confusing - I'm happy to rewrite \nto ensure it's clearer.\n\nPlease provide feedback no later than Wed 2024-05-22 18:00 UTC. As the \nbeta release takes some extra effort, I want to ensure all changes are \nin with time to spare before release day.\n\nThanks,\n\nJonathan", "msg_date": "Thu, 16 May 2024 09:11:04 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Thu, May 16, 2024, 02:45 Jonathan S. Katz <[email protected]> wrote:\n\n> Hi,\n>\n> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement\n> draft. This contains a user-facing summary of some of the features that\n> will be available in the Beta, as well as a call to test. I've made an\n> effort to group them logically around the different workflows they affect.\n>\n> A few notes:\n>\n> * The section with the features is not 80-char delimited. I will do that\n> before the final copy\n>\n> * There is an explicit callout that we've added in the SQL/JSON features\n> that were previously reverted in PG15. I want to ensure we're\n> transparent about that, but also use it as a hook to get people testing.\n>\n> When reviewing:\n>\n> * Please check for correctness of feature descriptions, keeping in mind\n> this is targeted for a general audience\n>\n> * Please indicate if you believe there's a notable omission, or if we\n> should omit any of these callouts\n>\n> * Please indicate if a description is confusing - I'm happy to rewrite\n> to ensure it's clearer.\n>\n> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. 
As the\n> beta release takes some extra effort, I want to ensure all changes are\n> in with time to spare before release day.\n>\n\n\"Now as of PostgreSQL 17, you can now use parallel index builds for [BRIN](\nhttps://www.postgresql.org/docs/17/brin.html) indexes.\"\n\nThe 2nd \"now\" is redundant.\n\n\n\"Finally, PostgreSQL 17 adds more explicitly SIMD instructions, including\nAVX-512 support for the [`bit_count](\nhttps://www.postgresql.org/docs/17/functions-bitstring.html) function.\"\n\nWould \"SIMD-explicit instructions\" be better? Also, I know you may not be\nusing markdown for the final version, but the bit_count backtick isn't\nmatched by a closing backtick.\n\n\n\"[`COPY`](https://www.postgresql.org/docs/17/sql-copy.html), used to\nefficiently bulk load data into PostgreSQL\"\n\nThe \"used to\" makes me stumble into reading it as meaning \"it previously\ncould efficiently bulk load data\".\n\nPerhaps just add a \"which is\" before \"used\"?\n\n\n\"PostgreSQL 17 includes a built-in collation provider that provides similar\nsemantics to the `C` collation provided by libc.\"\n\n\"provider\", \"provides\", and \"provided\" feels too repetitive.\n\nHow about, \"PostgreSQL 17 includes a built-in collation provider with\nsemantics similar to the `C` collation offered by libc.\"?\n\n\nRegards\n\nThom\n\nOn Thu, May 16, 2024, 02:45 Jonathan S. Katz <[email protected]> wrote:Hi,\n\nAttached is a copy of the PostgreSQL 17 Beta 1 release announcement \ndraft. This contains a user-facing summary of some of the features that \nwill be available in the Beta, as well as a call to test. I've made an \neffort to group them logically around the different workflows they affect.\n\nA few notes:\n\n* The section with the features is not 80-char delimited. I will do that \nbefore the final copy\n\n* There is an explicit callout that we've added in the SQL/JSON features \nthat were previously reverted in PG15. I want to ensure we're \ntransparent about that, but also use it as a hook to get people testing.\n\nWhen reviewing:\n\n* Please check for correctness of feature descriptions, keeping in mind \nthis is targeted for a general audience\n\n* Please indicate if you believe there's a notable omission, or if we \nshould omit any of these callouts\n\n* Please indicate if a description is confusing - I'm happy to rewrite \nto ensure it's clearer.\n\nPlease provide feedback no later than Wed 2024-05-22 18:00 UTC. As the \nbeta release takes some extra effort, I want to ensure all changes are \nin with time to spare before release day.\"Now as of PostgreSQL 17, you can now use parallel index builds for [BRIN](https://www.postgresql.org/docs/17/brin.html) indexes.\"The 2nd \"now\" is redundant.\"Finally, PostgreSQL 17 adds more explicitly SIMD instructions, including AVX-512 support for the [`bit_count](https://www.postgresql.org/docs/17/functions-bitstring.html) function.\"Would \"SIMD-explicit instructions\" be better? 
Also, I know you may not be using markdown for the final version, but the bit_count backtick isn't matched by a closing backtick.\"[`COPY`](https://www.postgresql.org/docs/17/sql-copy.html), used to efficiently bulk load data into PostgreSQL\"The \"used to\" makes me stumble into reading it as meaning \"it previously could efficiently bulk load data\".Perhaps just add a \"which is\" before \"used\"?\"PostgreSQL 17 includes a built-in collation provider that provides similar semantics to the `C` collation provided by libc.\"\"provider\", \"provides\", and \"provided\" feels too repetitive.How about, \"PostgreSQL 17 includes a built-in collation provider with semantics similar to the `C` collation offered by libc.\"?RegardsThom", "msg_date": "Thu, 16 May 2024 06:10:57 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Hi,\n\nOn Wed, May 15, 2024 at 09:45:35PM -0400, Jonathan S. Katz wrote:\n> Hi,\n> \n> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement draft.\n\nThanks for working on it!\n\nI've one comment:\n\n> PostgreSQL 17 also introduces a new view, [`pg_wait_events`](https://www.postgresql.org/docs/17/view-pg-wait-events.html), which provides descriptions about wait events and can be combined with `pg_stat_activity` to give more insight into an operation.\n\nInstead of \"to give more insight into an operation\", what about \"to give more\ninsight about what a session is waiting for (should it be active)\"?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 May 2024 05:15:58 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Hello,\n\nI am trying to open the 17 docs but it looks removed. Getting\nfollowing message \"Page not found\"\n\nhttps://www.postgresql.org/docs/17/\n\n\nRegards,\nZaid Shabbir\n\nOn Thu, May 16, 2024 at 10:16 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, May 15, 2024 at 09:45:35PM -0400, Jonathan S. Katz wrote:\n> > Hi,\n> >\n> > Attached is a copy of the PostgreSQL 17 Beta 1 release announcement draft.\n>\n> Thanks for working on it!\n>\n> I've one comment:\n>\n> > PostgreSQL 17 also introduces a new view, [`pg_wait_events`](https://www.postgresql.org/docs/17/view-pg-wait-events.html), which provides descriptions about wait events and can be combined with `pg_stat_activity` to give more insight into an operation.\n>\n> Instead of \"to give more insight into an operation\", what about \"to give more\n> insight about what a session is waiting for (should it be active)\"?\n>\n> Regards,\n>\n> --\n> Bertrand Drouvot\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>", "msg_date": "Thu, 16 May 2024 10:36:49 +0500", "msg_from": "zaidagilist <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Thu, May 16, 2024, 06:37 zaidagilist <[email protected]> wrote:\n\n> Hello,\n>\n> I am trying to open the 17 docs but it looks removed. 
Getting\n> following message \"Page not found\"\n>\n> https://www.postgresql.org/docs/17/\n>\n>\n> Regards,\n> Zaid Shabbir\n>\n\nThat link isn't set up yet, but will be (or should be) when the\nannouncement goes out.\n\nRegards\n\nThom\n\n>\n\nOn Thu, May 16, 2024, 06:37 zaidagilist <[email protected]> wrote:Hello,\n\nI am trying to open the 17 docs but it looks removed. Getting\nfollowing message \"Page not found\"\n\nhttps://www.postgresql.org/docs/17/\n\n\nRegards,\nZaid ShabbirThat link isn't set up yet, but will be (or should be) when the announcement goes out.Regards Thom", "msg_date": "Thu, 16 May 2024 06:48:52 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Thu, 16 May 2024 at 17:37, zaidagilist <[email protected]> wrote:\n> I am trying to open the 17 docs but it looks removed. Getting\n> following message \"Page not found\"\n>\n> https://www.postgresql.org/docs/17/\n\nIt's called \"devel\" for \"development\" until we branch sometime before July:\n\nhttps://www.postgresql.org/docs/devel/\n\nDavid\n\n\n", "msg_date": "Thu, 16 May 2024 18:19:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Thu, 16 May 2024 at 03:45, Jonathan S. Katz <[email protected]> wrote:\n> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement\n> draft.\n\nI think we can quickly mention c4ab7da6061 in the COPY paragraph, in\nsome benchmarks it improved perf by close to 2x. Something like this:\n\"has improved performance in PostgreSQL 17 when the source encoding\nmatches the destination encoding *and when sending large rows from\nserver to client*\"\n\nAlso, I think it's a bit weird to put the current COPY paragraph under\nDeveloper Experience. I think if you want to keep it there instead of\nmove it to the per section, we should put the line about IGNORE_ERROR\nfirst instead of the perf improvements. Now the IGNORE_ERROR addition\nseems more of an afterthought.\n\ns/IGNORE_ERROR/ON_ERROR\n\nI think it would be good to clarify if the following applies when\nupgrading from or to PostgreSQL 17:\n\"Starting with PostgreSQL 17, you no longer need to drop logical\nreplication slots when using pg_upgrade\"\n\nFinally, I personally would have included a lot more links for the new\nitems in this document. Some that would benefit from being a link\nimho:\n- pg_createsubscriber\n- JSON_TABLE\n- SQL/JSON constructor\n- SQL/JSON query functions\n- ON_ERROR\n- sslnegotiation\n- PQchangePassword\n- pg_maintain\n\n\n", "msg_date": "Thu, 16 May 2024 12:41:50 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/15/24 21:45, Jonathan S. Katz wrote:\n> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. 
As the\n> beta release takes some extra effort, I want to ensure all changes are\n> in with time to spare before release day.\n\n\"You can find information about all of the features and changes found in\nPostgreSQL 17\"\n\nSounds repetitive; maybe:\n\n\"Information about all of the features and changes in PostgreSQL 17 can \nbe found in the [release notes]\"\n\n\n\"more explicitly SIMD instructions\" I think ought to be \"more explicit \nSIMD instructions\"\n\n\n\"PostgreSQL 17 includes a built-in collation provider that provides \nsimilar semantics to the `C` collation provided by libc.\"\n\nI think that needs to mention UTF-8 encoding somehow, and \"provided by \nlibc\" is not really true; maybe:\n\n\"PostgreSQL 17 includes a built-in collation provider that provides \nsimilar sorting semantics to the `C` collation except with UTF-8 \nencoding rather than SQL_ASCII.\"\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 08:05:58 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/16/24 08:05, Joe Conway wrote:\n> On 5/15/24 21:45, Jonathan S. Katz wrote:\n>> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. As the\n>> beta release takes some extra effort, I want to ensure all changes are\n>> in with time to spare before release day.\n\n\"`EXPLAIN` can now show how much time is spent for I/O block reads and \nwrites\"\n\nIs that really EXPLAIN, or rather EXPLAIN ANALYZE?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 10:04:02 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/15/24 10:42 PM, David Rowley wrote:\r\n> Thanks for writing that up. It's always exciting to see this each year.\r\n> \r\n> On Thu, 16 May 2024 at 13:45, Jonathan S. Katz <[email protected]> wrote:\r\n>> * Please indicate if you believe there's a notable omission, or if we\r\n>> should omit any of these callouts\r\n> \r\n> I'd say the streaming read stuff added in b5a9b18cd and subsequent\r\n> commits like b7b0f3f27 and 041b96802 are worth a mention. I'd be happy\r\n> to see this over the IS NOT NULL qual stuff that I worked on in there\r\n> or even the AVX512 bit counting. Speeding up a backwater aggregate\r\n> function is nice, but IMO, not compatible with reducing the number\r\n> reads.\r\n> There's some benchmarking in a youtube video:\r\n> https://youtu.be/QAYzWAlxCYc?si=L0UT6Lrf067ZBv46&t=237\r\n\r\nNice! Definitely agree on including this - it wasn't initially clear to \r\nme on the read of the release notes. I'll update it. 
Please see in the \r\nnext revision (will posted upthread), proposed text here for \r\nconvenience, as I'm not sure I'm appropriately capturing it:\r\n\r\n==\r\nThis release introduces adds an interface to stream I/O, and can show \r\nperformance improvements when performing sequential scans and running \r\n[`ANALYZE`](https://www.postgresql.org/docs/17/sql-analyze.html).\r\n==\r\n\r\nThe AVX-512 bit counting showed solid impact[1] on the binary distance \r\nfunctions in pgvector (I have to re-run again w/v17, as I seem to recall \r\nseeing some numbers that boosted it 5-7x [but recall isn't 100% ;)]).\r\n\r\n>> * Please indicate if a description is confusing - I'm happy to rewrite\r\n>> to ensure it's clearer.\r\n>>\r\n>> Please provide feedback no later than Wed 2024-05-22 18:00 UTC.\r\n> \r\n> The only other thing I saw from a quick read was a stray \"the\" in \"the\r\n> copy proceed even if the there is an error inserting a row.\"\r\n\r\nThanks!\r\n\r\nJonathan\r\n\r\n[1] https://github.com/pgvector/pgvector/pull/519", "msg_date": "Sun, 19 May 2024 17:02:10 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/16/24 1:10 AM, Thom Brown wrote:\r\n> On Thu, May 16, 2024, 02:45 Jonathan S. Katz <[email protected] \r\n> <mailto:[email protected]>> wrote:\r\n> \r\n> Hi,\r\n> \r\n> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement\r\n> draft. This contains a user-facing summary of some of the features that\r\n> will be available in the Beta, as well as a call to test. I've made an\r\n> effort to group them logically around the different workflows they\r\n> affect.\r\n> \r\n> A few notes:\r\n> \r\n> * The section with the features is not 80-char delimited. I will do\r\n> that\r\n> before the final copy\r\n> \r\n> * There is an explicit callout that we've added in the SQL/JSON\r\n> features\r\n> that were previously reverted in PG15. I want to ensure we're\r\n> transparent about that, but also use it as a hook to get people testing.\r\n> \r\n> When reviewing:\r\n> \r\n> * Please check for correctness of feature descriptions, keeping in mind\r\n> this is targeted for a general audience\r\n> \r\n> * Please indicate if you believe there's a notable omission, or if we\r\n> should omit any of these callouts\r\n> \r\n> * Please indicate if a description is confusing - I'm happy to rewrite\r\n> to ensure it's clearer.\r\n> \r\n> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. As the\r\n> beta release takes some extra effort, I want to ensure all changes are\r\n> in with time to spare before release day.\r\n> \r\n> \r\n> \"Now as of PostgreSQL 17, you can now use parallel index builds for \r\n> [BRIN](https://www.postgresql.org/docs/17/brin.html \r\n> <https://www.postgresql.org/docs/17/brin.html>) indexes.\"\r\n> \r\n> The 2nd \"now\" is redundant.\r\n> \r\n> \r\n> \"Finally, PostgreSQL 17 adds more explicitly SIMD instructions, \r\n> including AVX-512 support for the \r\n> [`bit_count](https://www.postgresql.org/docs/17/functions-bitstring.html \r\n> <https://www.postgresql.org/docs/17/functions-bitstring.html>) function.\"\r\n> \r\n> Would \"SIMD-explicit instructions\" be better? 
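Since bit_count keeps coming up in the wording here: the function itself is not new -- it has accepted bit strings and bytea since PostgreSQL 14 -- what 17 changes is the implementation, which can use AVX-512 where the CPU supports it. A trivial smoke test (values picked arbitrarily):

    SELECT bit_count(B'11011000');          -- 4
    SELECT bit_count('\xdeadbeef'::bytea);  -- 24

The speedup only really shows on long bit/bytea values, e.g. bitmap- or vector-style payloads like the pgvector case mentioned upthread.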
Also, I know you may not \r\n> be using markdown for the final version, but the bit_count backtick \r\n> isn't matched by a closing backtick.\r\n> \r\n> \r\n> \"[`COPY`](https://www.postgresql.org/docs/17/sql-copy.html \r\n> <https://www.postgresql.org/docs/17/sql-copy.html>), used to efficiently \r\n> bulk load data into PostgreSQL\"\r\n> \r\n> The \"used to\" makes me stumble into reading it as meaning \"it previously \r\n> could efficiently bulk load data\".\r\n> \r\n> Perhaps just add a \"which is\" before \"used\"?\r\n> \r\n> \r\n> \"PostgreSQL 17 includes a built-in collation provider that provides \r\n> similar semantics to the `C` collation provided by libc.\"\r\n> \r\n> \"provider\", \"provides\", and \"provided\" feels too repetitive.\r\n> \r\n> How about, \"PostgreSQL 17 includes a built-in collation provider with \r\n> semantics similar to the `C` collation offered by libc.\"?\r\n\r\nThanks - I accepted (with modifications) most of the suggestions here. \r\nI'll include in the next version of the draft.\r\n\r\nJonathan", "msg_date": "Sun, 19 May 2024 17:07:07 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/16/24 1:15 AM, Bertrand Drouvot wrote:\r\n> Hi,\r\n> \r\n> On Wed, May 15, 2024 at 09:45:35PM -0400, Jonathan S. Katz wrote:\r\n>> Hi,\r\n>>\r\n>> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement draft.\r\n> \r\n> Thanks for working on it!\r\n> \r\n> I've one comment:\r\n> \r\n>> PostgreSQL 17 also introduces a new view, [`pg_wait_events`](https://www.postgresql.org/docs/17/view-pg-wait-events.html), which provides descriptions about wait events and can be combined with `pg_stat_activity` to give more insight into an operation.\r\n> \r\n> Instead of \"to give more insight into an operation\", what about \"to give more\r\n> insight about what a session is waiting for (should it be active)\"?\r\n\r\nI put:\r\n\r\n\"to give more in insight into why a session is blocked.\"\r\n\r\nDoes that work?\r\n\r\nJonathan", "msg_date": "Sun, 19 May 2024 17:10:10 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/16/24 6:41 AM, Jelte Fennema-Nio wrote:\r\n> On Thu, 16 May 2024 at 03:45, Jonathan S. Katz <[email protected]> wrote:\r\n>> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement\r\n>> draft.\r\n> \r\n> I think we can quickly mention c4ab7da6061 in the COPY paragraph, in\r\n> some benchmarks it improved perf by close to 2x. Something like this:\r\n> \"has improved performance in PostgreSQL 17 when the source encoding\r\n> matches the destination encoding *and when sending large rows from\r\n> server to client*\"\r\n\r\n(I'm going to make a note to test this with loading large vectors :) \r\nI've modified the text to reflect this. Please see the new language \r\nupthread.\r\n\r\n> Also, I think it's a bit weird to put the current COPY paragraph under\r\n> Developer Experience. I think if you want to keep it there instead of\r\n> move it to the per section, we should put the line about IGNORE_ERROR\r\n> first instead of the perf improvements. Now the IGNORE_ERROR addition\r\n> seems more of an afterthought.\r\n\r\nI don't agree with this. 
I think we want to push COPY as a developer \r\nfeature - I see a lot of people not utilizing COPY appropriate when it \r\nwould really benefit the performance of their app, and I think \r\nemphasizing it as the way to do bulk loads (while touting that it's even \r\nfaster!) will help make it more apparent.\r\n\r\n> s/IGNORE_ERROR/ON_ERROR\r\n\r\nThanks.\r\n\r\n> I think it would be good to clarify if the following applies when\r\n> upgrading from or to PostgreSQL 17:\r\n> \"Starting with PostgreSQL 17, you no longer need to drop logical\r\n> replication slots when using pg_upgrade\"\r\n\r\nAdjusted.\r\n\r\n> Finally, I personally would have included a lot more links for the new\r\n> items in this document. Some that would benefit from being a link\r\n> imho:\r\n> - pg_createsubscriber\r\n> - JSON_TABLE\r\n> - SQL/JSON constructor\r\n> - SQL/JSON query functions\r\n> - ON_ERROR\r\n> - sslnegotiation\r\n> - PQchangePassword\r\n> - pg_maintain\r\n\r\nI have to check if these have deep links or not, but I was planning to \r\nmake another pass once the copy (no pun intended) is closer to \r\nfinalized, so I don't have to constantly edit markdown.\r\n\r\nJonathan", "msg_date": "Sun, 19 May 2024 17:25:00 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/16/24 8:05 AM, Joe Conway wrote:\r\n> On 5/15/24 21:45, Jonathan S. Katz wrote:\r\n>> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. As the\r\n>> beta release takes some extra effort, I want to ensure all changes are\r\n>> in with time to spare before release day.\r\n> \r\n> \"You can find information about all of the features and changes found in\r\n> PostgreSQL 17\"\r\n> \r\n> Sounds repetitive; maybe:\r\n> \r\n> \"Information about all of the features and changes in PostgreSQL 17 can \r\n> be found in the [release notes]\"\r\n\r\nThe first is active voice, the suggestion passive. However, I tightened \r\nthe language:\r\n\r\nYou can find information about all of the PostgreSQL 17 features and \r\nchanges in the [release \r\nnotes](https://www.postgresql.org/docs/17/release-17.html):\r\n\r\n> \"PostgreSQL 17 includes a built-in collation provider that provides \r\n> similar semantics to the `C` collation provided by libc.\"\r\n> \r\n> I think that needs to mention UTF-8 encoding somehow, and \"provided by \r\n> libc\" is not really true; maybe:\r\n> \r\n> \"PostgreSQL 17 includes a built-in collation provider that provides \r\n> similar sorting semantics to the `C` collation except with UTF-8 \r\n> encoding rather than SQL_ASCII.\"\r\n\r\nWFM. Taken verbatim.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 19 May 2024 17:30:56 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/15/24 9:45 PM, Jonathan S. Katz wrote:\r\n> Hi,\r\n> \r\n> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement \r\n> draft. This contains a user-facing summary of some of the features that \r\n> will be available in the Beta, as well as a call to test. I've made an \r\n> effort to group them logically around the different workflows they affect.\r\n> \r\n> A few notes:\r\n> \r\n> * The section with the features is not 80-char delimited. 
I will do that \r\n> before the final copy\r\n> \r\n> * There is an explicit callout that we've added in the SQL/JSON features \r\n> that were previously reverted in PG15. I want to ensure we're \r\n> transparent about that, but also use it as a hook to get people testing.\r\n> \r\n> When reviewing:\r\n> \r\n> * Please check for correctness of feature descriptions, keeping in mind \r\n> this is targeted for a general audience\r\n> \r\n> * Please indicate if you believe there's a notable omission, or if we \r\n> should omit any of these callouts\r\n> \r\n> * Please indicate if a description is confusing - I'm happy to rewrite \r\n> to ensure it's clearer.\r\n> \r\n> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. As the \r\n> beta release takes some extra effort, I want to ensure all changes are \r\n> in with time to spare before release day.\r\n\r\nThanks for all the feedback to date. Please see the next revision. \r\nAgain, please provide feedback no later than 2024-05-22 18:00 UTC.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 19 May 2024 17:34:56 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Op 5/19/24 om 23:34 schreef Jonathan S. Katz:\n> On 5/15/24 9:45 PM, Jonathan S. Katz wrote:\n>> Hi,\n>>\n>> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement \n\n'This release introduces adds an interface' should be:\n'This release adds an interface'\n (or 'introduces'; just not both...)\n\nThanks,\n\nErik Rijkers\n\n\n\n", "msg_date": "Mon, 20 May 2024 00:15:54 +0200", "msg_from": "Erik Rijkers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Mon, 20 May 2024 at 09:35, Jonathan S. Katz <[email protected]> wrote:\n> Thanks for all the feedback to date. Please see the next revision.\n> Again, please provide feedback no later than 2024-05-22 18:00 UTC.\n\nThanks for the updates.\n\n> [`COPY`](https://www.postgresql.org/docs/17/sql-copy.html) is used to efficiently bulk load data into PostgreSQL, and with PostgreSQL 17 shows a 2x performance improvement when loading large rows.\n\nThe 2x thing mentioned by Jelte is for COPY TO rather than COPY FROM.\nSo I think \"exporting\" or \"sending large rows to the client\" rather\nthan \"loading\".\n\nThere's also a stray \"with\" in that sentence.\n\nDavid\n\n\n", "msg_date": "Mon, 20 May 2024 11:24:06 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Hi Jon,\n\nRegarding vacuum \"has shown up to a 6x improvement in overall time to\ncomplete its work\" -- I believe I've seen reported numbers close to\nthat only 1) when measuring the index phase in isolation or maybe 2)\nthe entire vacuum of unlogged tables with one, perfectly-correlated\nindex (testing has less variance with WAL out of the picture). I\nbelieve tables with many indexes would show a lot of improvement, but\nI'm not aware of testing that case specifically. Can you clarify where\n6x came from?\n\n\n", "msg_date": "Mon, 20 May 2024 13:58:48 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Hi,\n\nOn Sun, May 19, 2024 at 05:10:10PM -0400, Jonathan S. 
Katz wrote:\n> On 5/16/24 1:15 AM, Bertrand Drouvot wrote:\n> > Hi,\n> > \n> > On Wed, May 15, 2024 at 09:45:35PM -0400, Jonathan S. Katz wrote:\n> > > Hi,\n> > > \n> > > Attached is a copy of the PostgreSQL 17 Beta 1 release announcement draft.\n> > \n> > Thanks for working on it!\n> > \n> > I've one comment:\n> > \n> > > PostgreSQL 17 also introduces a new view, [`pg_wait_events`](https://www.postgresql.org/docs/17/view-pg-wait-events.html), which provides descriptions about wait events and can be combined with `pg_stat_activity` to give more insight into an operation.\n> > \n> > Instead of \"to give more insight into an operation\", what about \"to give more\n> > insight about what a session is waiting for (should it be active)\"?\n> \n> I put:\n> \n> \"to give more in insight into why a session is blocked.\"\n\nThanks!\n\n> \n> Does that work?\n> \n\nI think using \"waiting\" is better (as the view is \"pg_wait_events\" and the\njoin with pg_stat_activity would be on the \"wait_event_type\" and \"wait_event\"\ncolumns).\n\nThe reason I mentioned \"should it be active\" is because wait_event and wait_event_type\ncould be non empty in pg_stat_activity while the session is not in an active state\nanymore (then not waiting).\n\nA right query would be like the one in [1]:\n\n\"\nSELECT a.pid, a.wait_event, w.description\n FROM pg_stat_activity a JOIN\n pg_wait_events w ON (a.wait_event_type = w.type AND\n a.wait_event = w.name)\n WHERE a.wait_event is NOT NULL and a.state = 'active';\n\"\n\nmeans filtering on the \"active\" state too, and that's what the description\nproposal I made was trying to highlight.\n\n[1]: https://www.postgresql.org/docs/devel/monitoring-stats.html\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 20 May 2024 09:34:32 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 2024-May-19, Jonathan S. Katz wrote:\n\n> ### Query and Operational Performance Improvements\n\nIn this section I'd add mention the new GUCs to control SLRU memory\nsize, which is going to be a huge performance boon for cases where the\ncurrent fixed-size buffers cause bottlenecks. Perhaps something like\n\n\"Increase scalability of transaction, subtransaction and multixact\nshared memory buffer handling, and make their buffer sizes configurable\".\n\nI don't know if we have any published numbers of the performance\nimprovement achieved, but with this patch (or ancestor ones) some\nsystems go from completely unoperational to working perfectly fine.\nMaybe the best link is here\nhttps://www.postgresql.org/docs/devel/runtime-config-resource.html#GUC-MULTIXACT-MEMBER-BUFFERS\nthough exactly which GUC affects any particular user is workload-\ndependant, so I'm not sure how best to do it.\n\n> ### Developer Experience\n\nI think this section should also include the libpq query cancellation\nimprovements Jelte wrote. 
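Back on the SLRU sizing point above: one way to see which of these settings a given 17 build exposes, without asserting the exact list here, is to ask pg_settings (multixact_member_buffers is the one the linked docs anchor points at):

    SELECT name, setting, unit, context
    FROM pg_settings
    WHERE name ~ '(multixact|transaction|subtrans|commit_timestamp|notify|serializable).*_buffers'
    ORDER BY name;

Each can then be raised with ALTER SYSTEM SET, taking effect as its context column indicates, for workloads that bottleneck on one of those SLRUs.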
Maybe something like \"On the client side,\nPostgreSQL 17 provides better support for asynchronous and more secure\nquery cancellation routines in libpq.\" --> link to\nhttps://www.postgresql.org/docs/17/libpq-cancel.html\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n", "msg_date": "Mon, 20 May 2024 12:08:32 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 2024-May-16, David Rowley wrote:\n\n> On Thu, 16 May 2024 at 17:37, zaidagilist <[email protected]> wrote:\n> > I am trying to open the 17 docs but it looks removed. Getting\n> > following message \"Page not found\"\n> >\n> > https://www.postgresql.org/docs/17/\n> \n> It's called \"devel\" for \"development\" until we branch sometime before July:\n> \n> https://www.postgresql.org/docs/devel/\n\nHmm, but that would mean that the Beta1 announce would ship full of\nlinks that will remain broken until July. I'm not sure what the\nworkflow for this is, but I hope the /17/ URLs would become valid with\nbeta1, later this week.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No renuncies a nada. No te aferres a nada.\"\n\n\n", "msg_date": "Mon, 20 May 2024 12:11:37 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Mon, 20 May 2024 at 22:11, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-May-16, David Rowley wrote:\n>\n> > On Thu, 16 May 2024 at 17:37, zaidagilist <[email protected]> wrote:\n> > > I am trying to open the 17 docs but it looks removed. Getting\n> > > following message \"Page not found\"\n> > >\n> > > https://www.postgresql.org/docs/17/\n> >\n> > It's called \"devel\" for \"development\" until we branch sometime before July:\n> >\n> > https://www.postgresql.org/docs/devel/\n>\n> Hmm, but that would mean that the Beta1 announce would ship full of\n> links that will remain broken until July. I'm not sure what the\n> workflow for this is, but I hope the /17/ URLs would become valid with\n> beta1, later this week.\n\nI didn't quite click that it was Jonathan's links that were being\ncomplained about.\n\nI don't know how the website picks up where to link the doc page for a\ngiven version. I see from e0b82fc8e8 that the PACKAGE_VERSION was\nchanged from 16devel to 16beta1. Does the website have something that\nextracts \"devel\" from the former and \"16\" from the latter? I see the\nrelease announcement for 16beta1 had /16/ links per [1]. So, I guess\nit works. I just don't know how.\n\nDavid\n\n[1] https://www.postgresql.org/about/news/postgresql-16-beta-1-released-2643/\n\n\n", "msg_date": "Mon, 20 May 2024 22:32:36 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Mon, 20 May 2024 at 00:24, David Rowley <[email protected]> wrote:\n\n> On Mon, 20 May 2024 at 09:35, Jonathan S. Katz <[email protected]>\n> wrote:\n> > Thanks for all the feedback to date. 
Please see the next revision.\n> > Again, please provide feedback no later than 2024-05-22 18:00 UTC.\n>\n> Thanks for the updates.\n>\n> > [`COPY`](https://www.postgresql.org/docs/17/sql-copy.html) is used to\n> efficiently bulk load data into PostgreSQL, and with PostgreSQL 17 shows a\n> 2x performance improvement when loading large rows.\n>\n> The 2x thing mentioned by Jelte is for COPY TO rather than COPY FROM.\n> So I think \"exporting\" or \"sending large rows to the client\" rather\n> than \"loading\".\n>\n> There's also a stray \"with\" in that sentence.\n>\n\nAre you referring to the \"with\" in \"and with PostgreSQL 17\"? If so, it\nlooks valid to me.\n-- \nThom\n\nOn Mon, 20 May 2024 at 00:24, David Rowley <[email protected]> wrote:On Mon, 20 May 2024 at 09:35, Jonathan S. Katz <[email protected]> wrote:\n> Thanks for all the feedback to date. Please see the next revision.\n> Again, please provide feedback no later than 2024-05-22 18:00 UTC.\n\nThanks for the updates.\n\n> [`COPY`](https://www.postgresql.org/docs/17/sql-copy.html) is used to efficiently bulk load data into PostgreSQL, and with PostgreSQL 17 shows a 2x performance improvement when loading large rows.\n\nThe 2x thing mentioned by Jelte is for COPY TO rather than COPY FROM.\nSo I think \"exporting\" or \"sending large rows to the client\"  rather\nthan \"loading\".\n\nThere's also a stray \"with\" in that sentence.Are you referring to the \"with\" in \"and with PostgreSQL 17\"? If so, it looks valid to me. -- Thom", "msg_date": "Mon, 20 May 2024 12:15:32 +0100", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Mon, 20 May 2024 at 23:16, Thom Brown <[email protected]> wrote:\n>\n> On Mon, 20 May 2024 at 00:24, David Rowley <[email protected]> wrote:\n>>\n>> On Mon, 20 May 2024 at 09:35, Jonathan S. Katz <[email protected]> wrote:\n>> > Thanks for all the feedback to date. Please see the next revision.\n>> > Again, please provide feedback no later than 2024-05-22 18:00 UTC.\n>>\n>> Thanks for the updates.\n>>\n>> > [`COPY`](https://www.postgresql.org/docs/17/sql-copy.html) is used to efficiently bulk load data into PostgreSQL, and with PostgreSQL 17 shows a 2x performance improvement when loading large rows.\n>>\n>> The 2x thing mentioned by Jelte is for COPY TO rather than COPY FROM.\n>> So I think \"exporting\" or \"sending large rows to the client\" rather\n>> than \"loading\".\n>>\n>> There's also a stray \"with\" in that sentence.\n>\n>\n> Are you referring to the \"with\" in \"and with PostgreSQL 17\"? If so, it looks valid to me.\n\nYes that one. It sounds wrong to me, but that's from a British\nEnglish point of view. I'm continuing to learn the subtle differences\nwith American English. Maybe this is one.\n\nIt would make sense to me if it was \"and with PostgreSQL 17, a 2x\n...\". From my point of view either \"with\" shouldn't be there or\n\"shows\" could be replaced with a comma. However, if you're ok with it,\nI'll say no more. I know this is well into your territory.\n\nDavid\n\n\n", "msg_date": "Mon, 20 May 2024 23:31:57 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/19/24 7:24 PM, David Rowley wrote:\r\n> On Mon, 20 May 2024 at 09:35, Jonathan S. Katz <[email protected]> wrote:\r\n>> Thanks for all the feedback to date. 
Please see the next revision.\r\n>> Again, please provide feedback no later than 2024-05-22 18:00 UTC.\r\n> \r\n> Thanks for the updates.\r\n> \r\n>> [`COPY`](https://www.postgresql.org/docs/17/sql-copy.html) is used to efficiently bulk load data into PostgreSQL, and with PostgreSQL 17 shows a 2x performance improvement when loading large rows.\r\n> \r\n> The 2x thing mentioned by Jelte is for COPY TO rather than COPY FROM.\r\n> So I think \"exporting\" or \"sending large rows to the client\" rather\r\n> than \"loading\".\r\n\r\nThanks for the clarification - I've edited it as such. That also brings \r\nup a good point to highlight that COPY is not just for loading (since my \r\nbias is to do loads these days :) Now it reads:\r\n\r\n[`COPY`](https://www.postgresql.org/docs/17/sql-copy.html) is used to \r\nefficiently bulk load and export data from PostgreSQL, and now with \r\nPostgreSQL 17 you may see up to a 2x performance improvement when \r\nexporting large rows.\r\n\r\n> There's also a stray \"with\" in that sentence.\r\n\r\nThanks, fixed.\r\n\r\nJonathan", "msg_date": "Mon, 20 May 2024 07:44:11 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/20/24 2:58 AM, John Naylor wrote:\r\n> Hi Jon,\r\n> \r\n> Regarding vacuum \"has shown up to a 6x improvement in overall time to\r\n> complete its work\" -- I believe I've seen reported numbers close to\r\n> that only 1) when measuring the index phase in isolation or maybe 2)\r\n> the entire vacuum of unlogged tables with one, perfectly-correlated\r\n> index (testing has less variance with WAL out of the picture). I\r\n> believe tables with many indexes would show a lot of improvement, but\r\n> I'm not aware of testing that case specifically. Can you clarify where\r\n> 6x came from?\r\n\r\nSawada-san showed me the original context, but I can't rapidly find it \r\nin the thread. Sawada-san, can you please share the numbers behind this?\r\n\r\nWe can adjust the claim - but I'd like to ensure we highlight how the \r\nchanges to vacuum will visibly impact users.\r\n\r\nJonathan", "msg_date": "Mon, 20 May 2024 07:47:54 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Mon, May 20, 2024 at 5:35 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> On 5/15/24 9:45 PM, Jonathan S. Katz wrote:\n> > Hi,\n> >\n> > Attached is a copy of the PostgreSQL 17 Beta 1 release announcement\n> > draft. This contains a user-facing summary of some of the features that\n> > will be available in the Beta, as well as a call to test. I've made an\n> > effort to group them logically around the different workflows they affect.\n> >\n> > A few notes:\n> >\n> > * The section with the features is not 80-char delimited. I will do that\n> > before the final copy\n> >\n> > * There is an explicit callout that we've added in the SQL/JSON features\n> > that were previously reverted in PG15. 
I want to ensure we're\n> > transparent about that, but also use it as a hook to get people testing.\n> >\n> > When reviewing:\n> >\n> > * Please check for correctness of feature descriptions, keeping in mind\n> > this is targeted for a general audience\n> >\n> > * Please indicate if you believe there's a notable omission, or if we\n> > should omit any of these callouts\n> >\n> > * Please indicate if a description is confusing - I'm happy to rewrite\n> > to ensure it's clearer.\n> >\n> > Please provide feedback no later than Wed 2024-05-22 18:00 UTC. As the\n> > beta release takes some extra effort, I want to ensure all changes are\n> > in with time to spare before release day.\n>\n> Thanks for all the feedback to date. Please see the next revision.\n> Again, please provide feedback no later than 2024-05-22 18:00 UTC.\n>\n> Thanks,\n>\n> Jonathan\n>\n\nrelease note (https://momjian.us/pgsql_docs/release-17.html)\nis\n\"Add jsonpath methods to convert JSON values to other JSON data types\n(Jeevan Chalke)\"\n\n\n>> Additionally, PostgreSQL 17 adds more functionality to its `jsonpath` implementation, including the ability to convert JSON values to different data types.\nso, I am not sure this is 100% correct.\n\nMaybe we can rephrase it like:\n\n>> Additionally, PostgreSQL 17 adds more functionality to its `jsonpath` implementation, including the ability to convert JSON values to other JSON data types.\n\n\n", "msg_date": "Mon, 20 May 2024 20:31:38 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Mon, May 20, 2024 at 8:47 PM Jonathan S. Katz <[email protected]> wrote:\n>\n> On 5/20/24 2:58 AM, John Naylor wrote:\n> > Hi Jon,\n> >\n> > Regarding vacuum \"has shown up to a 6x improvement in overall time to\n> > complete its work\" -- I believe I've seen reported numbers close to\n> > that only 1) when measuring the index phase in isolation or maybe 2)\n> > the entire vacuum of unlogged tables with one, perfectly-correlated\n> > index (testing has less variance with WAL out of the picture). I\n> > believe tables with many indexes would show a lot of improvement, but\n> > I'm not aware of testing that case specifically. Can you clarify where\n> > 6x came from?\n>\n> Sawada-san showed me the original context, but I can't rapidly find it\n> in the thread. Sawada-san, can you please share the numbers behind this?\n>\n\nI referenced the numbers that I measured during the development[1]\n(test scripts are here[2]). IIRC I used unlogged tables and indexes,\nand these numbers were the entire vacuum execution time including heap\nscanning, index vacuuming and heap vacuuming.\n\nFYI today I've run the same script with PG17 and measured the\nexecution times. 
Here are results:\n\nmonotonically ordered int column index:\nsystem usage: CPU: user: 1.72 s, system: 0.47 s, elapsed: 2.20 s\n\nuuid column index:\nsystem usage: CPU: user: 3.62 s, system: 0.89 s, elapsed: 4.52 s\n\nint & uuid indexes in parallel:\nsystem usage: CPU: user: 2.24 s, system: 0.44 s, elapsed: 5.01 s\n\nThese numbers are better than ones I measured with v62 patch set as we\nnow introduced some optimization into tidstore (8a1b31e6 and f35bd9b).\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoBci3Hujzijubomo1tdwH3XtQ9F89cTNQ4bsQijOmqnEw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CANWCAZYqWibTRCWs5mV57mLj1A0nbKX-eV5G%2Bd-KmBOGHTVY-w%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 20 May 2024 22:40:32 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/20/24 8:31 AM, jian he wrote:\r\n\r\n> release note (https://momjian.us/pgsql_docs/release-17.html)\r\n> is\r\n> \"Add jsonpath methods to convert JSON values to other JSON data types\r\n> (Jeevan Chalke)\"\r\n> \r\n> \r\n>>> Additionally, PostgreSQL 17 adds more functionality to its `jsonpath` implementation, including the ability to convert JSON values to different data types.\r\n> so, I am not sure this is 100% correct.\r\n> \r\n> Maybe we can rephrase it like:\r\n> \r\n>>> Additionally, PostgreSQL 17 adds more functionality to its `jsonpath` implementation, including the ability to convert JSON values to other JSON data types.\r\n\r\nThe release note goes on to state:\r\n\r\n==\r\nThe jsonpath methods are .bigint(), .boolean(), .date(), \r\n.decimal([precision [, scale]]), .integer(), .number(), .string(), \r\n.time(), .time_tz(), .timestamp(), and .timestamp_tz().\r\n==\r\n\r\nAnd reviewing the docs[1], these are converted to a PostgreSQL native \r\ntypes, and not JSON types (additionally a bunch of those are not JSON \r\ntypes).\r\n\r\nJeevan: can you please confirm that this work converts into the \r\nPostgreSQL native types?\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/devel/functions-json.html", "msg_date": "Mon, 20 May 2024 12:44:23 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/20/24 6:32 AM, David Rowley wrote:\r\n> On Mon, 20 May 2024 at 22:11, Alvaro Herrera <[email protected]> wrote:\r\n>>\r\n>> On 2024-May-16, David Rowley wrote:\r\n>>\r\n>>> On Thu, 16 May 2024 at 17:37, zaidagilist <[email protected]> wrote:\r\n>>>> I am trying to open the 17 docs but it looks removed. Getting\r\n>>>> following message \"Page not found\"\r\n>>>>\r\n>>>> https://www.postgresql.org/docs/17/\r\n>>>\r\n>>> It's called \"devel\" for \"development\" until we branch sometime before July:\r\n>>>\r\n>>> https://www.postgresql.org/docs/devel/\r\n>>\r\n>> Hmm, but that would mean that the Beta1 announce would ship full of\r\n>> links that will remain broken until July. I'm not sure what the\r\n>> workflow for this is, but I hope the /17/ URLs would become valid with\r\n>> beta1, later this week.\r\n> \r\n> I didn't quite click that it was Jonathan's links that were being\r\n> complained about.\r\n> \r\n> I don't know how the website picks up where to link the doc page for a\r\n> given version. 
I see from e0b82fc8e8 that the PACKAGE_VERSION was\r\n> changed from 16devel to 16beta1. Does the website have something that\r\n> extracts \"devel\" from the former and \"16\" from the latter? I see the\r\n> release announcement for 16beta1 had /16/ links per [1]. So, I guess\r\n> it works. I just don't know how.\r\n\r\nThe tl;dr is that the /17/ links will be available on release day. I've \r\nvalidated the current links using the /devel/ heading.\r\n\r\nJonathan", "msg_date": "Mon, 20 May 2024 12:45:39 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On Mon, May 20, 2024 at 8:41 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, May 20, 2024 at 8:47 PM Jonathan S. Katz <[email protected]> wrote:\n> >\n> > On 5/20/24 2:58 AM, John Naylor wrote:\n> > > Hi Jon,\n> > >\n> > > Regarding vacuum \"has shown up to a 6x improvement in overall time to\n> > > complete its work\" -- I believe I've seen reported numbers close to\n> > > that only 1) when measuring the index phase in isolation or maybe 2)\n> > > the entire vacuum of unlogged tables with one, perfectly-correlated\n> > > index (testing has less variance with WAL out of the picture). I\n> > > believe tables with many indexes would show a lot of improvement, but\n> > > I'm not aware of testing that case specifically. Can you clarify where\n> > > 6x came from?\n> >\n> > Sawada-san showed me the original context, but I can't rapidly find it\n> > in the thread. Sawada-san, can you please share the numbers behind this?\n> >\n>\n> I referenced the numbers that I measured during the development[1]\n> (test scripts are here[2]). IIRC I used unlogged tables and indexes,\n> and these numbers were the entire vacuum execution time including heap\n> scanning, index vacuuming and heap vacuuming.\n\nThanks for confirming.\n\nThe wording \"has a new internal data structure that reduces memory\nusage and has shown up to a 6x improvement in overall time to complete\nits work\" is specific for runtime, and the memory use is less\nspecific. Unlogged tables are not the norm, so I'd be cautious of\nreporting numbers specifically designed (for testing) to isolate the\nthing that changed.\n\nI'm wondering if it might be both more impressive-sounding and more\nrealistic for the average user experience to reverse that: specific on\nmemory, and less specific on speed. The best-case memory reduction\noccurs for table update patterns that are highly localized, such as\nthe most recently inserted records, and I'd say those are a lot more\ncommon than the use of unlogged tables.\n\nMaybe something like \"has a new internal data structure that reduces\noverall time to complete its work and can use up to 20x less memory.\"\n\nNow, it is true that when dead tuples are sparse and evenly spaced\n(e.g. 1 every 100 pages), vacuum can now actually use more memory than\nv16. However, the nature of that scenario also means that the number\nof TIDs just can't get very big to begin with. 
In contrast, while the\nruntime improvement for normal (logged) tables is likely not\nearth-shattering, I believe it will always be at least somewhat\nfaster, and never slower.\n\n\n", "msg_date": "Tue, 21 May 2024 17:40:46 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/20/24 5:34 AM, Bertrand Drouvot wrote:\r\n> Hi,\r\n> \r\n> On Sun, May 19, 2024 at 05:10:10PM -0400, Jonathan S. Katz wrote:\r\n>> On 5/16/24 1:15 AM, Bertrand Drouvot wrote:\r\n>>> Hi,\r\n>>>\r\n>>> On Wed, May 15, 2024 at 09:45:35PM -0400, Jonathan S. Katz wrote:\r\n>>>> Hi,\r\n>>>>\r\n>>>> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement draft.\r\n>>>\r\n>>> Thanks for working on it!\r\n>>>\r\n>>> I've one comment:\r\n>>>\r\n>>>> PostgreSQL 17 also introduces a new view, [`pg_wait_events`](https://www.postgresql.org/docs/17/view-pg-wait-events.html), which provides descriptions about wait events and can be combined with `pg_stat_activity` to give more insight into an operation.\r\n>>>\r\n>>> Instead of \"to give more insight into an operation\", what about \"to give more\r\n>>> insight about what a session is waiting for (should it be active)\"?\r\n>>\r\n>> I put:\r\n>>\r\n>> \"to give more in insight into why a session is blocked.\"\r\n> \r\n> Thanks!\r\n> \r\n>>\r\n>> Does that work?\r\n>>\r\n> \r\n> I think using \"waiting\" is better (as the view is \"pg_wait_events\" and the\r\n> join with pg_stat_activity would be on the \"wait_event_type\" and \"wait_event\"\r\n> columns).\r\n> \r\n> The reason I mentioned \"should it be active\" is because wait_event and wait_event_type\r\n> could be non empty in pg_stat_activity while the session is not in an active state\r\n> anymore (then not waiting).\r\n> \r\n> A right query would be like the one in [1]:\r\n> \r\n> \"\r\n> SELECT a.pid, a.wait_event, w.description\r\n> FROM pg_stat_activity a JOIN\r\n> pg_wait_events w ON (a.wait_event_type = w.type AND\r\n> a.wait_event = w.name)\r\n> WHERE a.wait_event is NOT NULL and a.state = 'active';\r\n> \"\r\n> \r\n> means filtering on the \"active\" state too, and that's what the description\r\n> proposal I made was trying to highlight.\r\n\r\nThanks. As such I made it:\r\n\r\n\"which provides descriptions about wait events and can be combined with \r\n`pg_stat_activity` to give more insight into why an active session is \r\nwaiting.\"\r\n\r\nJonathan", "msg_date": "Wed, 22 May 2024 19:01:54 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/19/24 6:15 PM, Erik Rijkers wrote:\r\n> Op 5/19/24 om 23:34 schreef Jonathan S. Katz:\r\n>> On 5/15/24 9:45 PM, Jonathan S. Katz wrote:\r\n>>> Hi,\r\n>>>\r\n>>> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement \r\n> \r\n> 'This release introduces adds an interface'  should be:\r\n> 'This release adds an interface'\r\n>    (or 'introduces'; just not both...)\r\n\r\nThanks; adjusted in the next copy.\r\n\r\nJonathan", "msg_date": "Wed, 22 May 2024 19:02:36 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/20/24 6:08 AM, Alvaro Herrera wrote:\r\n> On 2024-May-19, Jonathan S. 
Katz wrote:\r\n> \r\n>> ### Query and Operational Performance Improvements\r\n> \r\n> In this section I'd add mention the new GUCs to control SLRU memory\r\n> size, which is going to be a huge performance boon for cases where the\r\n> current fixed-size buffers cause bottlenecks. Perhaps something like\r\n> \r\n> \"Increase scalability of transaction, subtransaction and multixact\r\n> shared memory buffer handling, and make their buffer sizes configurable\".\r\n> \r\n> I don't know if we have any published numbers of the performance\r\n> improvement achieved, but with this patch (or ancestor ones) some\r\n> systems go from completely unoperational to working perfectly fine.\r\n> Maybe the best link is here\r\n> https://www.postgresql.org/docs/devel/runtime-config-resource.html#GUC-MULTIXACT-MEMBER-BUFFERS\r\n> though exactly which GUC affects any particular user is workload-\r\n> dependant, so I'm not sure how best to do it.\r\n\r\nI originally had penciled in this change, but didn't have a good way of \r\ndescribing it. The above solves that problem. I went with:\r\n\r\n\"PostgreSQL 17 also includes configuration parameters that can control \r\nscalability of [transaction, subtransaction and multixact \r\nbuffers](https://www.postgresql.org/docs/devel/runtime-config-resource.html#GUC-MULTIXACT-MEMBER-BUFFERS).\"\r\n\r\n>> ### Developer Experience\r\n> \r\n> I think this section should also include the libpq query cancellation\r\n> improvements Jelte wrote. Maybe something like \"On the client side,\r\n> PostgreSQL 17 provides better support for asynchronous and more secure\r\n> query cancellation routines in libpq.\" --> link to\r\n> https://www.postgresql.org/docs/17/libpq-cancel.html\r\n\r\nI went with:\r\n\r\nPostgreSQL 17 also provides better support for [asynchronous and more \r\nsecure query cancellation \r\nroutines](https://www.postgresql.org/docs/17/libpq-cancel.html), which \r\ndrivers can adopt using the libpq API.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 22 May 2024 19:10:33 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/21/24 6:40 AM, John Naylor wrote:\r\n> On Mon, May 20, 2024 at 8:41 PM Masahiko Sawada <[email protected]> wrote:\r\n>>\r\n>> On Mon, May 20, 2024 at 8:47 PM Jonathan S. Katz <[email protected]> wrote:\r\n>>>\r\n>>> On 5/20/24 2:58 AM, John Naylor wrote:\r\n>>>> Hi Jon,\r\n>>>>\r\n>>>> Regarding vacuum \"has shown up to a 6x improvement in overall time to\r\n>>>> complete its work\" -- I believe I've seen reported numbers close to\r\n>>>> that only 1) when measuring the index phase in isolation or maybe 2)\r\n>>>> the entire vacuum of unlogged tables with one, perfectly-correlated\r\n>>>> index (testing has less variance with WAL out of the picture). I\r\n>>>> believe tables with many indexes would show a lot of improvement, but\r\n>>>> I'm not aware of testing that case specifically. Can you clarify where\r\n>>>> 6x came from?\r\n>>>\r\n>>> Sawada-san showed me the original context, but I can't rapidly find it\r\n>>> in the thread. Sawada-san, can you please share the numbers behind this?\r\n>>>\r\n>>\r\n>> I referenced the numbers that I measured during the development[1]\r\n>> (test scripts are here[2]). 
IIRC I used unlogged tables and indexes,\r\n>> and these numbers were the entire vacuum execution time including heap\r\n>> scanning, index vacuuming and heap vacuuming.\r\n> \r\n> Thanks for confirming.\r\n> \r\n> The wording \"has a new internal data structure that reduces memory\r\n> usage and has shown up to a 6x improvement in overall time to complete\r\n> its work\" is specific for runtime, and the memory use is less\r\n> specific. Unlogged tables are not the norm, so I'd be cautious of\r\n> reporting numbers specifically designed (for testing) to isolate the\r\n> thing that changed.\r\n> \r\n> I'm wondering if it might be both more impressive-sounding and more\r\n> realistic for the average user experience to reverse that: specific on\r\n> memory, and less specific on speed. The best-case memory reduction\r\n> occurs for table update patterns that are highly localized, such as\r\n> the most recently inserted records, and I'd say those are a lot more\r\n> common than the use of unlogged tables.\r\n> \r\n> Maybe something like \"has a new internal data structure that reduces\r\n> overall time to complete its work and can use up to 20x less memory.\"\r\n> \r\n> Now, it is true that when dead tuples are sparse and evenly spaced\r\n> (e.g. 1 every 100 pages), vacuum can now actually use more memory than\r\n> v16. However, the nature of that scenario also means that the number\r\n> of TIDs just can't get very big to begin with. In contrast, while the\r\n> runtime improvement for normal (logged) tables is likely not\r\n> earth-shattering, I believe it will always be at least somewhat\r\n> faster, and never slower.\r\n\r\nThanks for the feedback. I flipped it around, per your suggestion:\r\n\r\n\"has a new internal data structure that has shown up to a 20x memory \r\nreduction for vacuum, along with improvements in overall time to \r\ncomplete its work.\"\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 22 May 2024 19:16:33 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/19/24 5:34 PM, Jonathan S. Katz wrote:\r\n> On 5/15/24 9:45 PM, Jonathan S. Katz wrote:\r\n>> Hi,\r\n>>\r\n>> Attached is a copy of the PostgreSQL 17 Beta 1 release announcement \r\n>> draft. This contains a user-facing summary of some of the features \r\n>> that will be available in the Beta, as well as a call to test. I've \r\n>> made an effort to group them logically around the different workflows \r\n>> they affect.\r\n>>\r\n>> A few notes:\r\n>>\r\n>> * The section with the features is not 80-char delimited. I will do \r\n>> that before the final copy\r\n>>\r\n>> * There is an explicit callout that we've added in the SQL/JSON \r\n>> features that were previously reverted in PG15. I want to ensure we're \r\n>> transparent about that, but also use it as a hook to get people testing.\r\n>>\r\n>> When reviewing:\r\n>>\r\n>> * Please check for correctness of feature descriptions, keeping in \r\n>> mind this is targeted for a general audience\r\n>>\r\n>> * Please indicate if you believe there's a notable omission, or if we \r\n>> should omit any of these callouts\r\n>>\r\n>> * Please indicate if a description is confusing - I'm happy to rewrite \r\n>> to ensure it's clearer.\r\n>>\r\n>> Please provide feedback no later than Wed 2024-05-22 18:00 UTC. 
As the \r\n>> beta release takes some extra effort, I want to ensure all changes are \r\n>> in with time to spare before release day.\r\n> \r\n> Thanks for all the feedback to date. Please see the next revision. \r\n> Again, please provide feedback no later than 2024-05-22 18:00 UTC.\r\n\r\nThanks again everyone for all your feedback. Attached is the final(-ish, \r\nas I'll do one more readthrough before release) draft of the release \r\nannouncement.\r\n\r\nIf you catch something and are able to post it prior to 2024-05-23 12:00 \r\nUTC, I may be able to incorporate into the announcement.\r\n\r\nThanks!\r\n\r\nJonathan", "msg_date": "Wed, 22 May 2024 19:17:39 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Hi,\n\nOn Wed, May 22, 2024 at 07:01:54PM -0400, Jonathan S. Katz wrote:\n> Thanks. As such I made it:\n> \n> \"which provides descriptions about wait events and can be combined with\n> `pg_stat_activity` to give more insight into why an active session is\n> waiting.\"\n> \n\nThanks! Works for me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 23 May 2024 04:21:08 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "Looks good. Some minor changes:\n\nOn 2024-May-22, Jonathan S. Katz wrote:\n\n> ### Query and Operational Performance Improvements\n> \n> PostgreSQL 17 builds on recent releases and continues to improve performance across the entire system. [Vacuum](https://www.postgresql.org/docs/17/routine-vacuuming.html), the PostgreSQL process responsible for reclaiming storage, has a new internal data structure that has shown up to a 20x memory reduction for vacuum,\n\nThis reads funny:\n\"Vacuum ... has shown a memory reduction for vacuum\"\nMaybe just removing the \"for vacuum\" words at the end of the phrase is a\nsufficient fix.\n\n> PostgreSQL 17 can now use both planner statistics and the sort order of [common table expressions](https://www.postgresql.org/docs/17/queries-with.html) (aka [`WITH` queries](https://www.postgresql.org/docs/17/queries-with.html)) to further\n\nIs usage of \"aka\" typical? I would have expected \"a.k.a.\" but maybe I'm\njust outdated.\n\n\n> Finally, PostgreSQL 17 adds more explicit SIMD instructions, including AVX-512 support for the [`bit_count](https://www.postgresql.org/docs/17/functions-bitstring.html) function.\n\nNote the lack of closing backtick in [`bit_count`].\n\n> ### Developer Experience\n> \n> PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for the `JSON_TABLE` features that can convert JSON to a standard PostgreSQL table, and SQL/JSON constructor (`JSON`, `JSON_SCALAR`, `JSON_SERIALIZE`) and query functions (`JSON_EXISTS`, `JSON_QUERY`, `JSON_VALUE`). Notably, these features were originally planned for the PostgreSQL 15 release but were reverted during the beta period due to design considerations, which is one reason we ask for you to help us test features during beta! 
Additionally, PostgreSQL 17 adds more functionality to its `jsonpath` implementation, including the ability to convert JSON values to different data types.\n\nI'm not sure it's accurate to say that converting JSON values to\ndifferent datatypes is part of the jsonpath implementation; as I\nunderstand, jsonpath is the representation used to search for elements\nwithin JSON values. If you replace \"including\" with \"and\", the result\nseems reasonable.\n\n> PostgreSQL 17 adds a new connection parameter, `sslnegotation`, which allows PostgreSQL to perform direct TLS handshakes when using [ALPN](https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation), eliminating a network roundtrip. PostgreSQL is registered as `postgresql` in the ALPN directory.\n\nTypo here \"sslnegotation\" missing an i, sslnegotiation.\n\n\n> PostgreSQL 17 normalizes the parameters for `CALL` in [`pg_stat_statements`](https://www.postgresql.org/docs/17/pgstatstatements.html), reducing the number of entries for frequently called stored procedures. Additionally, [`VACUUM` progress reporting](https://www.postgresql.org/docs/devel/progress-reporting.html#VACUUM-PROGRESS-REPORTING) now shows the progress of vacuuming indexes. PostgreSQL 17 also introduces a new view, [`pg_wait_events`](https://www.postgresql.org/docs/17/view-pg-wait-events.html), which provides descriptions about wait events and can be combined with `pg_stat_activity` to give more insight into why an active session is waiting. Additionally, some information in the [`pg_stat_bgwriter`](https://www.postgresql.org/docs/17/monitoring-stats.html#MONITORING-PG-STAT-BGWRITER-VIEW) view is now split out into the new [`pg_stat_checkpointer`](https://www.postgresql.org/docs/17/monitoring-stats.html#MONITORING-PG-STAT-CHECKPOINTER-VIEW) view.\n\nNote use of one link to \"/devel/\" here.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n\n\n", "msg_date": "Thu, 23 May 2024 14:00:13 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" }, { "msg_contents": "On 5/23/24 8:00 AM, Alvaro Herrera wrote:\r\n> Looks good. Some minor changes:\r\n\r\nThanks for this - right at the deadline! :D\r\n\r\n>> ### Query and Operational Performance Improvements\r\n>>\r\n>> PostgreSQL 17 builds on recent releases and continues to improve performance across the entire system. [Vacuum](https://www.postgresql.org/docs/17/routine-vacuuming.html), the PostgreSQL process responsible for reclaiming storage, has a new internal data structure that has shown up to a 20x memory reduction for vacuum,\r\n> \r\n> This reads funny:\r\n> \"Vacuum ... has shown a memory reduction for vacuum\"\r\n> Maybe just removing the \"for vacuum\" words at the end of the phrase is a\r\n> sufficient fix.\r\n\r\nFixed.\r\n\r\n>> PostgreSQL 17 can now use both planner statistics and the sort order of [common table expressions](https://www.postgresql.org/docs/17/queries-with.html) (aka [`WITH` queries](https://www.postgresql.org/docs/17/queries-with.html)) to further\r\n> \r\n> Is usage of \"aka\" typical? 
I would have expected \"a.k.a.\" but maybe I'm\r\n> just outdated.\r\n\r\nRemoved, just b/c I can't quickly verify this.\r\n\r\n>> Finally, PostgreSQL 17 adds more explicit SIMD instructions, including AVX-512 support for the [`bit_count](https://www.postgresql.org/docs/17/functions-bitstring.html) function.\r\n> \r\n> Note the lack of closing backtick in [`bit_count`].\r\n\r\nFixed.\r\n\r\n>> ### Developer Experience\r\n>>\r\n>> PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for the `JSON_TABLE` features that can convert JSON to a standard PostgreSQL table, and SQL/JSON constructor (`JSON`, `JSON_SCALAR`, `JSON_SERIALIZE`) and query functions (`JSON_EXISTS`, `JSON_QUERY`, `JSON_VALUE`). Notably, these features were originally planned for the PostgreSQL 15 release but were reverted during the beta period due to design considerations, which is one reason we ask for you to help us test features during beta! Additionally, PostgreSQL 17 adds more functionality to its `jsonpath` implementation, including the ability to convert JSON values to different data types.\r\n> \r\n> I'm not sure it's accurate to say that converting JSON values to\r\n> different datatypes is part of the jsonpath implementation; as I\r\n> understand, jsonpath is the representation used to search for elements\r\n> within JSON values. If you replace \"including\" with \"and\", the result\r\n> seems reasonable.\r\n\r\nFixed.\r\n\r\n>> PostgreSQL 17 adds a new connection parameter, `sslnegotation`, which allows PostgreSQL to perform direct TLS handshakes when using [ALPN](https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation), eliminating a network roundtrip. PostgreSQL is registered as `postgresql` in the ALPN directory.\r\n> \r\n> Typo here \"sslnegotation\" missing an i, sslnegotiation.\r\n\r\nFixed.\r\n\r\n>> PostgreSQL 17 normalizes the parameters for `CALL` in [`pg_stat_statements`](https://www.postgresql.org/docs/17/pgstatstatements.html), reducing the number of entries for frequently called stored procedures. Additionally, [`VACUUM` progress reporting](https://www.postgresql.org/docs/devel/progress-reporting.html#VACUUM-PROGRESS-REPORTING) now shows the progress of vacuuming indexes. PostgreSQL 17 also introduces a new view, [`pg_wait_events`](https://www.postgresql.org/docs/17/view-pg-wait-events.html), which provides descriptions about wait events and can be combined with `pg_stat_activity` to give more insight into why an active session is waiting. Additionally, some information in the [`pg_stat_bgwriter`](https://www.postgresql.org/docs/17/monitoring-stats.html#MONITORING-PG-STAT-BGWRITER-VIEW) view is now split out into the new [`pg_stat_checkpointer`](https://www.postgresql.org/docs/17/monitoring-stats.html#MONITORING-PG-STAT-CHECKPOINTER-VIEW) view.\r\n> \r\n> Note use of one link to \"/devel/\" here.\r\n\r\nFixed.\r\n\r\nThanks!\r\n\r\nJonathan", "msg_date": "Thu, 23 May 2024 08:22:49 -0400", "msg_from": "\"Jonathan S. Katz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 17 Beta 1 release announcement draft" } ]
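A few self-contained queries for trying out the SQL/JSON and jsonpath pieces settled in the thread above; the data is made up and the exact output formatting may differ slightly from the comments:

    -- JSON_TABLE: turn a JSON array into rows
    SELECT *
    FROM JSON_TABLE(
        '[{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]'::jsonb,
        '$[*]'
        COLUMNS (id int PATH '$.id', name text PATH '$.name')
    );

    -- SQL/JSON query functions
    SELECT JSON_VALUE('{"a": {"b": 42}}'::jsonb, '$.a.b' RETURNING int);  -- 42
    SELECT JSON_EXISTS('{"a": {"b": 42}}'::jsonb, '$.a.c');               -- false

    -- jsonpath conversion methods; note the results come back as jsonb
    -- values, per the wording discussion above
    SELECT jsonb_path_query('{"a": "42"}'::jsonb, '$.a.bigint()');        -- 42
    SELECT jsonb_path_query('{"a": 42}'::jsonb, '$.a.string()');          -- "42"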
[ { "msg_contents": "Hi hackers!\n\nStHighload conference will be held on June 24-25[0]. I’m planning to do “Pre-Commitfest Party” there.\n\nThe idea is to help promote patches among potential reviewers. And start working with the very beginning of PG18 development cycle.\nGood patch review of a valuable feature is a great addition to a CV, and we will advertise this fact among conference attendees.\n\nIf you are the patch author, can be around on conference dates and willing to present your patch - please contact me or just fill the registration form [1].\n\nPostgres Professional will organize the event, provide us ~1h of a stage time and unlimited backstage discussion in their tent. I’ll serve as a moderator, and maybe present something myself.\nIf your work is not on Commitfest yet, but you are planning to finish a prototype by the end of the June - feel free to register anyway.\nIf you do not have a ticket to StHighload - we have some speaker entrance tickets.\nAt the moment we have 4 potential patch authors ready to present.\n\nPlease contact me with any questions regarding the event. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://highload.ru/spb/2024/\n[1] https://forms.yandex.ru/u/6634e043c417f3cae70775a6/\n\n", "msg_date": "Thu, 16 May 2024 10:59:22 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Pre-Commitfest Party on StHighload conf" }, { "msg_contents": "Great initiative.\n\nOn Thu, May 16, 2024 at 10:59 AM Andrey M. Borodin <[email protected]>\nwrote:\n\n> Hi hackers!\n>\n> StHighload conference will be held on June 24-25[0]. I’m planning to do\n> “Pre-Commitfest Party” there.\n>\n> The idea is to help promote patches among potential reviewers. And start\n> working with the very beginning of PG18 development cycle.\n> Good patch review of a valuable feature is a great addition to a CV, and\n> we will advertise this fact among conference attendees.\n>\n> If you are the patch author, can be around on conference dates and willing\n> to present your patch - please contact me or just fill the registration\n> form [1].\n>\n> Postgres Professional will organize the event, provide us ~1h of a stage\n> time and unlimited backstage discussion in their tent. I’ll serve as a\n> moderator, and maybe present something myself.\n> If your work is not on Commitfest yet, but you are planning to finish a\n> prototype by the end of the June - feel free to register anyway.\n> If you do not have a ticket to StHighload - we have some speaker entrance\n> tickets.\n> At the moment we have 4 potential patch authors ready to present.\n>\n> Please contact me with any questions regarding the event. Thanks!\n>\n>\n> Best regards, Andrey Borodin.\n>\n> [0] https://highload.ru/spb/2024/\n> [1] https://forms.yandex.ru/u/6634e043c417f3cae70775a6/\n>\n>\n\nGreat initiative.On Thu, May 16, 2024 at 10:59 AM Andrey M. Borodin <[email protected]> wrote:Hi hackers!\n\nStHighload conference will be held on June 24-25[0]. I’m planning to do “Pre-Commitfest Party” there.\n\nThe idea is to help promote patches among potential reviewers. 
And start working with the very beginning of PG18 development cycle.\nGood patch review of a valuable feature is a great addition to a CV, and we will advertise this fact among conference attendees.\n\nIf you are the patch author, can be around on conference dates and willing to present your patch - please contact me or just fill the registration form [1].\n\nPostgres Professional will organize the event, provide us ~1h of a stage time and unlimited backstage discussion in their tent. I’ll serve as a moderator, and maybe present something myself.\nIf your work is not on Commitfest yet, but you are planning to finish a prototype by the end of the June - feel free to register anyway.\nIf you do not have a ticket to StHighload - we have some speaker entrance tickets.\nAt the moment we have 4 potential patch authors ready to present.\n\nPlease contact me with any questions regarding the event. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://highload.ru/spb/2024/\n[1] https://forms.yandex.ru/u/6634e043c417f3cae70775a6/", "msg_date": "Thu, 16 May 2024 11:21:18 +0500", "msg_from": "Kashif Zeeshan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pre-Commitfest Party on StHighload conf" }, { "msg_contents": "Hi,\n\n> StHighload conference will be held on June 24-25[0]. I’m planning to do “Pre-Commitfest Party” there.\n>\n> The idea is to help promote patches among potential reviewers. And start working with the very beginning of PG18 development cycle.\n> Good patch review of a valuable feature is a great addition to a CV, and we will advertise this fact among conference attendees.\n>\n> If you are the patch author, can be around on conference dates and willing to present your patch - please contact me or just fill the registration form [1].\n>\n> Postgres Professional will organize the event, provide us ~1h of a stage time and unlimited backstage discussion in their tent. I’ll serve as a moderator, and maybe present something myself.\n> If your work is not on Commitfest yet, but you are planning to finish a prototype by the end of the June - feel free to register anyway.\n> If you do not have a ticket to StHighload - we have some speaker entrance tickets.\n> At the moment we have 4 potential patch authors ready to present.\n>\n> Please contact me with any questions regarding the event. Thanks!\n\nGreat initiative, thanks!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 16 May 2024 16:28:44 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pre-Commitfest Party on StHighload conf" } ]
[ { "msg_contents": "When writing a new SSL test for another patch it struck me that the SSL tests\nare doing configuration management without using the test framework API's. The\nattached patches cleans this up, no testcases are altered as part of this.\n\n0001 makes the test for PG_TEST_EXTRA a top-level if statement not attached to\nany other conditional. There is no change in functionality, it's mainly for\nreadability (PG_TEST_EXTRA is it's own concept, not tied to library presence).\n\n0002 ports over editing configfiles to using append_conf() instead of opening\nand writing to them directly.\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 16 May 2024 09:24:12 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Minor cleanups in the SSL tests" }, { "msg_contents": "On 16.05.24 09:24, Daniel Gustafsson wrote:\n> When writing a new SSL test for another patch it struck me that the SSL tests\n> are doing configuration management without using the test framework API's. The\n> attached patches cleans this up, no testcases are altered as part of this.\n> \n> 0001 makes the test for PG_TEST_EXTRA a top-level if statement not attached to\n> any other conditional. There is no change in functionality, it's mainly for\n> readability (PG_TEST_EXTRA is it's own concept, not tied to library presence).\n\nMakes sense to me.\n\n> 0002 ports over editing configfiles to using append_conf() instead of opening\n> and writing to them directly.\n\nYes, it's probably preferable to use append_conf() here. You might want \nto run your patch through pgperltidy. The result doesn't look bad, but \na bit different than what you had crafted.\n\nappend_conf() opens and closes the file for each call. It might be nice \nif it could accept a list. Or you can just pass the whole block as one \nstring, like it was done for pg_ident.conf before.\n\n\n\n", "msg_date": "Thu, 16 May 2024 11:43:12 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor cleanups in the SSL tests" }, { "msg_contents": "> On 16 May 2024, at 11:43, Peter Eisentraut <[email protected]> wrote:\n\n> You might want to run your patch through pgperltidy. The result doesn't look bad, but a bit different than what you had crafted.\n\nUgh, I thought I had but clearly had forgotten. Fixed now.\n\n> append_conf() opens and closes the file for each call. It might be nice if it could accept a list. Or you can just pass the whole block as one string, like it was done for pg_ident.conf before.\n\nThe attached v2 pass the whole block as a here-doc which seemed like the best\noption to retain readability of the config.\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 16 May 2024 23:27:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Minor cleanups in the SSL tests" }, { "msg_contents": "On 16.05.24 23:27, Daniel Gustafsson wrote:\n>> On 16 May 2024, at 11:43, Peter Eisentraut <[email protected]> wrote:\n> \n>> You might want to run your patch through pgperltidy. The result doesn't look bad, but a bit different than what you had crafted.\n> \n> Ugh, I thought I had but clearly had forgotten. Fixed now.\n> \n>> append_conf() opens and closes the file for each call. It might be nice if it could accept a list. 
Or you can just pass the whole block as one string, like it was done for pg_ident.conf before.\n> \n> The attached v2 pass the whole block as a here-doc which seemed like the best\n> option to retain readability of the config.\n\nWorks for me.\n\n\n\n", "msg_date": "Fri, 17 May 2024 07:57:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor cleanups in the SSL tests" }, { "msg_contents": "> On 17 May 2024, at 07:57, Peter Eisentraut <[email protected]> wrote:\n> \n> On 16.05.24 23:27, Daniel Gustafsson wrote:\n>>> On 16 May 2024, at 11:43, Peter Eisentraut <[email protected]> wrote:\n>>> You might want to run your patch through pgperltidy. The result doesn't look bad, but a bit different than what you had crafted.\n>> Ugh, I thought I had but clearly had forgotten. Fixed now.\n>>> append_conf() opens and closes the file for each call. It might be nice if it could accept a list. Or you can just pass the whole block as one string, like it was done for pg_ident.conf before.\n>> The attached v2 pass the whole block as a here-doc which seemed like the best\n>> option to retain readability of the config.\n> \n> Works for me.\n\nThanks for review. Once the tree opens up for v18 I'll go ahead with this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 17 May 2024 09:58:59 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Minor cleanups in the SSL tests" }, { "msg_contents": "> On 17 May 2024, at 09:58, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 17 May 2024, at 07:57, Peter Eisentraut <[email protected]> wrote:\n>> \n>> On 16.05.24 23:27, Daniel Gustafsson wrote:\n>>>> On 16 May 2024, at 11:43, Peter Eisentraut <[email protected]> wrote:\n>>>> You might want to run your patch through pgperltidy. The result doesn't look bad, but a bit different than what you had crafted.\n>>> Ugh, I thought I had but clearly had forgotten. Fixed now.\n>>>> append_conf() opens and closes the file for each call. It might be nice if it could accept a list. Or you can just pass the whole block as one string, like it was done for pg_ident.conf before.\n>>> The attached v2 pass the whole block as a here-doc which seemed like the best\n>>> option to retain readability of the config.\n>> \n>> Works for me.\n> \n> Thanks for review. Once the tree opens up for v18 I'll go ahead with this.\n\nThis has now been pushed after a little bit of editorializing and another\npgperltidy run.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 3 Sep 2024 20:35:12 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Minor cleanups in the SSL tests" } ]
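The SSL-test cleanup above ends up passing the whole configuration block to append_conf() as a single here-doc. A minimal sketch of that pattern, assuming the standard PostgreSQL::Test::Cluster API and using invented node and certificate file names rather than the ones from the actual patch, could look like this:

# Hypothetical TAP-test fragment; file names and settings are illustrative,
# not taken from the committed patch.
use PostgreSQL::Test::Cluster;

my $node = PostgreSQL::Test::Cluster->new('primary');
$node->init;

# A single append_conf() call appends the whole block, keeping the
# configuration readable and opening the file only once.
$node->append_conf('postgresql.conf', <<EOC);
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
ssl_ca_file = 'root.crt'
EOC

$node->start;

The here-doc keeps the settings grouped as one block, which is the readability point discussed in the thread.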
[ { "msg_contents": "I wonder if we can avoid making MERGE_ACTION a keyword.\n\nI think we could parse it initially as a function and then transform it \nto a more special node later. In the attached patch, I'm doing this in \nparse analysis. We could try to do it even later and actually execute \nit as a function, if we could get the proper context passed into it somehow.\n\nI'm thinking about this with half an eye on future features. For \nexample, row pattern recognition might introduce similar magic functions \nmatch_number() and classifier() (somewhat the inspiration for the \nmerge_action() syntax), which could use similar treatment.\n\nThoughts?", "msg_date": "Thu, 16 May 2024 16:15:09 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "avoid MERGE_ACTION keyword?" }, { "msg_contents": "On Thu, 16 May 2024 at 15:15, Peter Eisentraut <[email protected]> wrote:\n>\n> I wonder if we can avoid making MERGE_ACTION a keyword.\n>\n\nYeah, there was a lot of back and forth on this point on the original\nthread, and I'm still not sure which approach is best.\n\n> I think we could parse it initially as a function and then transform it\n> to a more special node later. In the attached patch, I'm doing this in\n> parse analysis. We could try to do it even later and actually execute\n> it as a function, if we could get the proper context passed into it somehow.\n>\n\nWhichever way it's done, I do think it's preferable to have the parse\nanalysis check, to ensure that it's being used in the right part of\nthe query, rather than leaving that to plan/execution time.\n\nIf it is turned into a function, the patch also needs to update the\nruleutils code --- it needs to be prepared to output a\nschema-qualified function name, if necessary (something that the\nkeyword approach saves us from).\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 17 May 2024 09:13:22 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: avoid MERGE_ACTION keyword?" } ]
[ { "msg_contents": "Hi,\n\nThe original intent of CommitFests, and of commitfest.postgresql.org\nby extension, was to provide a place where patches could be registered\nto indicate that they needed to be reviewed, thus enabling patch\nauthors and patch reviewers to find each other in a reasonably\nefficient way. I don't think it's working any more. I spent a good\ndeal of time going through the CommitFest this week, and I didn't get\nthrough a very large percentage of it, and what I found is that the\nstatus of the patches registered there is often much messier than can\nbe captured by a simple \"Needs Review\" or \"Waiting on Author,\" and the\nnumber of patches that are actually in need of review is not all that\nlarge. For example, there are:\n\n- patches parked there by a committer who will almost certainly do\nsomething about them after we branch\n- patches parked there by a committer who probably won't do something\nabout them after we branch, but maybe they will, or maybe somebody\nelse will, and anyway this way we at least run CI\n- patches parked there by a committer who may well do something about\nthem before we even branch, because they're not actually subject to\nthe feature freeze\n- patches that we've said we don't want but the author thinks we do\n(sometimes i agree with the author, sometimes not)\n- patches that have long-unresolved difficulties which the author\neither doesn't know how to solve or is in no hurry to solve\n- patches that have already been reviewed by multiple people, often\nincluding several committers, and which have been updated multiple\ntimes, but for one reason or another, not committed\n- patches that actually do need to be reviewed\n\nWhat's a bit depressing is that this last category is a relatively\nsmall percentage of the total. If you'd like to sit down and review a\nbunch of patches, you'll probably spend AT LEAST as much time trying\nto identify which CommitFest entries are worth your time as you will\nactually reviewing. I suspect you could easily spend 2 or 3 times as\nmuch time finding things to review as actually reviewing them,\nhonestly. And the chances that you're going to find the things to\nreview that most need your attention are pretty much nil. You could\nhappen just by chance to discover a patch that was worth weeks of your\ntime to review, but you could also miss that patch forever amidst all\nthe clutter.\n\nI think there are a couple of things that have led to this state of\naffairs. First, we got tired of making people mad by booting their\nstuff out of the CommitFest, so we mostly just stopped doing it, even\nif it had 0% chance of being committed this CommitFest, and really\neven if it had a 0% chance of being committed ever. Second, we added\nCI, which means that there is positive value to registering the patch\nin the CommitFest even if committing it is not in the cards. And those\nthings together have created a third problem, which is that the list\nis now so long and so messy that even the CommitFest managers probably\ndon't manage to go through the whole thing thoroughly in a month.\n\nSo, our CommitFest application has turned into a patch tracker. IMHO,\npatch trackers intrinsically tend to suck, because they fill up with\ngarbage that nobody cares about, and nobody wants to do the colossal\namount of work that it takes to maintain them. But our patch tracker\nsucks MORE, because it's not even intended to BE a general-purpose\npatch tracker. 
I'm not saying that replacing it with (let me show how\nold I am) bugzilla or whatever the hip modern equivalent of that may\nbe these days is the right thing to do, but it's probably worth\nconsidering. If we decide to roll our own, that might be OK too, but\nwe have to come up with some way of organizing this stuff that's\nbetter than what we have today, some way that actually lets you find\nthe stuff that you care about.\n\nTo give just one example that I think highlights the issues pretty\nwell, consider the \"Refactoring\" section of the current CommitFest.\nThere are 24 patches in there, and 13 of them are by committers. Now,\nmaybe some of those patches are things that the committer really wants\nsomeone else to review, e.g.\nhttps://commitfest.postgresql.org/48/3998/ seems like it might be\nthat. On the other hand, that one could also just be an idea Thomas\nhad that he doesn't really intend to pursue even if the reviews are\nabsolutely glowing, so maybe it's not worth spending time on after\nall. Then there are things that are probably 100% likely to get\ncommitted unless somebody objects, so I shouldn't bother looking at\nthem unless I want to object, e.g.\nhttps://commitfest.postgresql.org/48/4939/ seems like it's probably\nthat. And, also, regardless of authorship, some of these patches have\nalready had a great deal of discussion, and some have had none, and\nyou can sort of tell that from looking at the time the patch was\ncreated vs. the last activity, but it's really not that obvious. So\noverall it's just really unclear where to spend time.\n\nI wonder what ideas people have for improving this situation. I doubt\nthat there's any easy answer that just makes the problem go away --\nkeeping large groups of people organized is a tremendously difficult\ntask under pretty much all circumstances, and the fact that, in this\ncontext, nobody's really the boss, makes it a whole lot harder. But I\nalso feel like what we're doing right now can't possibly be the best\nthat we can do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 14:30:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Op 5/16/24 om 20:30 schreef Robert Haas:\n> Hi,\n> \n> The original intent of CommitFests, and of commitfest.postgresql.org\n> by extension, was to provide a place where patches could be registered\n> to indicate that they needed to be reviewed, thus enabling patch\n> authors and patch reviewers to find each other in a reasonably\n> efficient way. I don't think it's working any more. I spent a good\n\nHi,\n\nPerhaps it would be an idea to let patches 'expire' automatically unless \nthey are 'rescued' (=given another year) by committer or commitfest \nmanager (or perhaps a somewhat wider group - but not too many). \nExpiration after, say, one year should force patch-authors to mount a \ncredible defense for his/her patch to either get his work rescued or \nreinstated/resubmitted.\n\nJust a thought that has crossed my mind already a few times. 
It's not \nvery sympathetic but it might work keep the list smaller.\n\nErik Rijkers\n\n\n\n", "msg_date": "Thu, 16 May 2024 21:11:14 +0200", "msg_from": "Erik Rijkers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 11:30 AM Robert Haas <[email protected]> wrote:\n\n> Hi,\n>\n> The original intent of CommitFests, and of commitfest.postgresql.org\n> by extension, was to provide a place where patches could be registered\n> to indicate that they needed to be reviewed, thus enabling patch\n> authors and patch reviewers to find each other in a reasonably\n> efficient way. I don't think it's working any more. I spent a good\n> deal of time going through the CommitFest this week, and I didn't get\n> through a very large percentage of it, and what I found is that the\n> status of the patches registered there is often much messier than can\n> be captured by a simple \"Needs Review\" or \"Waiting on Author,\" and the\n> number of patches that are actually in need of review is not all that\n> large. For example, there are:\n>\n> - patches parked there by a committer who will almost certainly do\n> something about them after we branch\n> - patches parked there by a committer who probably won't do something\n> about them after we branch, but maybe they will, or maybe somebody\n> else will, and anyway this way we at least run CI\n> - patches parked there by a committer who may well do something about\n> them before we even branch, because they're not actually subject to\n> the feature freeze\n>\n\nIf a committer has a patch in the CF that is going to be committed in the\nfuture unless there is outside action those should be put under a \"Pending\nCommit\" status.\n\n- patches that we've said we don't want but the author thinks we do\n> (sometimes i agree with the author, sometimes not)\n> - patches that have long-unresolved difficulties which the author\n> either doesn't know how to solve or is in no hurry to solve\n> - patches that have already been reviewed by multiple people, often\n> including several committers, and which have been updated multiple\n> times, but for one reason or another, not committed\n>\n\nUse the same software but a different endpoint - Collaboration. Or even\nthe same endpoint just add an always open slot named \"Work In Process\"\n(WIP). If items can be moved from there to another open or future\ncommitfest slot so much the better.\n\n- patches that actually do need to be reviewed\n>\n\nIf we can distinguish between needs to be reviewed by a committer\n(commitfest dated slots - bimonthlies) and reviewed by someone other than\nthe author (work in process slot) that should help everyone. Of course,\nanyone is welcome to review a patch that has been marked ready to commit.\nI suppose \"ready to commit\" already sorta does this without the need for\nWIP but a quick sanity check would be that ready to commit shouldn't (not\nmustn't) be seen in WIP and needs review shouldn't be seen in the\nbimonthlies. A needs review in WIP means that the patch has been seen by a\ncommitter and sent back for more work but that the scope and intent are\nsuch that it will make it into the upcoming major release. Or is something\nlike a bug fix that just goes right into the bimonthly instead of starting\nout as a WIP item.\n\n\n> I think there are a couple of things that have led to this state of\n> affairs. 
First, we got tired of making people mad by booting their\n> stuff out of the CommitFest, so we mostly just stopped doing it, even\n> if it had 0% chance of being committed this CommitFest, and really\n> even if it had a 0% chance of being committed ever.\n\nThose likely never get out of the new WIP slot discussed above. Your patch\ntracker basically. And there should be less angst in moving something in\nthe bimonthly into WIP rather than dropping it outright. There is\ndiscussion to be had regarding WIP/patch tracking should we go this\ndirection but even if it is just movIng clutter from one room to another\nthere seems like a clear benefit and need to tighten up what it means to be\nin the bimonthly slot - to have qualifications laid out for a patch to earn\nits way there, either by effort (authored and reviewed) or need (i.e., bug\nfixes), or, realistically, being authored by a committer and being mostly\ntrivial in nature.\n\n> Second, we added\n> CI, which means that there is positive value to registering the patch\n> in the CommitFest even if committing it is not in the cards.\n\nThe new slot retains this benefit.\n\nAnd those\n> things together have created a third problem, which is that the list\n> is now so long and so messy that even the CommitFest managers probably\n> don't manage to go through the whole thing thoroughly in a month.\n>\n\nThe new slot wouldn't be subject to this.\n\nWe'll still have a problem with too many WIP patches and not enough ability\nor desire to resolve them. But setting a higher bar to get onto the\nbimonthly slot while still providing a place for collaboration is a step\nforward that configuring technology can help with. As for WIP, maybe\nadding thumbs-up and thumbs-down support tracking widgets will help draw\nattention to more desired things.\n\nDavid J.", "msg_date": "Thu, 16 May 2024 12:13:17 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Thanks for raising this. As someone who is only modestly familiar with\nPostgres internals or even C, but would still like to contribute through\nreview, I find the current process of finding a suitable patch both tedious\nand daunting. The Reviewing a Patch article on the wiki [0] says reviews\nlike mine are still welcome, but it's hard to get started. I'd love to see\nthis be more approachable.\n\nThanks,\nMaciek\n\n[0]: https://wiki.postgresql.org/wiki/Reviewing_a_Patch", "msg_date": "Thu, 16 May 2024 12:25:38 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On 16 May 2024, at 20:30, Robert Haas <[email protected]> wrote:\n\n> The original intent of CommitFests, and of commitfest.postgresql.org\n> by extension, was to provide a place where patches could be registered\n> to indicate that they needed to be reviewed, thus enabling patch\n> authors and patch reviewers to find each other in a reasonably\n> efficient way. I don't think it's working any more.\n\nBut which part is broken though, the app, our commitfest process and workflow\nand the its intent, or our assumption that we follow said process and workflow\nwhich may or may not be backed by evidence? IMHO, from being CMF many times,\nthere is a fair bit of the latter, which excacerbates the problem. This is\nharder to fix with more or better software though. 
Mostly like\nRobert I found perfectly reasonable patches that had received generally\npositive reviews and had really complex situations that really needed more\nanalysis.\n\nI also found a lot of patches that were just not getting any reviews at all\n:( and rejecting those didn't feel great....\n\nOn Thu, May 16, 2024, 21:48 Tom Lane <[email protected]> wrote:\n\n> Daniel Gustafsson <[email protected]> writes:\n> >> On 16 May 2024, at 20:30, Robert Haas <[email protected]> wrote:\n> >> The original intent of CommitFests, and of commitfest.postgresql.org\n> >> by extension, was to provide a place where patches could be registered\n> >> to indicate that they needed to be reviewed, thus enabling patch\n> >> authors and patch reviewers to find each other in a reasonably\n> >> efficient way. I don't think it's working any more.\n>\n> > But which part is broken though, the app, our commitfest process and\n> workflow\n> > and the its intent, or our assumption that we follow said process and\n> workflow\n> > which may or may not be backed by evidence? IMHO, from being CMF many\n> times,\n> > there is a fair bit of the latter, which excacerbates the problem. This\n> is\n> > harder to fix with more or better software though.\n>\n> Yeah. I think that Robert put his finger on a big part of the\n> problem, which is that punting a patch to the next CF is a lot\n> easier than rejecting it, particularly for less-senior CFMs\n> who may not feel they have the authority to say no (or at\n> least doubt that the patch author would accept it). It's hard\n> even for senior people to get patch authors to take no for an\n> answer --- I know I've had little luck at it --- so maybe that\n> problem is inherent. But a CF app full of patches that are\n> unlikely ever to go anywhere isn't helpful.\n>\n> It's also true that some of us are abusing the process a bit.\n> I know I frequently stick things into the CF app even if I intend\n> to commit them pretty darn soon, because it's a near-zero-friction\n> way to run CI on them, and I'm too lazy to learn how to do that\n> otherwise. I like David's suggestion of a \"Pending Commit\"\n> status, or maybe I should just put such patches into RfC state\n> immediately? However, short-lived entries like that don't seem\n> like they're a big problem beyond possibly skewing the CF statistics\n> a bit. It's the stuff that keeps hanging around that seems like\n> the core of the issue.\n>\n> >> I spent a good deal of time going through the CommitFest this week\n>\n> > And you deserve a big Thank You for that.\n>\n> + many\n>\n> regards, tom lane\n>\n>\n>\n\nWhen I was CFM for a couple cycles I started with the idea that I was going to try being a hardass and rejecting or RWF all the patches that had gotten negative reviews and been bounced forward.Except when I actually went through them I didn't find many. 
Mostly like\nRobert I found perfectly reasonable patches that had received generally\npositive reviews and had really complex situations that really needed more\nanalysis.\n\nI also found a lot of patches that were just not getting any reviews at all\n:( and rejecting those didn't feel great....\n\nOn Thu, May 16, 2024, 21:48 Tom Lane <[email protected]> wrote:\n\n> Daniel Gustafsson <[email protected]> writes:\n> >> On 16 May 2024, at 20:30, Robert Haas <[email protected]> wrote:\n> >> The original intent of CommitFests, and of commitfest.postgresql.org\n> >> by extension, was to provide a place where patches could be registered\n> >> to indicate that they needed to be reviewed, thus enabling patch\n> >> authors and patch reviewers to find each other in a reasonably\n> >> efficient way. I don't think it's working any more.\n>\n> > But which part is broken though, the app, our commitfest process and\n> workflow\n> > and the its intent, or our assumption that we follow said process and\n> workflow\n> > which may or may not be backed by evidence?  IMHO, from being CMF many\n> times,\n> > there is a fair bit of the latter, which excacerbates the problem.  This\n> is\n> > harder to fix with more or better software though.\n>\n> Yeah.  I think that Robert put his finger on a big part of the\n> problem, which is that punting a patch to the next CF is a lot\n> easier than rejecting it, particularly for less-senior CFMs\n> who may not feel they have the authority to say no (or at\n> least doubt that the patch author would accept it).  It's hard\n> even for senior people to get patch authors to take no for an\n> answer --- I know I've had little luck at it --- so maybe that\n> problem is inherent.  But a CF app full of patches that are\n> unlikely ever to go anywhere isn't helpful.\n>\n> It's also true that some of us are abusing the process a bit.\n> I know I frequently stick things into the CF app even if I intend\n> to commit them pretty darn soon, because it's a near-zero-friction\n> way to run CI on them, and I'm too lazy to learn how to do that\n> otherwise.  I like David's suggestion of a \"Pending Commit\"\n> status, or maybe I should just put such patches into RfC state\n> immediately?  However, short-lived entries like that don't seem\n> like they're a big problem beyond possibly skewing the CF statistics\n> a bit.  It's the stuff that keeps hanging around that seems like\n> the core of the issue.\n>\n> >> I spent a good deal of time going through the CommitFest this week\n>\n> > And you deserve a big Thank You for that.\n>\n> + many\n>\n> regards, tom lane\n>\n>\n>", "msg_date": "Thu, 16 May 2024 22:14:07 +0200", "msg_from": "\"Greg Stark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/16/24 15:47, Tom Lane wrote:\n> Daniel Gustafsson <[email protected]> writes:\n>>> On 16 May 2024, at 20:30, Robert Haas <[email protected]> wrote:\n>>> The original intent of CommitFests, and of commitfest.postgresql.org\n>>> by extension, was to provide a place where patches could be registered\n>>> to indicate that they needed to be reviewed, thus enabling patch\n>>> authors and patch reviewers to find each other in a reasonably\n>>> efficient way. I don't think it's working any more.\n> \n>> But which part is broken though, the app, our commitfest process and workflow\n>> and the its intent, or our assumption that we follow said process and workflow\n>> which may or may not be backed by evidence? 
Anyway, I'm definitely currently guilty of parking.\n\n- Melanie\n\n\n", "msg_date": "Thu, 16 May 2024 16:46:16 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 10:46 PM Melanie Plageman <[email protected]>\nwrote:\n\n> On Thu, May 16, 2024 at 2:30 PM Robert Haas <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > The original intent of CommitFests, and of commitfest.postgresql.org\n> > by extension, was to provide a place where patches could be registered\n> > to indicate that they needed to be reviewed, thus enabling patch\n> > authors and patch reviewers to find each other in a reasonably\n> > efficient way. I don't think it's working any more. I spent a good\n> > deal of time going through the CommitFest this week, and I didn't get\n> > through a very large percentage of it, and what I found is that the\n> > status of the patches registered there is often much messier than can\n> > be captured by a simple \"Needs Review\" or \"Waiting on Author,\" and the\n> > number of patches that are actually in need of review is not all that\n> > large. For example, there are:\n> >\n> > - patches parked there by a committer who will almost certainly do\n> > something about them after we branch\n> > - patches parked there by a committer who probably won't do something\n> > about them after we branch, but maybe they will, or maybe somebody\n> > else will, and anyway this way we at least run CI\n>\n> -- snip --\n>\n> > So, our CommitFest application has turned into a patch tracker. IMHO,\n> > patch trackers intrinsically tend to suck, because they fill up with\n> > garbage that nobody cares about, and nobody wants to do the colossal\n> > amount of work that it takes to maintain them. But our patch tracker\n> > sucks MORE, because it's not even intended to BE a general-purpose\n> > patch tracker.\n>\n> I was reflecting on why a general purpose patch tracker sounded\n> appealing to me, and I realized that, at least at this time of year, I\n> have a few patches that really count as \"waiting on author\" that I\n> know I need to do additional work on before they need more review but\n> which aren't currently my top priority. I should probably simply\n> withdraw and re-register them. My justification was that I'll lose\n> them if I don't keep them in the commitfest app. But, I could just,\n> you know, save them somewhere myself instead of polluting the\n> commitfest app with them. I don't know if others are in this\n> situation. Anyway, I'm definitely currently guilty of parking.\n>\n\nOne thing I think we've talked about before (but not done) is to basically\nhave a CF called \"parking lot\", where you can park patches that aren't\nactive in a commitfest but you also don't want to be dead. 
It would\nprobably also be doable to have the cf bot run patches in that commitfest\nas well as the current one, if that's what people are using it for there.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, May 16, 2024 at 10:46 PM Melanie Plageman <[email protected]> wrote:On Thu, May 16, 2024 at 2:30 PM Robert Haas <[email protected]> wrote:\n>\n> Hi,\n>\n> The original intent of CommitFests, and of commitfest.postgresql.org\n> by extension, was to provide a place where patches could be registered\n> to indicate that they needed to be reviewed, thus enabling patch\n> authors and patch reviewers to find each other in a reasonably\n> efficient way. I don't think it's working any more. I spent a good\n> deal of time going through the CommitFest this week, and I didn't get\n> through a very large percentage of it, and what I found is that the\n> status of the patches registered there is often much messier than can\n> be captured by a simple \"Needs Review\" or \"Waiting on Author,\" and the\n> number of patches that are actually in need of review is not all that\n> large. For example, there are:\n>\n> - patches parked there by a committer who will almost certainly do\n> something about them after we branch\n> - patches parked there by a committer who probably won't do something\n> about them after we branch, but maybe they will, or maybe somebody\n> else will, and anyway this way we at least run CI\n\n-- snip --\n\n> So, our CommitFest application has turned into a patch tracker. IMHO,\n> patch trackers intrinsically tend to suck, because they fill up with\n> garbage that nobody cares about, and nobody wants to do the colossal\n> amount of work that it takes to maintain them. But our patch tracker\n> sucks MORE, because it's not even intended to BE a general-purpose\n> patch tracker.\n\nI was reflecting on why a general purpose patch tracker sounded\nappealing to me, and I realized that, at least at this time of year, I\nhave a few patches that really count as \"waiting on author\" that I\nknow I need to do additional work on before they need more review but\nwhich aren't currently my top priority. I should probably simply\nwithdraw and re-register them. My justification was that I'll lose\nthem if I don't keep them in the commitfest app. But, I could just,\nyou know, save them somewhere myself instead of polluting the\ncommitfest app with them. I don't know if others are in this\nsituation. Anyway, I'm definitely currently guilty of parking.One thing I think we've talked about before (but not done) is to basically have a CF called \"parking lot\", where you can park patches that aren't active in a commitfest  but you also don't want to be dead. It would probably also be doable to have the cf bot run patches in that commitfest as well as the current one, if that's what people are using it for there. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 16 May 2024 22:48:28 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 12:14 PM David G. Johnston\n<[email protected]> wrote:\n> Those likely never get out of the new WIP slot discussed above. Your patch tracker basically. 
And there should be less angst in moving something in the bimonthly into WIP rather than dropping it outright. There is discussion to be had regarding WIP/patch tracking should we go this direction but even if it is just movIng clutter from one room to another there seems like a clear benefit\n\nYeah, IMO we're unlikely to get around the fact that it's a patch\ntracker -- even if patch trackers inherently tend to suck, as Robert\nput it, they tend to have lots of value too. May as well embrace the\nneed for a tracker and make it more helpful.\n\n> We'll still have a problem with too many WIP patches and not enough ability or desire to resolve them. But setting a higher bar to get onto the bimonthly slot while still providing a place for collaboration is a step forward that configuring technology can help with.\n\n+1. I think _any_ way to better communicate \"what the author needs\nright now\" would help a lot.\n\n> As for WIP, maybe adding thumbs-up and thumbs-down support tracking widgets will help draw attention to more desired things.\n\nPersonally I'd like a way to gauge general interest without\nintroducing a voting system. \"Stars\", if you will, rather than\n\"thumbs\". In the context of the CF, it's valuable to me as an author\nthat you care about what I'm trying to do; if you don't like my\nimplementation, that's what reviews on the list are for. And I see no\nway that the meaning of a thumbs-down button wouldn't degrade\nimmediately.\n\nI have noticed that past proposals for incremental changes to the CF\napp (mine and others') are met with a sort of resigned inertia --\nsometimes disagreement, but more often \"meh, sounds all right, maybe\".\nMaybe my suggestions are just meh, and that's fine. But if we can't\ntweak small things as we go -- and be generally okay with trying and\nreverting failed experiments sometimes -- frustrations are likely to\npile up until someone writes another biyearly manifesto thread.\n\n--Jacob\n\n\n", "msg_date": "Thu, 16 May 2024 13:50:22 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Melanie Plageman <[email protected]> writes:\n> I was reflecting on why a general purpose patch tracker sounded\n> appealing to me, and I realized that, at least at this time of year, I\n> have a few patches that really count as \"waiting on author\" that I\n> know I need to do additional work on before they need more review but\n> which aren't currently my top priority. I should probably simply\n> withdraw and re-register them. My justification was that I'll lose\n> them if I don't keep them in the commitfest app. But, I could just,\n> you know, save them somewhere myself instead of polluting the\n> commitfest app with them. I don't know if others are in this\n> situation. Anyway, I'm definitely currently guilty of parking.\n\nIt's also nice that the CF app will run CI for you, so at least\nyou can keep the patch building if you're so inclined.\n\nDavid J. had a suggestion for this too upthread, which was to create a\nseparate slot for WIP patches that isn't one of the dated CF slots.\n\nIt's hard to argue that such patches need to be in \"the CF app\" at\nall, if you're not actively looking for review. But the CI runs\nand the handy per-author patch status list make the app very tempting\ninfrastructure for parked patches. 
Maybe we could have a not-the-CF\napp that offers those amenities?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 16:54:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 1:46 PM Melanie Plageman\n<[email protected]> wrote:\n> I should probably simply\n> withdraw and re-register them. My justification was that I'll lose\n> them if I don't keep them in the commitfest app. But, I could just,\n> you know, save them somewhere myself instead of polluting the\n> commitfest app with them. I don't know if others are in this\n> situation. Anyway, I'm definitely currently guilty of parking.\n\nMe too -- it's really, really useful. I think there's probably a\nbetter way the app could help us mark it as parked, as opposed to\ngetting rid of parking. Similar to how people currently use the\nReviewer field as a personal TODO list... it might be nice to\nofficially separate the ideas a bit.\n\n--Jacob\n\n\n", "msg_date": "Thu, 16 May 2024 13:54:22 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 1:31 PM Joe Conway <[email protected]> wrote:\n> Maybe we should just make it a policy that *nothing* gets moved forward\n> from commitfest-to-commitfest and therefore the author needs to care\n> enough to register for the next one?\n\nI think that's going to severely disadvantage anyone who doesn't do\nthis as their day job. Maybe I'm bristling a bit too much at the\nwording, but not having time to shepherd a patch is not the same as\nnot caring.\n\n--Jacob\n\n\n", "msg_date": "Thu, 16 May 2024 13:57:05 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Hi,\n\nOn 5/16/24 4:31 PM, Joe Conway wrote:\n>> Yeah.  I think that Robert put his finger on a big part of the\n>> problem, which is that punting a patch to the next CF is a lot\n>> easier than rejecting it, particularly for less-senior CFMs\n>> who may not feel they have the authority to say no (or at\n>> least doubt that the patch author would accept it).\n> \n> Maybe we should just make it a policy that *nothing* gets moved forward \n> from commitfest-to-commitfest and therefore the author needs to care \n> enough to register for the next one?\n>\n\nOr at least nothing get moved forward from March.\n\nSpending time on a patch during a major version doesn't mean that you \nhave time to do the same for the next major version.\n\nThat way July could start \"clean\" with patches people are interested in \nand willing to maintain during the next year.\n\nAlso, it is a bit confusing that f.ex.\n\n https://commitfest.postgresql.org/48/\n\nalready shows 40 patches as Committed...\n\nSo, having some sort of \"End of Development\" state in general would be good.\n\nBest regards,\n Jesper\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:00:57 -0400", "msg_from": "Jesper Pedersen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Jacob Champion <[email protected]> writes:\n> ... Similar to how people currently use the\n> Reviewer field as a personal TODO list... 
it might be nice to\n> officially separate the ideas a bit.\n\nOh, that's an independent pet peeve of mine. Usually, if I'm\nlooking over the CF list for a patch to review, I skip over ones\nthat already show an assigned reviewer, because I don't want to\nstep on that person's toes. But it seems very common to put\none's name down for review without any immediate intention of\ndoing work. Or to do a review and wander off, leaving the patch\napparently being tended to but not really. (And I confess I'm\noccasionally guilty of both things myself.)\n\nI think it'd be great if we could separate \"I'm actively reviewing\nthis\" from \"I'm interested in this\". As a bonus, adding yourself\nto the \"interested\" list would be a fine proxy for the thumbs-up\nor star markers mentioned upthread.\n\nIf those were separate columns, we could implement some sort of\naging scheme whereby somebody who'd not commented for (say)\na week or two would get quasi-automatically moved from the \"active\nreviewer\" column to the \"interested\" column, whereupon it wouldn't\nbe impolite for someone else to sign up for active review.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 17:04:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/16/24 16:57, Jacob Champion wrote:\n> On Thu, May 16, 2024 at 1:31 PM Joe Conway <[email protected]> wrote:\n>> Maybe we should just make it a policy that *nothing* gets moved forward\n>> from commitfest-to-commitfest and therefore the author needs to care\n>> enough to register for the next one?\n> \n> I think that's going to severely disadvantage anyone who doesn't do\n> this as their day job. Maybe I'm bristling a bit too much at the\n> wording, but not having time to shepherd a patch is not the same as\n> not caring.\n\nMaybe the word \"care\" was a poor choice, but forcing authors to think \nabout and decide if they have the \"time to shepherd a patch\" for the \n*next CF* is exactly the point. If they don't, why clutter the CF with it.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:06:56 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Jacob Champion <[email protected]> writes:\n> On Thu, May 16, 2024 at 1:31 PM Joe Conway <[email protected]> wrote:\n>> Maybe we should just make it a policy that *nothing* gets moved forward\n>> from commitfest-to-commitfest and therefore the author needs to care\n>> enough to register for the next one?\n\n> I think that's going to severely disadvantage anyone who doesn't do\n> this as their day job. Maybe I'm bristling a bit too much at the\n> wording, but not having time to shepherd a patch is not the same as\n> not caring.\n\nAlso, I doubt that there are all that many patches that have simply\nbeen abandoned by their authors. Our problem is the same as it's\nbeen for many years: not enough review person-power, rather than\nnot enough patches. 
So I think authors would just jump through that\nhoop, or enough of them would that it wouldn't improve matters.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 17:08:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/16/24 16:48, Magnus Hagander wrote:\n> On Thu, May 16, 2024 at 10:46 PM Melanie Plageman \n> I was reflecting on why a general purpose patch tracker sounded\n> appealing to me, and I realized that, at least at this time of year, I\n> have a few patches that really count as \"waiting on author\" that I\n> know I need to do additional work on before they need more review but\n> which aren't currently my top priority. I should probably simply\n> withdraw and re-register them. My justification was that I'll lose\n> them if I don't keep them in the commitfest app. But, I could just,\n> you know, save them somewhere myself instead of polluting the\n> commitfest app with them. I don't know if others are in this\n> situation. Anyway, I'm definitely currently guilty of parking.\n> \n> \n> One thing I think we've talked about before (but not done) is to \n> basically have a CF called \"parking lot\", where you can park patches \n> that aren't active in a commitfest  but you also don't want to be dead. \n> It would probably also be doable to have the cf bot run patches in that \n> commitfest as well as the current one, if that's what people are using \n> it for there.\n\n+1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:10:23 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 1:46 PM Melanie Plageman <[email protected]>\nwrote:\n\n>\n> I should probably simply\n> withdraw and re-register them. My justification was that I'll lose\n> them if I don't keep them in the commitfest app. But, I could just,\n> you know, save them somewhere myself instead of polluting the\n> commitfest app with them. I don't know if others are in this\n> situation. Anyway, I'm definitely currently guilty of parking.\n>\n>\nI use a personal JIRA to track the stuff that I hope makes it into the\ncodebase, as well as just starring the corresponding emails in the\nshort-term. Every patch ever submitted sits in the mailing list archive so\nI have no real need to preserve git branches with my submitted work on\nthem. At lot of my work comes down to lucky timing so I'm mostly content\nwith just pinging my draft patches on the email thread once in a while\nhoping there is interest and time from someone. For stuff that I would be\nOK committing as submitted I'll add it to the commitfest and wait for\nsomeone to either agree or point out where I can improve things.\n\nAdding both these kinds to WIP appeals to me, particularly with something\nakin to a \"Collaboration Wanted\" category in addition to \"Needs Review\" for\nwhen I think it is ready, and \"Waiting on Author\" for stuff that has\npending feedback to resolve - or the author isn't currently fishing for\nreviewer time for whatever reason. 
Ideally there would be no rejections,\nonly constructive feedback that convinces the author that, whether for now\nor forever, the proposed patch should be withdrawn pending some change in\ncircumstances that suggests the world is ready for it.\n\nDavid J.\n\nOn Thu, May 16, 2024 at 1:46 PM Melanie Plageman <[email protected]> wrote: I should probably simply\nwithdraw and re-register them. My justification was that I'll lose\nthem if I don't keep them in the commitfest app. But, I could just,\nyou know, save them somewhere myself instead of polluting the\ncommitfest app with them. I don't know if others are in this\nsituation. Anyway, I'm definitely currently guilty of parking.I use a personal JIRA to track the stuff that I hope makes it into the codebase, as well as just starring the corresponding emails in the short-term.  Every patch ever submitted sits in the mailing list archive so I have no real need to preserve git branches with my submitted work on them.  At lot of my work comes down to lucky timing so I'm mostly content with just pinging my draft patches on the email thread once in a while hoping there is interest and time from someone.  For stuff that I would be OK committing as submitted I'll add it to the commitfest and wait for someone to either agree or point out where I can improve things.Adding both these kinds to WIP appeals to me, particularly with something akin to a \"Collaboration Wanted\" category in addition to \"Needs Review\" for when I think it is ready, and \"Waiting on Author\" for stuff that has pending feedback to resolve - or the author isn't currently fishing for reviewer time for whatever reason.  Ideally there would be no rejections, only constructive feedback that convinces the author that, whether for now or forever, the proposed patch should be withdrawn pending some change in circumstances that suggests the world is ready for it.David J.", "msg_date": "Thu, 16 May 2024 14:15:39 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 2:06 PM Joe Conway <[email protected]> wrote:\n> Maybe the word \"care\" was a poor choice, but forcing authors to think\n> about and decide if they have the \"time to shepherd a patch\" for the\n> *next CF* is exactly the point. If they don't, why clutter the CF with it.\n\nBecause the community regularly encourages new patch contributors to\npark their stuff in it, without first asking them to sign on the\ndotted line and commit to the next X months of their free time. If\nthat's not appropriate, then I think we should decide what those\ncontributors need to do instead, rather than making a new bar for them\nto clear.\n\n--Jacob\n\n\n", "msg_date": "Thu, 16 May 2024 14:24:20 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/16/24 17:24, Jacob Champion wrote:\n> On Thu, May 16, 2024 at 2:06 PM Joe Conway <[email protected]> wrote:\n>> Maybe the word \"care\" was a poor choice, but forcing authors to think\n>> about and decide if they have the \"time to shepherd a patch\" for the\n>> *next CF* is exactly the point. 
If they don't, why clutter the CF with it.\n> \n> Because the community regularly encourages new patch contributors to\n> park their stuff in it, without first asking them to sign on the\n> dotted line and commit to the next X months of their free time. If\n> that's not appropriate, then I think we should decide what those\n> contributors need to do instead, rather than making a new bar for them\n> to clear.\n\nIf no one, including the author (new or otherwise) is interested in \nshepherding a particular patch, what chance does it have of ever getting \ncommitted?\n\nIMHO the probability is indistinguishable from zero anyway.\n\nPerhaps we should be more explicit to new contributors that they need to \neither own their patch through the process, or convince someone to do it \nfor them.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:29:23 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 16.05.24 22:46, Melanie Plageman wrote:\n> I was reflecting on why a general purpose patch tracker sounded\n> appealing to me, and I realized that, at least at this time of year, I\n> have a few patches that really count as \"waiting on author\" that I\n> know I need to do additional work on before they need more review but\n> which aren't currently my top priority. I should probably simply\n> withdraw and re-register them. My justification was that I'll lose\n> them if I don't keep them in the commitfest app. But, I could just,\n> you know, save them somewhere myself instead of polluting the\n> commitfest app with them. I don't know if others are in this\n> situation. Anyway, I'm definitely currently guilty of parking.\n\nI don't have a problem with that at all. It's pretty understandable \nthat many patches are parked between say April and July.\n\nThe key is the keep the status up to date.\n\nIf we have 890 patches in Waiting for Author and 100 patches in Ready \nfor Committer and 10 patches in Needs Review, and those 10 patches are \nactually reviewable, then that's good. There might need to be a \n\"background process\" to make sure those 890 waiting patches are not \nparked forever and those 100 patches actually get committed sometime, \nbut in principle this is the system working.\n\nThe problem is if we have 180 patches in Needs Review, and only 20 are \nreally actually ready to be reviewed. 
And a second-order problem is \nthat if you already know that this will be the case, you give up before \neven looking.\n\n\n\n", "msg_date": "Thu, 16 May 2024 23:35:45 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 2:29 PM Joe Conway <[email protected]> wrote:\n> If no one, including the author (new or otherwise) is interested in\n> shepherding a particular patch, what chance does it have of ever getting\n> committed?\n\nThat's a very different thing from what I think will actually happen, which is\n\n- new author posts patch\n- community member says \"use commitfest!\"\n- new author registers patch\n- no one reviews it\n- patch gets automatically booted\n- community member says \"register it again!\"\n- new author says ಠ_ಠ\n\nLike Tom said upthread, the issue isn't really that new authors are\nsomehow uninterested in their own patches.\n\n--Jacob\n\n\n", "msg_date": "Thu, 16 May 2024 14:36:53 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 16.05.24 23:04, Tom Lane wrote:\n> I think it'd be great if we could separate \"I'm actively reviewing\n> this\" from \"I'm interested in this\". As a bonus, adding yourself\n> to the \"interested\" list would be a fine proxy for the thumbs-up\n> or star markers mentioned upthread.\n> \n> If those were separate columns, we could implement some sort of\n> aging scheme whereby somebody who'd not commented for (say)\n> a week or two would get quasi-automatically moved from the \"active\n> reviewer\" column to the \"interested\" column, whereupon it wouldn't\n> be impolite for someone else to sign up for active review.\n\nYes, I think some variant of this could be quite useful.\n\n\n", "msg_date": "Thu, 16 May 2024 23:38:08 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 16.05.24 23:06, Joe Conway wrote:\n> On 5/16/24 16:57, Jacob Champion wrote:\n>> On Thu, May 16, 2024 at 1:31 PM Joe Conway <[email protected]> wrote:\n>>> Maybe we should just make it a policy that *nothing* gets moved forward\n>>> from commitfest-to-commitfest and therefore the author needs to care\n>>> enough to register for the next one?\n>>\n>> I think that's going to severely disadvantage anyone who doesn't do\n>> this as their day job. Maybe I'm bristling a bit too much at the\n>> wording, but not having time to shepherd a patch is not the same as\n>> not caring.\n> \n> Maybe the word \"care\" was a poor choice, but forcing authors to think \n> about and decide if they have the \"time to shepherd a patch\" for the \n> *next CF* is exactly the point. If they don't, why clutter the CF with it.\n\nObjectively, I think this could be quite effective. You need to prove \nyour continued interest in your project by pressing a button every two \nmonths.\n\nBut there is a high risk that this will double the annoyance for \ncontributors whose patches aren't getting reviews. 
Now, not only are \nyou being ignored, but you need to prove that you're still there every \ntwo months.\n\n\n\n", "msg_date": "Thu, 16 May 2024 23:43:19 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 2:04 PM Tom Lane <[email protected]> wrote:\n> Oh, that's an independent pet peeve of mine. Usually, if I'm\n> looking over the CF list for a patch to review, I skip over ones\n> that already show an assigned reviewer, because I don't want to\n> step on that person's toes. But it seems very common to put\n> one's name down for review without any immediate intention of\n> doing work. Or to do a review and wander off, leaving the patch\n> apparently being tended to but not really. (And I confess I'm\n> occasionally guilty of both things myself.)\n\nYep, I do the same thing (and have the same pet peeve).\n\n> I think it'd be great if we could separate \"I'm actively reviewing\n> this\" from \"I'm interested in this\". As a bonus, adding yourself\n> to the \"interested\" list would be a fine proxy for the thumbs-up\n> or star markers mentioned upthread.\n>\n> If those were separate columns, we could implement some sort of\n> aging scheme whereby somebody who'd not commented for (say)\n> a week or two would get quasi-automatically moved from the \"active\n> reviewer\" column to the \"interested\" column, whereupon it wouldn't\n> be impolite for someone else to sign up for active review.\n\n+1!\n\n--Jacob\n\n\n", "msg_date": "Thu, 16 May 2024 14:43:47 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> The problem is if we have 180 patches in Needs Review, and only 20 are \n> really actually ready to be reviewed. And a second-order problem is \n> that if you already know that this will be the case, you give up before \n> even looking.\n\nRight, so what can we do about that? Does Needs Review state need to\nbe subdivided, and if so how?\n\nIf it's just that a patch should be in some other state altogether,\nwe should simply encourage people to change the state as soon as they\ndiscover that. I think the problem is not so much \"90% are in the\nwrong state\" as \"each potential reviewer has to rediscover that\".\n\nAt this point it seems like there's consensus to have a \"parking\"\nsection of the CF app, separate from the time-boxed CFs, and I hope\nsomebody will go make that happen. But I don't think that's our only\nissue, so we need to keep thinking about what should be improved.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 17:46:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On 16 May 2024, at 23:46, Tom Lane <[email protected]> wrote:\n\n> At this point it seems like there's consensus to have a \"parking\"\n> section of the CF app, separate from the time-boxed CFs, and I hope\n> somebody will go make that happen. But I don't think that's our only\n> issue, so we need to keep thinking about what should be improved.\n\nPulling in the information from the CFBot regarding test failures and whether\nthe patch applies at all, and automatically altering the state to WOA for at\nleast the latter category would be nice. 
There's likely to be more analysis we\ncan do on the thread to measure \"activity/hotness\", but starting with the\nsimple boolean data we already have could be a good start.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 16 May 2024 23:53:17 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/16/24 17:36, Jacob Champion wrote:\n> On Thu, May 16, 2024 at 2:29 PM Joe Conway <[email protected]> wrote:\n>> If no one, including the author (new or otherwise) is interested in\n>> shepherding a particular patch, what chance does it have of ever getting\n>> committed?\n> \n> That's a very different thing from what I think will actually happen, which is\n> \n> - new author posts patch\n> - community member says \"use commitfest!\"\n\nHere is where we should point them at something that explains the care \nand feeding requirements to successfully grow a patch into a commit.\n\n> - new author registers patch\n> - no one reviews it\n> - patch gets automatically booted\n\nPart of the care and feeding instructions should be a warning regarding \nwhat happens if you are unsuccessful in the first CF and still want to \nsee it through.\n\n> - community member says \"register it again!\"\n> - new author says ಠ_ಠ\n\nAs long as this is not a surprise ending, I don't see the issue.\n\n> Like Tom said upthread, the issue isn't really that new authors are\n> somehow uninterested in their own patches.\n\nFirst, some of them objectively are uninterested in doing more than \ndropping a patch over the wall and never looking back. But admittedly \nthat is not too often.\n\nSecond, I don't think a once every two months effort in order to \nregister continuing interest is too much to ask.\n\nAnd third, if we did something like Magnus' suggestion about a CF \nparking lot, the process would be even more simple.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Thu, 16 May 2024 18:00:23 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> Pulling in the information from the CFBot regarding test failures and whether\n> the patch applies at all, and automatically altering the state to WOA for at\n> least the latter category would be nice.\n\n+1. There are enough intermittent test failures that I don't think\nchanging state for that would be good, but patch apply failure is\nusually persistent.\n\nI gather that the CFBot is still a kluge that is web-scraping the CF\napp rather than being actually integrated with it, and that there are\nplans to make that better that haven't been moving fast. Probably\nthat would have to happen first, but there would be a lot of benefit\nfrom having the info flow be two-way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 18:03:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 16.05.24 23:46, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> The problem is if we have 180 patches in Needs Review, and only 20 are\n>> really actually ready to be reviewed. 
And a second-order problem is\n>> that if you already know that this will be the case, you give up before\n>> even looking.\n> \n> Right, so what can we do about that? Does Needs Review state need to\n> be subdivided, and if so how?\n\nMaybe a new state \"Unclear\". Then a commitfest manager, or someone like \nRobert just now, can more easily power through the list and set \neverything that's doubtful to Unclear, with the understanding that the \nauthor can set it back to Needs Review to signal that they actually \nthink it's ready. Or, as a variant of what someone else was saying, \njust set all patches that carry over from March to July as Unclear. Or \nsomething like that.\n\nI think, if we consider the core mission of the commitfest app, we need \nto be more protective of the Needs Review state.\n\nI have been, from time to time, when I'm in commitfest management mode, \naggressive in setting entries back to Waiting on Author, but that's not \nalways optimal.\n\nSo a third status that encompasses the various other situations like \nmaybe forgotten by author, disagreements between author and reviewer, \nprocess difficulties, needs some senior developer intervention, etc. \ncould be helpful.\n\nThis could also help scale the commitfest manager role, because then the \nhead CFM could have like three helpers and just tell them, look at all \nthe \"Unclear\" ones and help resolve them.\n\n\n", "msg_date": "Fri, 17 May 2024 00:03:30 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 16.05.24 23:46, Tom Lane wrote:\n>> Right, so what can we do about that? Does Needs Review state need to\n>> be subdivided, and if so how?\n\n> Maybe a new state \"Unclear\". ...\n\n> I think, if we consider the core mission of the commitfest app, we need \n> to be more protective of the Needs Review state.\n\nYeah, makes sense.\n\n> So a third status that encompasses the various other situations like \n> maybe forgotten by author, disagreements between author and reviewer, \n> process difficulties, needs some senior developer intervention, etc. \n> could be helpful.\n\nHmm, \"forgotten by author\" seems to generally turn into \"this has been\nin WOA state a long time\". Not sure we have a problem representing\nthat, only with a process for eventually retiring such entries.\nYour other three examples all sound like \"needs senior developer\nattention\", which could be a helpful state that's distinct from \"ready\nfor committer\". It's definitely not the same as \"Unclear\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 18:13:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 5:04 PM Tom Lane <[email protected]> wrote:\n>\n> Jacob Champion <[email protected]> writes:\n> > ... Similar to how people currently use the\n> > Reviewer field as a personal TODO list... it might be nice to\n> > officially separate the ideas a bit.\n>\n> Oh, that's an independent pet peeve of mine. Usually, if I'm\n> looking over the CF list for a patch to review, I skip over ones\n> that already show an assigned reviewer, because I don't want to\n> step on that person's toes. But it seems very common to put\n> one's name down for review without any immediate intention of\n> doing work. 
Or to do a review and wander off, leaving the patch\n> apparently being tended to but not really. (And I confess I'm\n> occasionally guilty of both things myself.)\n>\n> I think it'd be great if we could separate \"I'm actively reviewing\n> this\" from \"I'm interested in this\". As a bonus, adding yourself\n> to the \"interested\" list would be a fine proxy for the thumbs-up\n> or star markers mentioned upthread.\n>\n> If those were separate columns, we could implement some sort of\n> aging scheme whereby somebody who'd not commented for (say)\n> a week or two would get quasi-automatically moved from the \"active\n> reviewer\" column to the \"interested\" column, whereupon it wouldn't\n> be impolite for someone else to sign up for active review.\n\nI really like the idea of an \"interested\" column of some sort. I think\nhaving some idea of what patches have interest is independently\nvaluable and helps us distinguish patches that no committer was\ninterested enough to work on and patches that no one thinks is a good\nidea at all.\n\nAs for having multiple categories of reviewer, it's almost like we\nneed someone to take responsibility for shifting the patch to the next\nstate -- where the next state isn't necessarily \"committed\". We tend\nto wait and assign a committer if the patch is actually committable\nand will get committed. Lots of people review a patch without claiming\nthat were the author to address all of the feedback, the patch would\nbe committable. It might be helpful if someone could sign up to\nshepherd the patch to its next state -- regardless of what that state\nis.\n\n- Melanie\n\n\n", "msg_date": "Thu, 16 May 2024 18:24:00 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 5:46 PM Tom Lane <[email protected]> wrote:\n> Right, so what can we do about that? Does Needs Review state need to\n> be subdivided, and if so how?\n\nIt doesn't really matter how many states we have if they're not set accurately.\n\n> At this point it seems like there's consensus to have a \"parking\"\n> section of the CF app, separate from the time-boxed CFs, and I hope\n> somebody will go make that happen. But I don't think that's our only\n> issue, so we need to keep thinking about what should be improved.\n\nI do *emphatically* think we need a parking lot. And a better\nintegration between commitfest.postgresql.org and the cfbot, too. It's\njust ridiculous that more work hasn't been put into this. I'm not\nfaulting Thomas or anyone else -- I mean, it's not his job to build\nthe infrastructure the project needs any more than it is anyone else's\n-- but for a project of the size and importance of PostgreSQL to take\nyears to make minor improvements to this kind of critical\ninfrastructure is kind of nuts. If we don't have enough volunteers,\nlet's go recruit some more and promise them cookies.\n\nI think you're overestimating the extent to which the problem is \"we\ndon't reject enough patches\". That *is* a problem, but it seems we\nhave also started rejecting some patches that just never really got\nmuch attention for no particularly good reason, while letting other\nthings linger that didn't really get that much more attention, or\nwhich were objectively much worse ideas. I think that one place where\nthe process is breaking down is in the tacit assumption that the\nperson who wrote the patch wants to get it committed. 
In some cases,\npeople park things in the CF for CI runs without a strong intention of\npursuing them; in other cases, the patch author is in no kind of rush.\n\nI think we need to move to a system where patches exist independent of\na CommitFest, and get CI run on them unless they get explicitly closed\nor are too inactive or some other criterion that boils down to nobody\ncares any more. Then, we need to get whatever subset of those patches\nneed to be reviewed in a particular CommitFest added to that\nCommitFest. For example, imagine that the CommitFest is FORCIBLY empty\nuntil a week before it starts. You can still register patches in the\nsystem generally, but that just means they get CI runs, not that\nthey're scheduled to be reviewed. A week before the CommitFest,\neveryone who has a patch registered in the system that still applies\ngets an email saying \"click here if you think this patch should be\nreviewed in the upcoming CommitFest -- if you don't care about the\npatch any more or it needs more work before other people review it,\ndon't click here\". Then, the CommitFest ends up containing only the\nthings where the patch author clicked there during that week.\n\nFor bonus points, suppose we make it so that when you click the link,\nit takes you to a box where you can type in a text comment that will\ndisplay in the app, explaining why your patch needs review: \"Robert\nsays the wire protocol changes in this patch are wrong, but I think\nhe's full of hot garbage and want a second opinion!\" (a purely\nhypothetical example, I'm sure) If you put in a comment like this when\nyou register your patch for the CommitFest, it gets a sparkly gold\nborder and floats to the top of the list, or we mail you a Kit-Kat\nbar, or something. I don't know.\n\nThe point is - we need a much better signal to noise ratio here. I bet\nthe number of patches in the CommitFest that actually need review is\nsomething like 25% of the total. The rest are things that are just\nparked there by a committer, or that the author doesn't care about\nright now, or that are already being actively discussed, or where\nthere's not a clear way forward. We could create new statuses for all\nof those states - \"Parked\", \"In Hibernation,\" \"Under Discussion,\" and\n\"Unclear\" - but I think that's missing the point. What we really want\nis to not see that stuff in the first place. It's a CommitFest, not\nonce-upon-a-time-I-wrote-a-patch-Fest.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 16 May 2024 22:26:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 17.05.24 00:13, Tom Lane wrote:\n>> So a third status that encompasses the various other situations like\n>> maybe forgotten by author, disagreements between author and reviewer,\n>> process difficulties, needs some senior developer intervention, etc.\n>> could be helpful.\n> \n> Hmm, \"forgotten by author\" seems to generally turn into \"this has been\n> in WOA state a long time\". Not sure we have a problem representing\n> that, only with a process for eventually retiring such entries.\n> Your other three examples all sound like \"needs senior developer\n> attention\", which could be a helpful state that's distinct from \"ready\n> for committer\". It's definitely not the same as \"Unclear\".\n\nYeah, some fine-tuning might be required. 
But I would be wary of \nover-designing too many new states at this point. I think the key idea \nis that there ought to be a state that communicates \"needs attention \nfrom someone other than author, reviewer, or committer\".\n\n\n\n", "msg_date": "Fri, 17 May 2024 06:58:09 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 17.05.24 04:26, Robert Haas wrote:\n> I do*emphatically* think we need a parking lot.\n\nPeople seem to like this idea; I'm not quite sure I follow it. If you \njust want the automated patch testing, you can do that on your own by \nsetting up github/cirrus for your own account. If you keep emailing the \npublic your patches just because you don't want to set up your private \ntesting tooling, that seems a bit abusive?\n\nAre there other reasons why developers might want their patches \nregistered in a parking lot?\n\nWe also need to consider that the cfbot/cirrus resources are limited. \nWhatever changes we make, we should make sure that they are prioritized \nwell.\n\n\n\n", "msg_date": "Fri, 17 May 2024 07:05:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 17/05/2024 08:05, Peter Eisentraut wrote:\n> On 17.05.24 04:26, Robert Haas wrote:\n>> I do*emphatically* think we need a parking lot.\n> \n> People seem to like this idea; I'm not quite sure I follow it. If you\n> just want the automated patch testing, you can do that on your own by\n> setting up github/cirrus for your own account. If you keep emailing the\n> public your patches just because you don't want to set up your private\n> testing tooling, that seems a bit abusive?\n\nAgreed. Also, if you do want to park a patch in the commitfest, setting \nit to \"Waiting on Author\" is effectively that.\n\nI used to add patches to the commitfest to run CFBot on them, but some \ntime back I bit the bullet and set up github/cirrus to run on my own \ngithub repository. I highly recommend that. It only takes a few clicks, \nand the user experience is much better: push a branch to my own github \nrepository, and cirrus CI runs automatically.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 17 May 2024 10:03:51 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 12:25 AM Maciek Sakrejda <[email protected]>\nwrote:\n\n> Thanks for raising this. As someone who is only modestly familiar with\n> Postgres internals or even C, but would still like to contribute through\n> review, I find the current process of finding a suitable patch both tedious\n> and daunting. The Reviewing a Patch article on the wiki [0] says reviews\n> like mine are still welcome, but it's hard to get started. I'd love to see\n> this be more approachable.\n>\n>\nI totally agreed with Maciek. Its really hard for even an experience\ndeveloper to become a PG contributor or be able to contribute effectively.\n\n\n> Thanks,\n> Maciek\n>\n> [0]: https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n>\n\nOn Fri, May 17, 2024 at 12:25 AM Maciek Sakrejda <[email protected]> wrote:Thanks for raising this. 
As someone who is only modestly familiar with Postgres internals or even C, but would still like to contribute through review, I find the current process of finding a suitable patch both tedious and daunting. The Reviewing a Patch article on the wiki [0] says reviews like mine are still welcome, but it's hard to get started. I'd love to see this be more approachable.  I totally agreed with Maciek. Its really hard for even an experience developer to become a PG contributor or be able to contribute effectively.  Thanks, Maciek[0]: https://wiki.postgresql.org/wiki/Reviewing_a_Patch", "msg_date": "Fri, 17 May 2024 12:19:29 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 17/05/2024 05:26, Robert Haas wrote:\n> For bonus points, suppose we make it so that when you click the link,\n> it takes you to a box where you can type in a text comment that will\n> display in the app, explaining why your patch needs review: \"Robert\n> says the wire protocol changes in this patch are wrong, but I think\n> he's full of hot garbage and want a second opinion!\" (a purely\n> hypothetical example, I'm sure) If you put in a comment like this when\n> you register your patch for the CommitFest, it gets a sparkly gold\n> border and floats to the top of the list, or we mail you a Kit-Kat\n> bar, or something. I don't know.\n\nDunno about having to click a link or sparkly gold borders, but +1 on \nhaving a free-form text box for notes like that. Things like \"cfbot says \nthis has bitrotted\" or \"Will review this after other patch this depends \non\". On the mailing list, notes like that are both noisy and easily lost \nin the threads. But as a little free-form text box on the commitfest, it \nwould be handy.\n\nOne risk is that if we start to rely too much on that, or on the other \nfields in the commitfest app for that matter, we de-value the mailing \nlist archives. I'm not too worried about it, the idea is that the \nsummary box just summarizes what's already been said on the mailing \nlist, or is transient information like \"I'll get to this tomorrow\" \nthat's not interesting to archive.\n\n> The point is - we need a much better signal to noise ratio here. I bet\n> the number of patches in the CommitFest that actually need review is\n> something like 25% of the total. The rest are things that are just\n> parked there by a committer, or that the author doesn't care about\n> right now, or that are already being actively discussed, or where\n> there's not a clear way forward. We could create new statuses for all\n> of those states - \"Parked\", \"In Hibernation,\" \"Under Discussion,\" and\n> \"Unclear\" - but I think that's missing the point. What we really want\n> is to not see that stuff in the first place. It's a CommitFest, not\n> once-upon-a-time-I-wrote-a-patch-Fest.\n\nYeah, I'm also skeptical of adding new categories or statuses to the \ncommitfest.\n\nI sometimes add patches to the commitfest that are not ready to be \ncommitted, because I want review on the general idea or approach, before \npolishing the patch to final state. That's also a fine use of the \ncommitfest app. 
The expected workflow is to get some review on the \npatch, and then move it back to Waiting on Author.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 17 May 2024 10:32:54 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 17 May 2024 at 06:58, Peter Eisentraut <[email protected]> wrote:\n> Yeah, some fine-tuning might be required. But I would be wary of\n> over-designing too many new states at this point. I think the key idea\n> is that there ought to be a state that communicates \"needs attention\n> from someone other than author, reviewer, or committer\".\n\n+1 on adding a new state like this. I think it would make sense for\nthe author to request additional input, but I think it would need two\nstates, something like \"Request for new reviewer\" and \"Request for new\ncommitter\". Just like authors disappear sometimes after a driveby\npatch submission, it's fairly common too imho for reviewers or\ncommitters to disappear after a driveby review. Having a way for an\nauthor to mark a patch as such would be helpful, both to the author,\nand to reviewers/committers looking to do help some patch out.\n\nOn Fri, 17 May 2024 at 09:33, Heikki Linnakangas <[email protected]> wrote:\n> Things like \"cfbot says\n> this has bitrotted\" or \"Will review this after other patch this depends\n> on\". On the mailing list, notes like that are both noisy and easily lost\n> in the threads. But as a little free-form text box on the commitfest, it\n> would be handy.\n\n+many on the free form text box\n\n\n", "msg_date": "Fri, 17 May 2024 10:15:39 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On 17 May 2024, at 09:32, Heikki Linnakangas <[email protected]> wrote:\n\n> On the mailing list, notes like that are both noisy and easily lost in the threads. But as a little free-form text box on the commitfest, it would be handy.\n\nOn a similar note, I have in the past suggested adding a free-form textfield to\nthe patch submission form for the author to give a short summary of what the\npatch does/adds/requires etc. While the thread contains all of this, it's\nlikely quite overwhelming for many in general and new contributors in\nparticular. A short note, on purpose limited to ~500 chars or so to not allow\nmailinglist post copy/paste, could be helpful there I think (I've certainly\nwanted one, many times over, especially when doing CFM).\n\n> One risk is that if we start to rely too much on that, or on the other fields in the commitfest app for that matter, we de-value the mailing list archives. 
I'm not too worried about it, the idea is that the summary box just summarizes what's already been said on the mailing list, or is transient information like \"I'll get to this tomorrow\" that's not interesting to archive.\n\nOne way to ensure we capture detail could be if the system would send an\nautomated email to the thread summarizing the entry when it's marked as\n\"committed\"?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 17 May 2024 10:47:08 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On 17 May 2024, at 00:03, Peter Eisentraut <[email protected]> wrote:\n\n> I think, if we consider the core mission of the commitfest app, we need to be more protective of the Needs Review state.\n\nIMHO this is a very very good summary of what we should focus on with this\nwork.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 17 May 2024 10:58:08 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 17 May 2024 at 10:47, Daniel Gustafsson <[email protected]> wrote:\n> One way to ensure we capture detail could be if the system would send an\n> automated email to the thread summarizing the entry when it's marked as\n> \"committed\"?\n\nThis sounds great! Especially if Going back from an archived thread\nto the commitfest entry or git commit is currently very hard. If I'll\njust be able to Ctrl+F on [email protected] that would be so\nhelpful. I think it would even be useful to have an email be sent\nwhenever a patch gets initially added to the commitfest, so that\nthere's a back link to and it's easy to mark yourself as reviewer.\nRight now, I almost never take the time to do that because if I look\nat my inbox, I have no clue what the interesting email thread is\ncalled in the commitfest app.\n\n\n", "msg_date": "Fri, 17 May 2024 11:02:45 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 17 May 2024 at 11:02, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 17 May 2024 at 10:47, Daniel Gustafsson <[email protected]> wrote:\n> > One way to ensure we capture detail could be if the system would send an\n> > automated email to the thread summarizing the entry when it's marked as\n> > \"committed\"?\n>\n> This sounds great! Especially if\n\n(oops pressed send too early)\n**Especially if it contains the git commit hash**\n\n> Going back from an archived thread\n> to the commitfest entry or git commit is currently very hard. If I'll\n> just be able to Ctrl+F on [email protected] that would be so\n> helpful. 
I think it would even be useful to have an email be sent\n> whenever a patch gets initially added to the commitfest, so that\n> there's a back link to and it's easy to mark yourself as reviewer.\n> Right now, I almost never take the time to do that because if I look\n> at my inbox, I have no clue what the interesting email thread is\n> called in the commitfest app.\n\n\n", "msg_date": "Fri, 17 May 2024 11:03:54 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\n\n> On 16 May 2024, at 23:30, Robert Haas <[email protected]> wrote:\n> \n\nI think we just need 10x CFMs. Let’s have a CFM for each CF section. I’d happily take \"Replication and Recovery” or “System Administration” for July. “Miscellaneous” or “Performance” look monstrous.\nI feel I can easily track ~20 threads (besides threads I’m interested in). But it’s too tedious to spread attention among ~200 items.\n\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Fri, 17 May 2024 14:11:21 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "I think there are actually a number of factors that make this much harder.\n\nOn Fri, May 17, 2024 at 2:33 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 17/05/2024 05:26, Robert Haas wrote:\n> > For bonus points, suppose we make it so that when you click the link,\n> > it takes you to a box where you can type in a text comment that will\n> > display in the app, explaining why your patch needs review: \"Robert\n> > says the wire protocol changes in this patch are wrong, but I think\n> > he's full of hot garbage and want a second opinion!\" (a purely\n> > hypothetical example, I'm sure) If you put in a comment like this when\n> > you register your patch for the CommitFest, it gets a sparkly gold\n> > border and floats to the top of the list, or we mail you a Kit-Kat\n> > bar, or something. I don't know.\n>\n> Dunno about having to click a link or sparkly gold borders, but +1 on\n> having a free-form text box for notes like that. Things like \"cfbot says\n> this has bitrotted\" or \"Will review this after other patch this depends\n> on\". On the mailing list, notes like that are both noisy and easily lost\n> in the threads. But as a little free-form text box on the commitfest, it\n> would be handy.\n>\n> One risk is that if we start to rely too much on that, or on the other\n> fields in the commitfest app for that matter, we de-value the mailing\n> list archives. I'm not too worried about it, the idea is that the\n> summary box just summarizes what's already been said on the mailing\n> list, or is transient information like \"I'll get to this tomorrow\"\n> that's not interesting to archive.\n>\n> > The point is - we need a much better signal to noise ratio here. I bet\n> > the number of patches in the CommitFest that actually need review is\n> > something like 25% of the total. The rest are things that are just\n> > parked there by a committer, or that the author doesn't care about\n> > right now, or that are already being actively discussed, or where\n> > there's not a clear way forward. We could create new statuses for all\n> > of those states - \"Parked\", \"In Hibernation,\" \"Under Discussion,\" and\n> > \"Unclear\" - but I think that's missing the point. What we really want\n> > is to not see that stuff in the first place. 
It's a CommitFest, not\n> > once-upon-a-time-I-wrote-a-patch-Fest.\n\nYeah this is a problem.\n\nI think in cases here something is in hibernation or unclear it really\nshould be \"returned with feedback.\" There's really nothing stopping\nsomeone from learning from the experience and resubmitting an improved\nversion.\n\nI think under discussion is also rather unclear. The current statuses\nalready cover this sort of thing (waiting for author and waiting for\nreview). Maybe we could improve the categories here but it is\nimportant to note whether the author or a reviewer is expected to take\nthe next step.\n\nIf the author doesn't respond within a period of time (let's say 30\ndays), I think we can just say \"returned with feedback.\"\n\nSince you can already attach older threads to a commitfest entry, it\nseems to me that we ought to be more aggressive with \"returned with\nfeedback\" and note that this doesn't necessarily mean \"rejected\" which\nis a separate status which we rarely use.\n>\n> Yeah, I'm also skeptical of adding new categories or statuses to the\n> commitfest.\n>\n> I sometimes add patches to the commitfest that are not ready to be\n> committed, because I want review on the general idea or approach, before\n> polishing the patch to final state. That's also a fine use of the\n> commitfest app. The expected workflow is to get some review on the\n> patch, and then move it back to Waiting on Author.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\n>\n\n\n", "msg_date": "Fri, 17 May 2024 16:19:01 +0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/16/24 23:43, Peter Eisentraut wrote:\n> On 16.05.24 23:06, Joe Conway wrote:\n>> On 5/16/24 16:57, Jacob Champion wrote:\n>>> On Thu, May 16, 2024 at 1:31 PM Joe Conway <[email protected]> wrote:\n>>>> Maybe we should just make it a policy that *nothing* gets moved forward\n>>>> from commitfest-to-commitfest and therefore the author needs to care\n>>>> enough to register for the next one?\n>>>\n>>> I think that's going to severely disadvantage anyone who doesn't do\n>>> this as their day job. Maybe I'm bristling a bit too much at the\n>>> wording, but not having time to shepherd a patch is not the same as\n>>> not caring.\n>>\n>> Maybe the word \"care\" was a poor choice, but forcing authors to think\n>> about and decide if they have the \"time to shepherd a patch\" for the\n>> *next CF* is exactly the point. If they don't, why clutter the CF with\n>> it.\n> \n> Objectively, I think this could be quite effective.  You need to prove\n> your continued interest in your project by pressing a button every two\n> months.\n> \n> But there is a high risk that this will double the annoyance for\n> contributors whose patches aren't getting reviews.  Now, not only are\n> you being ignored, but you need to prove that you're still there every\n> two months.\n> \n\nYeah, I 100% agree with this. If a patch bitrots and no one cares enough\nto rebase it once in a while, then sure - it's probably fine to mark it\nRwF. But forcing all contributors to do a dance every 2 months just to\nhave a chance someone might take a look, seems ... 
not great.\n\nI try to see this from the contributors' PoV, and with this process I'm\nsure I'd start questioning if I even want to submit patches.\n\nThat is not to say we don't have a problem with patches that just move\nto the next CF, and that we don't need to do something about that ...\n\nIncidentally, I've been preparing some git+CF stats because of a talk\nI'm expected to do, and it's very obvious the number of committed (and\nrejected) CF entries is more or very stable over time, while the number\nof patches that move to the next CF just snowballs.\n\nMy impression is a lot of these contributions/patches just never get the\nreview & attention that would allow them to move forward. Sure, some do\nbitrot and/or get abandoned, and let's RwF those. But forcing everyone\nto re-register the patches over and over seems like \"reject by default\".\nI'd expect a lot of people to stop bothering and give up, and in a way\nthat would \"solve\" the bottleneck. But I'm not sure it's the solution\nwe'd like ...\n\nIt does seem to me a part of the solution needs to be helping to get\nthose patches reviewed. I don't know how to do that, but perhaps there's\na way to encourage people to review more stuff, or review stuff from a\nwider range of contributors. Say by treating reviews more like proper\ncontributions.\n\nLong time ago there was a \"rule\" that people submitting patches are\nexpected to do reviews. Perhaps we should be more strict this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 May 2024 13:11:05 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 1:05 AM Peter Eisentraut <[email protected]> wrote:\n> On 17.05.24 04:26, Robert Haas wrote:\n> > I do*emphatically* think we need a parking lot.\n>\n> People seem to like this idea; I'm not quite sure I follow it. If you\n> just want the automated patch testing, you can do that on your own by\n> setting up github/cirrus for your own account. If you keep emailing the\n> public your patches just because you don't want to set up your private\n> testing tooling, that seems a bit abusive?\n>\n> Are there other reasons why developers might want their patches\n> registered in a parking lot?\n\nIt's easier to say what's happening than it is to say why it's\nhappening. The CF contains a lot of stuff that appears to just be\nparked there, so evidently reasons exist.\n\nBut if we are to guess what those reasons might be, Tom has already\nadmitted he does that for CI, and I do the same, so probably other\npeople also do it. I also suspect that some people are essentially\nusing the CF app as a personal todo list. 
By sticking patches in there\nthat they intend to commit next cycle, they both (1) feel virtuous,\nbecause they give at least the appearance of following the community\nprocess and inviting review before they commit and (2) avoid losing\ntrack of the stuff they plan to commit.\n\nThere may be other reasons, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 07:13:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On 17 May 2024, at 13:13, Robert Haas <[email protected]> wrote:\n\n> But if we are to guess what those reasons might be, Tom has already\n> admitted he does that for CI, and I do the same, so probably other\n> people also do it. I also suspect that some people are essentially\n> using the CF app as a personal todo list. By sticking patches in there\n> that they intend to commit next cycle, they both (1) feel virtuous,\n> because they give at least the appearance of following the community\n> process and inviting review before they commit and (2) avoid losing\n> track of the stuff they plan to commit.\n> \n> There may be other reasons, too.\n\nI think there is one more which is important: 3) Giving visibility into \"this\nis what I intend to commit\". Few can follow -hackers to the level where they\ncan have an overview of ongoing and/or finished work which will go in. The CF\napp does however provide that overview. This is essentially the TODO list\naspect, but sharing one's TODO isn't all bad, especially for maintainers.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 17 May 2024 13:36:34 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 7:11 AM Tomas Vondra\n<[email protected]> wrote:\n> Yeah, I 100% agree with this. If a patch bitrots and no one cares enough\n> to rebase it once in a while, then sure - it's probably fine to mark it\n> RwF. But forcing all contributors to do a dance every 2 months just to\n> have a chance someone might take a look, seems ... not great.\n\nI don't think that clicking on a link that someone sends you that asks\n\"hey, is this ready to be reviewed' qualifies as a dance.\n\nI'm open to other proposals. But I think that if the proposal is\nessentially \"Hey, let's have the CFMs try harder to do the thing that\nwe've already been asking them to try to do for the last N years,\"\nthen we might as well just not bother. It's obviously not working, and\nit's not going to start working because we repeat the same things\nabout bouncing patches more aggressively. I just spent 2 days on it\nand moved a handful of entries forward. To make a single pass over the\nwhole CommitFest at the rate I was going would take at least two\nweeks, maybe three. And I am a highly experienced committer and\nCommitFest manager with good facility in English and a lot of\nexperience arguing on this mailing list. I'm in the best possible\nposition to be able to do this well and efficiently and I can't. So\nwhere are we going to find the people who can?\n\nI think Andrey Borodin's nearby suggestion of having a separate CfM\nfor each section of the CommitFest does a good job revealing just how\nbad the current situation is. I agree with him: that would actually\nwork. 
Asking somebody, for a one-month period, to be responsible for\nshepherding one-tenth or one-twentieth of the entries in the\nCommitFest would be a reasonable amount of work for somebody. But we\nwill not find 10 or 20 highly motivated, well-qualified volunteers\nevery other month to do that work; it's a struggle to find one or two\nhighly motivated, well-qualified CommitFest managers, let alone ten or\ntwenty. So I think the right interpretation of his comment is that\nmanaging the CommitFest has become about an order of magnitude more\ndifficult than what it needs to be for the task to be done well.\n\n> It does seem to me a part of the solution needs to be helping to get\n> those patches reviewed. I don't know how to do that, but perhaps there's\n> a way to encourage people to review more stuff, or review stuff from a\n> wider range of contributors. Say by treating reviews more like proper\n> contributions.\n\nWell, I know that what would encourage *me* to do that is if I could\nfind the patches that fall into this category easily. I'm still not\ngoing to spend all of my time on it, but when I do have time to spend\non it, I'd rather spend it on stuff that matters than on trying to\ndrain the CommitFest swamp. And right now every time I think \"oh, I\nshould spend some time reviewing other people's patches,\" that time\npromptly evaporates trying to find the patches that actually need\nattention. I rarely get beyond the \"Bug fixes\" section of the\nCommitFest application before I've used up all my available time, not\nleast because some people have figured out that labelling something\nthey don't like as a Bug fix gets it closer to the top of the CF list,\nwhich is alphabetical by section.\n\n> Long time ago there was a \"rule\" that people submitting patches are\n> expected to do reviews. Perhaps we should be more strict this.\n\nThis sounds like it's just generating more manual work to add to a\nsystem that's already suffering from needing too much manual work. Who\nwould keep track of how many reviews each person is doing, and how\nmany patches they're submitting, and whether those reviews were\nactually any good, and what would they do about it?\n\nOne patch that comes to mind here is Thomas Munro's patch for\n\"unaccent: understand ancient Greek \"oxia\" and other codepoints merged\nby Unicode\". Somebody submitted a bug report and Thomas wrote a patch\nand added it to the CommitFest. But there are open questions that need\nto be researched, and this isn't really a priority for Thomas: he was\njust trying to be nice and put somebody's bug report on a track to\nresolution. Now, Thomas has many patches in the CommitFest, so if you\nask \"does he review as much stuff as he has signed up to be reviewed,\"\nhe clearly doesn't. Let's reject all of his patches, including this\none! And if on this specific patch you ask whether the author is\nstaying on top of it, he clearly isn't, so let's reject this one a\nsecond time, just for that. Now, what have we accomplished by doing\nall of that?\n\nNot a whole lot, in general, because Thomas is a committer, so he can\nstill commit those patches if he wants, barring objections. However,\nwe have succeeded in kicking them out of our CI system, so if he does\ncommit them, they'll be more likely to break the buildfarm. 
And in the\ncase of this specific patch, what we've done is punish Thomas for\ntrying to help out somebody who submitted a bug report, and at the\nsame time, made the patch he submitted less visible to anyone who\nmight want to help with it.\n\nWahoo!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 07:39:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/16/24 22:26, Robert Haas wrote:\n> For example, imagine that the CommitFest is FORCIBLY empty\n> until a week before it starts. You can still register patches in the\n> system generally, but that just means they get CI runs, not that\n> they're scheduled to be reviewed. A week before the CommitFest,\n> everyone who has a patch registered in the system that still applies\n> gets an email saying \"click here if you think this patch should be\n> reviewed in the upcoming CommitFest -- if you don't care about the\n> patch any more or it needs more work before other people review it,\n> don't click here\". Then, the CommitFest ends up containing only the\n> things where the patch author clicked there during that week.\n\n100% agree. This is in line with what I suggested on an adjacent part of \nthe thread.\n\n> The point is - we need a much better signal to noise ratio here. I bet\n> the number of patches in the CommitFest that actually need review is\n> something like 25% of the total. The rest are things that are just\n> parked there by a committer, or that the author doesn't care about\n> right now, or that are already being actively discussed, or where\n> there's not a clear way forward.\n\nI think there is another case that no one talks about, but I'm sure \nexists, and that I am not the only one guilty of thinking this way.\n\nNamely, the week before commitfest I don't actually know if I will have \nthe time during that month, but I will make sure my patch is in the \ncommitfest just in case I get a few clear days to work on it. Because if \nit isn't there, I can't take advantage of those \"found\" hours.\n\n> We could create new statuses for all of those states - \"Parked\", \"In \n> Hibernation,\" \"Under Discussion,\" and \"Unclear\" - but I think that's \n> missing the point. What we really want is to not see that stuff in \n> the first place. It's a CommitFest, not\n> once-upon-a-time-I-wrote-a-patch-Fest.\n\n+1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 17 May 2024 08:19:41 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 17 May 2024 at 13:39, Robert Haas <[email protected]> wrote:\n>\n> On Fri, May 17, 2024 at 7:11 AM Tomas Vondra\n> <[email protected]> wrote:\n> > Yeah, I 100% agree with this. If a patch bitrots and no one cares enough\n> > to rebase it once in a while, then sure - it's probably fine to mark it\n> > RwF. But forcing all contributors to do a dance every 2 months just to\n> > have a chance someone might take a look, seems ... not great.\n>\n> I don't think that clicking on a link that someone sends you that asks\n> \"hey, is this ready to be reviewed' qualifies as a dance.\n\nIf there's been any useful response to the patch since the last time\nyou pressed this button, then it might be okay. 
But it's definitely\nnot uncommon for items on the commitfest app to get no actual response\nat all for half a year, i.e. multiple commits fests (except for the\nodd request for a rebase that an author does within a week). I'd most\ncertainly get very annoyed if for those patches where it already seems\nas if I'm screaming into the void I'd also be required to press a\nbutton every two months, for it to even have a chance at receiving a\nresponse.\n\n> So I think the right interpretation of his comment is that\n> managing the CommitFest has become about an order of magnitude more\n> difficult than what it needs to be for the task to be done well.\n\n+1\n\n> > Long time ago there was a \"rule\" that people submitting patches are\n> > expected to do reviews. Perhaps we should be more strict this.\n>\n> This sounds like it's just generating more manual work to add to a\n> system that's already suffering from needing too much manual work. Who\n> would keep track of how many reviews each person is doing, and how\n> many patches they're submitting, and whether those reviews were\n> actually any good, and what would they do about it?\n\n+1\n\n\n", "msg_date": "Fri, 17 May 2024 14:23:42 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 17 May 2024 at 14:19, Joe Conway <[email protected]> wrote:\n>\n> On 5/16/24 22:26, Robert Haas wrote:\n> > For example, imagine that the CommitFest is FORCIBLY empty\n> > until a week before it starts. You can still register patches in the\n> > system generally, but that just means they get CI runs, not that\n> > they're scheduled to be reviewed. A week before the CommitFest,\n> > everyone who has a patch registered in the system that still applies\n> > gets an email saying \"click here if you think this patch should be\n> > reviewed in the upcoming CommitFest -- if you don't care about the\n> > patch any more or it needs more work before other people review it,\n> > don't click here\". Then, the CommitFest ends up containing only the\n> > things where the patch author clicked there during that week.\n>\n> 100% agree. This is in line with what I suggested on an adjacent part of\n> the thread.\n\nSuch a proposal would basically mean that no-one that cares about\ntheir patches getting reviews can go on holiday and leave work behind\nduring the week before a commit fest. That seems quite undesirable to\nme.\n\n\n", "msg_date": "Fri, 17 May 2024 14:31:43 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/17/24 08:31, Jelte Fennema-Nio wrote:\n> On Fri, 17 May 2024 at 14:19, Joe Conway <[email protected]> wrote:\n>>\n>> On 5/16/24 22:26, Robert Haas wrote:\n>> > For example, imagine that the CommitFest is FORCIBLY empty\n>> > until a week before it starts. You can still register patches in the\n>> > system generally, but that just means they get CI runs, not that\n>> > they're scheduled to be reviewed. A week before the CommitFest,\n>> > everyone who has a patch registered in the system that still applies\n>> > gets an email saying \"click here if you think this patch should be\n>> > reviewed in the upcoming CommitFest -- if you don't care about the\n>> > patch any more or it needs more work before other people review it,\n>> > don't click here\". 
Then, the CommitFest ends up containing only the\n>> > things where the patch author clicked there during that week.\n>>\n>> 100% agree. This is in line with what I suggested on an adjacent part of\n>> the thread.\n> \n> Such a proposal would basically mean that no-one that cares about\n> their patches getting reviews can go on holiday and leave work behind\n> during the week before a commit fest. That seems quite undesirable to\n> me.\n\nWell, I'm sure I'll get flamed for this suggestion, but here goes anyway...\n\nI wrote:\n> Namely, the week before commitfest I don't actually know if I will have \n> the time during that month, but I will make sure my patch is in the \n> commitfest just in case I get a few clear days to work on it. Because if \n> it isn't there, I can't take advantage of those \"found\" hours.\n\nA solution to both of these issues (yours and mine) would be to allow \nthings to be added *during* the CF month. What is the point of having a \n\"freeze\" before every CF anyway? Especially if they start out clean. If \nsomething is ready for review on day 8 of the CF, why not let it be \nadded for review?\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 17 May 2024 08:42:04 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 17.05.24 13:36, Daniel Gustafsson wrote:\n>> On 17 May 2024, at 13:13, Robert Haas <[email protected]> wrote:\n> \n>> But if we are to guess what those reasons might be, Tom has already\n>> admitted he does that for CI, and I do the same, so probably other\n>> people also do it. I also suspect that some people are essentially\n>> using the CF app as a personal todo list. By sticking patches in there\n>> that they intend to commit next cycle, they both (1) feel virtuous,\n>> because they give at least the appearance of following the community\n>> process and inviting review before they commit and (2) avoid losing\n>> track of the stuff they plan to commit.\n>>\n>> There may be other reasons, too.\n> \n> I think there is one more which is important: 3) Giving visibility into \"this\n> is what I intend to commit\". Few can follow -hackers to the level where they\n> can have an overview of ongoing and/or finished work which will go in. The CF\n> app does however provide that overview. This is essentially the TODO list\n> aspect, but sharing one's TODO isn't all bad, especially for maintainers.\n\nOk, but these cases shouldn't use a separate \"parking lot\". They should \nuse the same statuses and flow diagram as the rest. (Maybe with more \ndotted lines, sure.)\n\n\n\n", "msg_date": "Fri, 17 May 2024 14:49:54 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\n\n> On 17 May 2024, at 16:39, Robert Haas <[email protected]> wrote:\n> \n> I think Andrey Borodin's nearby suggestion of having a separate CfM\n> for each section of the CommitFest does a good job revealing just how\n> bad the current situation is. I agree with him: that would actually\n> work. Asking somebody, for a one-month period, to be responsible for\n> shepherding one-tenth or one-twentieth of the entries in the\n> CommitFest would be a reasonable amount of work for somebody. 
But we\n> will not find 10 or 20 highly motivated, well-qualified volunteers\n> every other month to do that work;\n\nWhy do you think so? Let’s just try to find more CFMs for July.\nWhen I felt that I’m overwhelmed, I asked for help and Alexander Alekseev promptly agreed. That helped a lot.\nIf I was in that position again, I would just ask 10 times on a 1st day :)\n\n> it's a struggle to find one or two\n> highly motivated, well-qualified CommitFest managers, let alone ten or\n> twenty.\n\nBecause we are looking for one person to do a job for 10.\n\n> So I think the right interpretation of his comment is that\n> managing the CommitFest has become about an order of magnitude more\n> difficult than what it needs to be for the task to be done well.\n\nLet’s scale the process. Reduce responsibility area of a CFM, define it clearer.\nAnd maybe even explicitly ask CFM to summarize patch status of each entry at least once a CF.\n\n\nCan I do a small poll among those who is on this thread? Would you volunteer to summarize a status of 20 patches in July’s CF? 5 each week or so. One per day.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 17 May 2024 17:51:05 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Friday, May 17, 2024, Joe Conway <[email protected]> wrote:\n\n>\n> I wrote:\n>\n>> Namely, the week before commitfest I don't actually know if I will have\n>> the time during that month, but I will make sure my patch is in the\n>> commitfest just in case I get a few clear days to work on it. Because if it\n>> isn't there, I can't take advantage of those \"found\" hours.\n>>\n>\n> A solution to both of these issues (yours and mine) would be to allow\n> things to be added *during* the CF month. What is the point of having a\n> \"freeze\" before every CF anyway? Especially if they start out clean. If\n> something is ready for review on day 8 of the CF, why not let it be added\n> for review?\n>\n\nIn conjunction with WIP removing this limitation on the bimonthlies makes\nsense to me.\n\nDavid J.\n", "msg_date": "Fri, 17 May 2024 05:51:22 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 17.05.24 09:32, Heikki Linnakangas wrote:\n> Dunno about having to click a link or sparkly gold borders, but +1 on \n> having a free-form text box for notes like that. Things like \"cfbot says \n> this has bitrotted\" or \"Will review this after other patch this depends \n> on\". On the mailing list, notes like that are both noisy and easily lost \n> in the threads. 
But as a little free-form text box on the commitfest, it \n> would be handy.\n> \n> One risk is that if we start to rely too much on that, or on the other \n> fields in the commitfest app for that matter, we de-value the mailing \n> list archives. I'm not too worried about it, the idea is that the \n> summary box just summarizes what's already been said on the mailing \n> list, or is transient information like \"I'll get to this tomorrow\" \n> that's not interesting to archive.\n\nWe already have the annotations feature, which is kind of this.\n\n\n", "msg_date": "Fri, 17 May 2024 15:02:10 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 8:31 AM Jelte Fennema-Nio <[email protected]> wrote:\n> Such a proposal would basically mean that no-one that cares about\n> their patches getting reviews can go on holiday and leave work behind\n> during the week before a commit fest. That seems quite undesirable to\n> me.\n\nWell, then we make it ten days instead of seven, or give a few days\ngrace after the CF starts to play catchup, or allow the CfM to make\nexceptions.\n\nTo be fair, I'm not sure that forcing people to do something like this\nis going to solve our problem. I'm very open to other ideas. But one\nidea that I'm not open to is to just keep doing what we're doing. It\nclearly and obviously does not work.\n\nI just tried scrolling through the CommitFest to a more or less random\nspot by flicking the mouse up and down, and then clicked on whatever\nended up in the middle of my screen. I did this four times. Two of\nthose landed on patches that had extremely long discussion threads\nalready. One hit a patch from a non-committer that hasn't been\nreviewed and needs to be. And the fourth hit a patch from a committer\nwhich maybe could benefit from review but I can already guess that the\npatch works fine and unless somebody can find some architectural\ndownside to the approach taken, there's not really a whole lot to talk\nabout.\n\nI don't entirely know how to think about that result, but it seems\npretty clear that the unreviewed non-committer patch ought to get\npriority, especially if we're talking about the possibility of\nnon-committers or even junior committers doing drive-by reviews. The\nhigh-quality committer patch might be worth a comment from me, pro or\ncon or whatever, but it's probably not a great use of time for a more\ncasual contributor: they probably aren't going to find too much wrong\nwith it. And the threads with extremely long threads already, well, I\ndon't know if there's something useful that can be done with those\nthreads or not, but those patches certainly haven't been ignored.\n\nI'm not sure that any of these should be evicted from the CommitFest,\nbut we need to think about how to impose some structure on the chaos.\nJust classifying all four of those entries as either \"Needs Review\" or\n\"Waiting on Author\" is pretty useless; then they all look the same,\nand they're not. And please don't suggest adding a bunch more status\nvalues that the CfM has to manually police as the solution. 
We need to\nfind some way to create a system that does the right thing more often\nby default.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 09:03:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 9:02 AM Peter Eisentraut <[email protected]> wrote:\n> We already have the annotations feature, which is kind of this.\n\nI didn't realize we had that feature. When was it added, and how is it\nsupposed to be used?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 09:05:18 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 17.05.24 14:42, Joe Conway wrote:\n>> Namely, the week before commitfest I don't actually know if I will \n>> have the time during that month, but I will make sure my patch is in \n>> the commitfest just in case I get a few clear days to work on it. \n>> Because if it isn't there, I can't take advantage of those \"found\" hours.\n> \n> A solution to both of these issues (yours and mine) would be to allow \n> things to be added *during* the CF month. What is the point of having a \n> \"freeze\" before every CF anyway? Especially if they start out clean. If \n> something is ready for review on day 8 of the CF, why not let it be \n> added for review?\n\nMaybe this all indicates that the idea of bimonthly commitfests has run \nits course. The original idea might have been, if we have like 50 \npatches, we can process all of them within a month. We're clearly not \ndoing that anymore. How would the development process look like if we \njust had one commitfest per dev cycle that runs from July 1st to March 31st?\n\n\n\n", "msg_date": "Fri, 17 May 2024 15:08:53 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\n\nOn 5/17/24 14:51, Andrey M. Borodin wrote:\n> \n> \n>> On 17 May 2024, at 16:39, Robert Haas <[email protected]> wrote:\n>>\n>> I think Andrey Borodin's nearby suggestion of having a separate CfM\n>> for each section of the CommitFest does a good job revealing just how\n>> bad the current situation is. I agree with him: that would actually\n>> work. Asking somebody, for a one-month period, to be responsible for\n>> shepherding one-tenth or one-twentieth of the entries in the\n>> CommitFest would be a reasonable amount of work for somebody. But we\n>> will not find 10 or 20 highly motivated, well-qualified volunteers\n>> every other month to do that work;\n> \n> Why do you think so? Let’s just try to find more CFMs for July.\n> When I felt that I’m overwhelmed, I asked for help and Alexander Alekseev promptly agreed. That helped a lot.\n> If I was in that position again, I would just ask 10 times on a 1st day :)\n> \n>> it's a struggle to find one or two\n>> highly motivated, well-qualified CommitFest managers, let alone ten or\n>> twenty.\n> \n> Because we are looking for one person to do a job for 10.\n> \n\nYes. 
It's probably easier to find more CF managers doing much less work.\n\n>> So I think the right interpretation of his comment is that\n>> managing the CommitFest has become about an order of magnitude more\n>> difficult than what it needs to be for the task to be done well.\n> \n> Let’s scale the process. Reduce responsibility area of a CFM, define it clearer.\n> And maybe even explicitly ask CFM to summarize patch status of each entry at least once a CF.\n> \n\nShould it even be up to the CFM to write the summary, or should he/she\nbe able to request an update from the patch author? Of at least have the\nchoice to do so.\n\nI think we'll always struggle with the massive threads, because it's\nreally difficult to find the right balance between brevity and including\nall the relevant details. Or rather impossible. I did try writing such\nsummaries for a couple of my long-running patches, and while it might\nhave helped, the challenge was to also explain why stuff *not* done in\nsome alternative way, which is one of the things usually discussed. But\nthe summary gets very long, because there are many alternatives.\n\n> \n> Can I do a small poll among those who is on this thread? Would you\nvolunteer to summarize a status of 20 patches in July’s CF? 5 each week\nor so. One per day.\n> \n\nNot sure. For many patches it'll be trivial. And for a bunch it'll be\nvery very time-consuming.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 May 2024 15:11:42 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On 17 May 2024, at 15:05, Robert Haas <[email protected]> wrote:\n> \n> On Fri, May 17, 2024 at 9:02 AM Peter Eisentraut <[email protected]> wrote:\n>> We already have the annotations feature, which is kind of this.\n> \n> I didn't realize we had that feature. When was it added, and how is it\n> supposed to be used?\n\nA while back.\n\ncommit 27cba025a501c9dbcfb08da0c4db95dc6111d647\nAuthor: Magnus Hagander <[email protected]>\nDate: Sat Feb 14 13:07:48 2015 +0100\n\n Implement simple message annotations\n\n This feature makes it possible to \"pull in\" a message in a thread and highlight\n it with an annotation (free text format). This will list the message in a table\n along with the annotation and who made it.\n\n Annotations have to be attached to a specific message - for a \"generic\" one it\n makes sense to attach it to the latest message available, as that will put it\n at the correct place in time.\n\nMagnus' commitmessage explains it well. The way I've used it (albeit\ninfrequently) is to point to a specific mail in the thread where a significant\nchange was proposed, like the patch changhing direction or something similar.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 17 May 2024 15:12:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/17/24 09:08, Peter Eisentraut wrote:\n> On 17.05.24 14:42, Joe Conway wrote:\n>>> Namely, the week before commitfest I don't actually know if I will \n>>> have the time during that month, but I will make sure my patch is in \n>>> the commitfest just in case I get a few clear days to work on it. 
\n>>> Because if it isn't there, I can't take advantage of those \"found\" hours.\n>> \n>> A solution to both of these issues (yours and mine) would be to allow \n>> things to be added *during* the CF month. What is the point of having a \n>> \"freeze\" before every CF anyway? Especially if they start out clean. If \n>> something is ready for review on day 8 of the CF, why not let it be \n>> added for review?\n> \n> Maybe this all indicates that the idea of bimonthly commitfests has run\n> its course. The original idea might have been, if we have like 50\n> patches, we can process all of them within a month. We're clearly not\n> doing that anymore. How would the development process look like if we\n> just had one commitfest per dev cycle that runs from July 1st to March 31st?\n\nWhat's old is new again? ;-)\n\nI agree with you. Before commitfests were a thing, we had no structure \nat all as I recall. The dates for the dev cycle were more fluid as I \nrecall, and we had no CF app to track things. Maybe retaining the \nstructure but going back to the continuous development would be just the \nthing, with the CF app tracking just the currently (as of this \nweek/month/???) active stuff.\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 17 May 2024 09:54:08 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 9:54 AM Joe Conway <[email protected]> wrote:\n> I agree with you. Before commitfests were a thing, we had no structure\n> at all as I recall. The dates for the dev cycle were more fluid as I\n> recall, and we had no CF app to track things. Maybe retaining the\n> structure but going back to the continuous development would be just the\n> thing, with the CF app tracking just the currently (as of this\n> week/month/???) active stuff.\n\nThe main thing that we'd gain from that is to avoid all the work of\npushing lots of things forward to the next CommitFest at the end of\nevery CommitFest. While I agree that we need to find a better way to\nhandle that, I don't think it's really the biggest problem. The core\nproblems here are (1) keeping stuff out of CommitFests that don't\nbelong there and (2) labelling stuff that does belong in the\nCommitFest in useful ways. We should shape the solution around those\nproblems. Maybe that will solve this problem along the way, but if it\ndoesn't, that's easy enough to fix afterward.\n\nLike, we could also just have a button that says \"move everything\nthat's left to the next CommitFest\". That, too, would avoid the manual\nwork that this would avoid. 
But it wouldn't solve any other problems,\nso it's not really worth much consideration.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 10:10:05 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 9:09 AM Peter Eisentraut <[email protected]> wrote:\n> How would the development process look like if we\n> just had one commitfest per dev cycle that runs from July 1st to March 31st?\n\nExactly the same?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 May 2024 10:15:31 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, May 17, 2024 at 9:54 AM Joe Conway <[email protected]> wrote:\n>> I agree with you. Before commitfests were a thing, we had no structure\n>> at all as I recall. The dates for the dev cycle were more fluid as I\n>> recall, and we had no CF app to track things. Maybe retaining the\n>> structure but going back to the continuous development would be just the\n>> thing, with the CF app tracking just the currently (as of this\n>> week/month/???) active stuff.\n\n> The main thing that we'd gain from that is to avoid all the work of\n> pushing lots of things forward to the next CommitFest at the end of\n> every CommitFest.\n\nTo my mind, the point of the time-boxed commitfests is to provide\na structure wherein people will (hopefully) pay some actual attention\nto other peoples' patches. Conversely, the fact that we don't have\none running all the time gives committers some defined intervals\nwhere they can work on their own stuff without feeling guilty that\nthey're not handling other people's patches.\n\nIf we go back to the old its-development-mode-all-the-time approach,\nwhat is likely to happen is that the commit rate for not-your-own-\npatches goes to zero, because it's always possible to rationalize\nyour own stuff as being more important.\n\n> Like, we could also just have a button that says \"move everything\n> that's left to the next CommitFest\".\n\nClearly, CFMs would appreciate some more tooling to make that sort\nof thing easier. Perhaps we omitted it in the original CF app\ncoding because we expected the end-of-CF backlog to be minimal,\nbut it's now obvious that it generally isn't.\n\nBTW, I was reminded while trawling old email yesterday that\nwe used to freeze the content of a CF at its start and then\nhold the CF open until the backlog actually went to zero,\nwhich resulted in multi-month death-march CFs and no clarity\nat all as to release timing. Let's not go back to that.\nBut the CF app was probably built with that model in mind.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 10:31:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 10:31 AM Tom Lane <[email protected]> wrote:\n> To my mind, the point of the time-boxed commitfests is to provide\n> a structure wherein people will (hopefully) pay some actual attention\n> to other peoples' patches. 
Conversely, the fact that we don't have\n> one running all the time gives committers some defined intervals\n> where they can work on their own stuff without feeling guilty that\n> they're not handling other people's patches.\n>\n> If we go back to the old its-development-mode-all-the-time approach,\n> what is likely to happen is that the commit rate for not-your-own-\n> patches goes to zero, because it's always possible to rationalize\n> your own stuff as being more important.\n\nWe already have gone back to that model. We just haven't admitted it\nyet. And we're never going to get out of it until we find a way to get\nthe contents of the CommitFest application down to a more reasonable\nsize and level of complexity. There's just no way everyone's up for\nthat level of pain. I'm not sure not up for that level of pain.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 10:40:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, May 17, 2024 at 10:31 AM Tom Lane <[email protected]> wrote:\n>> If we go back to the old its-development-mode-all-the-time approach,\n>> what is likely to happen is that the commit rate for not-your-own-\n>> patches goes to zero, because it's always possible to rationalize\n>> your own stuff as being more important.\n\n> We already have gone back to that model. We just haven't admitted it\n> yet. And we're never going to get out of it until we find a way to get\n> the contents of the CommitFest application down to a more reasonable\n> size and level of complexity. There's just no way everyone's up for\n> that level of pain. I'm not sure not up for that level of pain.\n\nYeah, we clearly need to get the patch list to a point of\nmanageability, but I don't agree that abandoning time-boxed CFs\nwill improve anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 11:05:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 7:11 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 5/16/24 23:43, Peter Eisentraut wrote:\n> > On 16.05.24 23:06, Joe Conway wrote:\n> >> On 5/16/24 16:57, Jacob Champion wrote:\n> >>> On Thu, May 16, 2024 at 1:31 PM Joe Conway <[email protected]> wrote:\n> >>>> Maybe we should just make it a policy that *nothing* gets moved forward\n> >>>> from commitfest-to-commitfest and therefore the author needs to care\n> >>>> enough to register for the next one?\n> >>>\n> >>> I think that's going to severely disadvantage anyone who doesn't do\n> >>> this as their day job. Maybe I'm bristling a bit too much at the\n> >>> wording, but not having time to shepherd a patch is not the same as\n> >>> not caring.\n> >>\n> >> Maybe the word \"care\" was a poor choice, but forcing authors to think\n> >> about and decide if they have the \"time to shepherd a patch\" for the\n> >> *next CF* is exactly the point. If they don't, why clutter the CF with\n> >> it.\n> >\n> > Objectively, I think this could be quite effective. You need to prove\n> > your continued interest in your project by pressing a button every two\n> > months.\n> >\n> > But there is a high risk that this will double the annoyance for\n> > contributors whose patches aren't getting reviews. 
Now, not only are\n> > you being ignored, but you need to prove that you're still there every\n> > two months.\n> >\n>\n> Yeah, I 100% agree with this. If a patch bitrots and no one cares enough\n> to rebase it once in a while, then sure - it's probably fine to mark it\n> RwF. But forcing all contributors to do a dance every 2 months just to\n> have a chance someone might take a look, seems ... not great.\n>\n> I try to see this from the contributors' PoV, and with this process I'm\n> sure I'd start questioning if I even want to submit patches.\n\nAgreed.\n\n> That is not to say we don't have a problem with patches that just move\n> to the next CF, and that we don't need to do something about that ...\n>\n> Incidentally, I've been preparing some git+CF stats because of a talk\n> I'm expected to do, and it's very obvious the number of committed (and\n> rejected) CF entries is more or very stable over time, while the number\n> of patches that move to the next CF just snowballs.\n>\n> My impression is a lot of these contributions/patches just never get the\n> review & attention that would allow them to move forward. Sure, some do\n> bitrot and/or get abandoned, and let's RwF those. But forcing everyone\n> to re-register the patches over and over seems like \"reject by default\".\n> I'd expect a lot of people to stop bothering and give up, and in a way\n> that would \"solve\" the bottleneck. But I'm not sure it's the solution\n> we'd like ...\n\nI don't think we should reject by default. It is discouraging and it\nis already hard enough as it is to contribute.\n\n> It does seem to me a part of the solution needs to be helping to get\n> those patches reviewed. I don't know how to do that, but perhaps there's\n> a way to encourage people to review more stuff, or review stuff from a\n> wider range of contributors. Say by treating reviews more like proper\n> contributions.\n\nOne reason I support the parking lot idea is for patches like the one\nin [1]. EXPLAIN for parallel bitmap heap scan is just plain broken.\nThe patch in this commitfest entry is functionally 80% of the way\nthere. It just needs someone to do the rest of the work to make it\ncommittable. I actually think it is unreasonable of us to expect the\noriginal author to do this. I have had it on my list for weeks to get\nback around to helping with this patch. However, I spent the better\npart of my coding time in the last two weeks trying to reproduce and\nfix a bug on stable branches that causes vacuum processes to\ninfinitely loop. Arguably that is a bigger problem. Because I knew\nthis EXPLAIN patch was slipping down my TODO list, I changed the patch\nto \"waiting on author\", but I honestly don't think the original author\nshould have to do the rest of the work.\n\nShould I spend more time on this patch reviewing it and moving it\nforward? Yes. Maybe I'm just too slow at writing postgres code or I\nhave bad time management or I should spend less time doing things\nlike figuring out how many lavalier mics we need in each room for\nPGConf.dev. I don't know. But it is hard for me to figure out how to\ndo more review work and guarantee that this kind of thing won't\nhappen.\n\nSo, anyway, I'd argue that we need a parking lot for patches which we\nall agree are important and have a path forward but need someone to do\nthe last 20-80% of the work. To avoid this being a dumping ground,\npatches should _only_ be allowed in the parking lot if they have a\nclear path forward. Patches which haven't gotten any interest don't go\nthere. 
Patches in which the author has clearly not addressed feedback\nthat is reasonable for them to address don't go there. These are\neffectively community TODOs which we agree need to be done -- if only\nsomeone had the time.\n\n- Melanie\n\n[1] https://commitfest.postgresql.org/48/4248/\n\n\n", "msg_date": "Fri, 17 May 2024 11:44:40 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 7:40 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, May 17, 2024 at 10:31 AM Tom Lane <[email protected]> wrote:\n> > To my mind, the point of the time-boxed commitfests is to provide\n> > a structure wherein people will (hopefully) pay some actual attention\n> > to other peoples' patches. Conversely, the fact that we don't have\n> > one running all the time gives committers some defined intervals\n> > where they can work on their own stuff without feeling guilty that\n> > they're not handling other people's patches.\n> >\n> > If we go back to the old its-development-mode-all-the-time approach,\n> > what is likely to happen is that the commit rate for not-your-own-\n> > patches goes to zero, because it's always possible to rationalize\n> > your own stuff as being more important.\n>\n> We already have gone back to that model. We just haven't admitted it\n> yet.\n\nI've worked on teams that used the short-timebox CF calendar to\norganize community work, like Tom describes. That was a really\npositive thing for us.\n\nMaybe it feels different from the committer point of view, but I don't\nthink all of the community is operating on the long-timebox model, and\nI really wouldn't want to see us lengthen the cycles to try to get\naround the lack of review/organization that's being complained about.\n(But maybe you're not arguing for that in the first place.)\n\n--Jacob\n\n\n", "msg_date": "Fri, 17 May 2024 08:56:10 -0700", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Melanie Plageman <[email protected]> writes:\n> So, anyway, I'd argue that we need a parking lot for patches which we\n> all agree are important and have a path forward but need someone to do\n> the last 20-80% of the work. To avoid this being a dumping ground,\n> patches should _only_ be allowed in the parking lot if they have a\n> clear path forward. Patches which haven't gotten any interest don't go\n> there. Patches in which the author has clearly not addressed feedback\n> that is reasonable for them to address don't go there. These are\n> effectively community TODOs which we agree need to be done -- if only\n> someone had the time.\n\nHmm. I was envisioning \"parking lot\" as meaning \"this is on my\npersonal TODO list, and I'd like CI support for it, but I'm not\nexpecting anyone else to pay attention to it yet\". I think what\nyou are describing is valuable but different. Maybe call it\n\"pending\" or such? Or invent a different name for the other thing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 11:57:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 11:05 AM Tom Lane <[email protected]> wrote:\n> > We already have gone back to that model. We just haven't admitted it\n> > yet. 
And we're never going to get out of it until we find a way to get\n> > the contents of the CommitFest application down to a more reasonable\n> > size and level of complexity. There's just no way everyone's up for\n> > that level of pain. I'm not sure not up for that level of pain.\n>\n> Yeah, we clearly need to get the patch list to a point of\n> manageability, but I don't agree that abandoning time-boxed CFs\n> will improve anything.\n\nI'm not sure. Suppose we plotted commits generally, or commits of\nnon-committer patches, or reviews on-list, vs. time. Would we see any\nuptick in activity during CommitFests? Would it vary by committer? I\ndon't know. I bet the difference wouldn't be as much as Tom Lane would\nlike to see. Realistically, we can't manage how contributors spend\ntheir time all that much, and trying to do so is largely tilting at\nwindmills.\n\nTo me, the value of time-based CommitFests is as a vehicle to ensure\nfreshness, or cleanup, or whatever word you want to do. If you just\nmake a list of things that need attention and keep incrementally\nupdating it, eventually for various reasons that list no longer\nreflects your current list of priorities. At some point, you have to\nthrow it out and make a new list, or at least that's what always\nhappens to me. We've fallen into a system where the default treatment\nof a patch is to be carried over to the next CommitFest and CfMs are\nexpected to exert effort to keep patches from getting that default\ntreatment when they shouldn't. But that does not scale. We need a\nsystem where things drop off the list unless somebody makes an effort\nto keep them on the list, and the easiest way to do that is to\nperiodically make a *fresh* list that *doesn't* just clone some\nprevious list.\n\nI realize that many people here are (rightly!) concerned with\nburdening patch authors with more steps that they have to follow. But\nthe current system is serving new patch authors very poorly. If they\nget attention, it's much more likely to be because somebody saw their\nemail and wrote back than it is to be because somebody went through\nthe CommitFest and found their entry and was like \"oh, I should review\nthis\". Honestly, if we get to a situation where a patch author is sad\nbecause they have to click a link every 2 months to say \"yeah, I'm\nstill here, please review my patch,\" we've already lost the game. That\nperson isn't sad because we asked them to click a link. They're sad\nit's already been N * 2 months and nothing has happened.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 11:58:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 11:57 AM Tom Lane <[email protected]> wrote:\n> Melanie Plageman <[email protected]> writes:\n> > So, anyway, I'd argue that we need a parking lot for patches which we\n> > all agree are important and have a path forward but need someone to do\n> > the last 20-80% of the work. To avoid this being a dumping ground,\n> > patches should _only_ be allowed in the parking lot if they have a\n> > clear path forward. Patches which haven't gotten any interest don't go\n> > there. Patches in which the author has clearly not addressed feedback\n> > that is reasonable for them to address don't go there. These are\n> > effectively community TODOs which we agree need to be done -- if only\n> > someone had the time.\n>\n> Hmm. 
I was envisioning \"parking lot\" as meaning \"this is on my\n> personal TODO list, and I'd like CI support for it, but I'm not\n> expecting anyone else to pay attention to it yet\".\n\n+1.\n\n> I think what\n> you are describing is valuable but different. Maybe call it\n> \"pending\" or such? Or invent a different name for the other thing.\n\nYeah, there should be someplace that we keep a list of things that are\nthought to be important but we haven't gotten around to doing anything\nabout yet, but I think that's different from the parking lot CF.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 12:00:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/17/24 11:58, Robert Haas wrote:\n> I realize that many people here are (rightly!) concerned with\n> burdening patch authors with more steps that they have to follow. But\n> the current system is serving new patch authors very poorly. If they\n> get attention, it's much more likely to be because somebody saw their\n> email and wrote back than it is to be because somebody went through\n> the CommitFest and found their entry and was like \"oh, I should review\n> this\". Honestly, if we get to a situation where a patch author is sad\n> because they have to click a link every 2 months to say \"yeah, I'm\n> still here, please review my patch,\" we've already lost the game. That\n> person isn't sad because we asked them to click a link. They're sad\n> it's already been N * 2 months and nothing has happened.\n\n+many\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 17 May 2024 12:10:20 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 17 May 2024 at 17:59, Robert Haas <[email protected]> wrote:\n> If they\n> get attention, it's much more likely to be because somebody saw their\n> email and wrote back than it is to be because somebody went through\n> the CommitFest and found their entry and was like \"oh, I should review\n> this\".\n\nI think this is an important insight. I used the commitfest app to\nfind patches to review when I was just starting out in postgres\ndevelopment, since I didn't subscribe to all af pgsql-hackers yet. I\ndo subscribe nowadays, but now I rarely look at the commitfest app,\ninstead I skim the titles of emails that go into my \"Postgres\" folder\nin my mailbox. Going from such an email to a commitfest entry is near\nimpossible (at least I don't know how to do this efficiently). I guess\nI'm not the only one doing this.\n\n> Honestly, if we get to a situation where a patch author is sad\n> because they have to click a link every 2 months to say \"yeah, I'm\n> still here, please review my patch,\" we've already lost the game. That\n> person isn't sad because we asked them to click a link. They're sad\n> it's already been N * 2 months and nothing has happened.\n\nMaybe it wouldn't be so bad for an author to click the 2 months\nbutton, if it would actually give their patch some higher chance of\nbeing reviewed by doing that. 
And given the previous insight, that\npeople don't look at the commitfest app that often, it might be good\nif pressing this button would also bump the item in people's\nmailboxes.\n\n\n", "msg_date": "Fri, 17 May 2024 18:10:37 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Friday, May 17, 2024, Joe Conway <[email protected]> wrote:\n>> A solution to both of these issues (yours and mine) would be to allow\n>> things to be added *during* the CF month. What is the point of having a\n>> \"freeze\" before every CF anyway? Especially if they start out clean. If\n>> something is ready for review on day 8 of the CF, why not let it be added\n>> for review?\n\n> In conjunction with WIP removing this limitation on the bimonthlies makes\n> sense to me.\n\nI think that the original motivation for this was two-fold:\n\n1. A notion of fairness, that you shouldn't get your patch reviewed\nahead of others that have been waiting (much?) longer. I'm not sure\nhow much this is really worth. In particular, even if we do delay\nan incoming patch by one CF, it's still going to compete with the\nolder stuff in future CFs. There's already a sort feature in the CF\ndashboard whereby patches that have lingered for more CFs appear ahead\nof patches that are newer, so maybe just ensuring that late-arriving\npatches sort as \"been around for 0 CFs\" is sufficient.\n\n2. As I mentioned a bit ago, the original idea was that we didn't exit\na CF until it was empty of un-handled patches, so obviously allowing\nnew patches to come in would mean we'd never get to empty. That idea\ndidn't work and we don't think that way anymore.\n\nSo yeah, I'm okay with abandoning the must-submit-to-a-future-CF\nrestriction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 12:29:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 7:11 AM Tomas Vondra <[email protected]>\nwrote:\n\n> It does seem to me a part of the solution needs to be helping to get\n> those patches reviewed. I don't know how to do that, but perhaps there's\n> a way to encourage people to review more stuff, or review stuff from a\n> wider range of contributors. Say by treating reviews more like proper\n> contributions.\n\n\nThis is a huge problem. I've been in the situation before where I had some\ncycles to do a review, but actually finding one to review is\nsuper-difficult. You simply cannot tell without clicking on the link and\nwading through the email thread. Granted, it's easy as an\noccasional reviewer to simply disregard potential patches if the email\nthread is over a certain size, but it's still a lot of work. Having some\nsort of summary/status field would be great, even if not everything was\nlabelled. It would also be nice if simpler patches were NOT picked up by\nexperienced hackers, as we want to encourage new/inexperienced people, and\nhaving some \"easy to review\" patches available will help people gain\nconfidence and grow.\n\n\n> Long time ago there was a \"rule\" that people submitting patches are\n> expected to do reviews. Perhaps we should be more strict this.\n>\n\nBig -1. How would we even be more strict about this? 
Public shaming?\nWithholding a commit?\n\nCheers,\nGreg\n", "msg_date": "Fri, 17 May 2024 13:12:59 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 2024-05-17 at 13:12 -0400, Greg Sabino Mullane wrote:\n> > Long time ago there was a \"rule\" that people submitting patches are expected\n> > to do reviews. Perhaps we should be more strict this.\n> \n> Big -1. How would we even be more strict about this? Public shaming? Withholding a commit?\n\nI think it is a good rule. I don't think that it shouldn't lead to putting\npeople on the pillory or kicking their patches, but I imagine that a committer\nlooking for somebody else's patch to work on could prefer patches by people\nwho are doing their share of reviews.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 17 May 2024 21:51:49 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 3:51 PM Laurenz Albe <[email protected]> wrote:\n> On Fri, 2024-05-17 at 13:12 -0400, Greg Sabino Mullane wrote:\n> > > Long time ago there was a \"rule\" that people submitting patches are expected\n> > > to do reviews. Perhaps we should be more strict this.\n> >\n> > Big -1. How would we even be more strict about this? Public shaming? Withholding a commit?\n>\n> I think it is a good rule. I don't think that it shouldn't lead to putting\n> people on the pillory or kicking their patches, but I imagine that a committer\n> looking for somebody else's patch to work on could prefer patches by people\n> who are doing their share of reviews.\n\nIf you give me an automated way to find that out, I'll consider paying\nsome attention to it. 
However, in order to sort the list of patches\nneeding review by the amount of review done by the patch author, we'd\nfirst need to have a list of patches needing review.\n\nAnd right now we don't, or at least not in any usable way.\ncommitfest.postgresql.org is supposed to give us that, but it doesn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 15:59:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\nOn 5/17/24 21:59, Robert Haas wrote:\n> On Fri, May 17, 2024 at 3:51 PM Laurenz Albe <[email protected]> wrote:\n>> On Fri, 2024-05-17 at 13:12 -0400, Greg Sabino Mullane wrote:\n>>>> Long time ago there was a \"rule\" that people submitting patches are expected\n>>>> to do reviews. Perhaps we should be more strict this.\n>>>\n>>> Big -1. How would we even be more strict about this? Public shaming? Withholding a commit?\n>>\n>> I think it is a good rule. I don't think that it shouldn't lead to putting\n>> people on the pillory or kicking their patches, but I imagine that a committer\n>> looking for somebody else's patch to work on could prefer patches by people\n>> who are doing their share of reviews.\n> \n\nYeah, I don't have any particular idea how should the rule be \"enforced\"\nand I certainly did not imagine public shaming or anything like that. My\nthoughts were more about reminding people the reviews are part of the\ndeal, that's it ... maybe \"more strict\" was not quite what I meant.\n\n> If you give me an automated way to find that out, I'll consider paying\n> some attention to it. However, in order to sort the list of patches\n> needing review by the amount of review done by the patch author, we'd\n> first need to have a list of patches needing review.\n> \n> And right now we don't, or at least not in any usable way.\n> commitfest.postgresql.org is supposed to give us that, but it doesn't.\n> \n\nIt'd certainly help to know which patches to consider for review, but I\nguess I'd still look at patches from people doing more reviews first,\neven if I had to find out in what shape the patch is.\n\nI'm far more skeptical about \"automated way\" to track this, though. I'm\nnot sure it's quite possible - reviews can have a lot of very different\nforms, and deciding what is or is not a review is pretty subjective. So\nit's not clear how would we quantify that. Not to mention I'm sure we'd\npromptly find ways to game that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 May 2024 22:27:47 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 9:29 AM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Friday, May 17, 2024, Joe Conway <[email protected]> wrote:\n> >> A solution to both of these issues (yours and mine) would be to allow\n> >> things to be added *during* the CF month. What is the point of having a\n> >> \"freeze\" before every CF anyway? Especially if they start out clean. If\n> >> something is ready for review on day 8 of the CF, why not let it be\n> added\n> >> for review?\n>\n> > In conjunction with WIP removing this limitation on the bimonthlies makes\n> > sense to me.\n>\n> 2. 
As I mentioned a bit ago, the original idea was that we didn't exit\n> a CF until it was empty of un-handled patches, so obviously allowing\n> new patches to come in would mean we'd never get to empty. That idea\n> didn't work and we don't think that way anymore.\n>\n> So yeah, I'm okay with abandoning the must-submit-to-a-future-CF\n> restriction.\n>\n>\nConcretely I'm thinking of modifying our patch flow state diagram to this:\n\nstateDiagram-v2\nstate \"Work In Process (Not Timeboxed)\" as WIP {\n [*] --> CollaboratorsNeeded : Functional but\\nFeedback Needed\n [*] --> NeedsReview : Simple Enough for\\nSign-Off and Send\n [*] --> ReworkInProgress : via Returned With Feedback\n CollaboratorsNeeded --> NeedsReview : Collaboration Done\\nReady for\nSign-Off\n CollaboratorsNeeded --> WaitingOnAuthor : Feedback Given\\nBack with\nAuthors\n ReworkInProgress --> ReworkCompleted : Rework Ready\\nfor Inspection\n ReworkCompleted --> ReworkInProgress : More Changes Needed\n ReworkCompleted --> ReadyForCommitter : Requested Rework Confirmed\\nTry\nAgain to Commit\n NeedsReview --> ReadyForCommitter : Reviewer and Author\\nDeem\nSubmission Worthy\n NeedsReview --> WaitingOnAuthor : Changes Needed\n WaitingOnAuthor --> NeedsReview : Changes Made\n WaitingOnAuthor --> CollaboratorsNeeded : Need Help Making Changes\n WaitingOnAuthor --> Withdrawn : Not Going to Make Changes\n Withdrawn --> [*]\n}\n\nstate \"Bi-Monthly Timeboxing\" as BIM {\n [*] --> CommitPending : Simple Committer Patch\n CommitPending --> Committed : Done!\n CommitPending --> ChangesNeeded : Minor Feedback Given\n CommitPending --> ReturnedWithFeedback : Really should have been WIP\nfirst\n ReadyForCommitter --> ChangesNeeded : Able to be fixed\\nwithin the\ncurrent cycle\n ReadyForCommitter --> Committed : Done!\n ReadyForCommitter --> ReturnedWithFeedback : Not doable in the current\ncycle\\nSuitable for rejections as well\n ChangesNeeded --> ReadyForCommitter\n Committed --> [*]\n ReturnedWithFeedback --> [*]\n}\n\nThis allows for usage of WIP as a collaboration area with the side benefit\nof CI.\n\nPatches that have gotten commit cycle feedback don't get lumped back into\nNeeds Review\n\nThere is a short-term parking spot for committer-reviewed patches that just\ncannot be pushed at the moment. That should be low volume enough to cover\nboth quick-fixes and freeze-driven waits.\n\nCollaboration Needed should include a description of what kind of feedback\nor help is sought. Even if that is just \"first timer seeking guidance\".\n\nThe above details 5 new categories:\n\nCollaborators Needed - Specialized Needs Review for truly WIP work\n\nRework Completed - Specialized Needs Review to ensure that patches that got\ninto the bi-monthly once get priority for getting committed\n\nCommit Pending - Specialized Needs Review for easily fast-tracked patches;\nand a parking lot for frozen out patches\n\nRework in Progress - Parking lot for patches already through the bi-monthly\nand currently being reworked most likely for the next bi-monthly\n\nChanges Needed - Specialized Waiting on Author but the expected time period\nor effort to perform the changes is low; or the patch is just a high\npriority one that deserves to remain in the bi-monthly in order to keep\nattention on it. When the author is done the committer is waiting for the\nrevisions and so it goes right back into ReadyForCommitter.\n\nI removed Rejected here but it could be kept. 
It seems reasonably rolled\ninto Returned with Feedback and we shouldn't be rejecting without feedback\nanyway. Not enough rejection volume to warrant its own category.\n\nDavid J.\n\nhttps://mermaid.live/edit#pako:eNp9VU1z2jAQ_Stbn9pOufToQ2cS0qQcSJiQGQ51D7K12Cr2iuoDwmTy37uyYkzADicj73t-u3pPekkKLTFJE-uEwxslSiOaye57Ru0CZMlKmw3MCBZGF2gtfL7XDp5Ug7l-RvklS0BYWM0W8JIR8O_31z8wmfyAqa5rkWsjnDb2HlGihBRuPRVOaRI15N5lGd3ym1wUG4gl7znCmn3EncI9Y5eq2dYIP0n7soK1NgxfqpImD-s1CJKwRDojeMQ9y58Riy9NUJ_CTgledt4QC1opV0EnIUKHdF9q6au4GbjRhCzmEYU8BGHQyfqYciWUU1Q-0JV3FaPSoxS4UzskprwOf_ZBZayxkfGir77ZqQ5Tcu204wq0upgsKJuR3WK7BadMPWhkanNtEKaVoBLtu60axvP3brXh1UY5h6aV8s-jDRVvoqaa1so0zJPRkznAVSkUgdMQUZH9dOjjzOE1PwYLxCkx5Q1iA0ufN8rasEVsY1cdhlkv92Go0_OqAVO8oeZC4jhmOBjhAX5hvWX0hiEd2ThP8K40Yk8BzZm80wHGA2QCPIMfSwOQw5HRa0Z9xq_VZK7JVfWhSzZTxWRfz-aXyQ7DX3DYwgePuez3ZCFcUXXWP63t0dGfITefRgtjB8cJzRVx52cJGcF2AQ-NHxHBKqLmHm2lfS2hEjuEHJHa44vNaF3n6XOXDem5yrlpHneOjH1ufRyCGkxcIRTeGCQHxaGo8UPa4XkMF4_0FbZfahEEDX6ez0mvXPs-nAEG_8YjwIYd3mNdv83xXYeDeTsdeBf31k9R84C6U8cl35IGTSOU5BundVWWsNoGsyTlRynMJku4kuuEd3p5oCJJnfH4LfFb2V9QSboWteVVlIpTNI9XWHuTvf4HuH5UYg\n\n[image: image.png]\n\n[image: image.png]", "msg_date": "Fri, 17 May 2024 13:28:48 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, May 17, 2024 at 3:51 PM Laurenz Albe <[email protected]> wrote:\n>> I think it is a good rule. I don't think that it shouldn't lead to putting\n>> people on the pillory or kicking their patches, but I imagine that a committer\n>> looking for somebody else's patch to work on could prefer patches by people\n>> who are doing their share of reviews.\n\n> If you give me an automated way to find that out, I'll consider paying\n> some attention to it.\n\nYeah, I can't imagine that any committer (or reviewer, really) is\ndoing any such thing, because it would take far too much effort to\nfigure out how much work anyone else is doing. I see CFMs reminding\neverybody that this rule exists, but I don't think they ever try to\ncheck it either. It's pretty much the honor system, and I'm sure\nsome folk ignore it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 17:05:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 11:44:40AM -0400, Melanie Plageman wrote:\n> So, anyway, I'd argue that we need a parking lot for patches which we\n> all agree are important and have a path forward but need someone to do\n> the last 20-80% of the work. To avoid this being a dumping ground,\n> patches should _only_ be allowed in the parking lot if they have a\n> clear path forward. Patches which haven't gotten any interest don't go\n> there. Patches in which the author has clearly not addressed feedback\n> that is reasonable for them to address don't go there. These are\n> effectively community TODOs which we agree need to be done -- if only\n> someone had the time.\n\nWhen I am looking to commit something, I have to consider:\n\n* do we want the change\n* are there concerns\n* are the docs good\n* does it need tests\n* is it only a proof-of-concept\n\nWhen people review commit fest entries, they have to figure out what is\nholding the patch back from being complete, so they have to read the\nthread from the beginning. 
Should there be a clearer way in the commit\nfest app to specify what is missing?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 18 May 2024 18:49:31 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On Thu, May 16, 2024 at 02:30:03PM -0400, Robert Haas wrote:\n>\n> I wonder what ideas people have for improving this situation. I doubt\n> that there's any easy answer that just makes the problem go away --\n> keeping large groups of people organized is a tremendously difficult\n> task under pretty much all circumstances, and the fact that, in this\n> context, nobody's really the boss, makes it a whole lot harder. But I\n> also feel like what we're doing right now can't possibly be the best\n> that we can do.\n\nThere are lots of good takes on this in the thread. It also makes clear what's\nat stake -- as Melanie pointed out with the patch about EXPLAIN for parallel\nbitmap heap scan, we're loosing potential contributors for no reasons. But I'm\na bit concerned about what are the next steps: if memory serves, every couple\nof years there is a discussion about everything what goes wrong with the review\nprocess, commitfests, etc. Yet to my (admittedly limited) insight into the\ncommunity, not many things have changed due to those discussions. How do we\nmake sure this time it will be different?\n\nIt is indeed tremendously difficult to self organize, so maybe it's worth to\nvolunteer a group of people to work out details of one or two proposals,\nanswering the question \"how to make it better?\". As far as I understand, the\ncommunity already has a similar experience. Summarizing this thread,\nthere seems to be following dimensions to look at:\n\n* What is the purpose of CF and how to align it better with the community\n goals.\n\n \"CommitFest\" here means both the CF tool and the process behind it. So far\n the discussion was evolving around the state machine for each individual CF\n item as well as the whole CF cycle. At the end of the day perhaps a list of\n pairs (item, status) is not the best representation, probably more filters\n have to be considered (e.g. implementing a workflow \"give me all the items,\n updated in the last month with the last reply being from the patch author\").\n\n* How to synchronize the mailing list with CF content.\n\n The entropy of CF content grows over time, making it less efficient. For\n especially old threads it's even more visible. How to reduce the entropy\n without scaring new contributors away?\n\n* How to deal with review scalability bottleneck.\n\n An evergreen question. PostgreSQL is getting more popular and, as stated in\n diverse research whitepapers, the amount of contribution grows as a power\n law, where the number of reviewers grows at best sub-linearly (limited by the\n velocity of knowledge sharing). I agree with Andrey on this, the only\n way I see to handle this is to scale CF management efforts.\n\n* What are the UX gaps of CF tool?\n\n There seems to be some number of improvements that could make work with CF\n tool more frictionless.\n\nWhat I think wasn't discussed yet in details is the question of motivation.\nSurely, it would be great to have a process that will introduce as less burden\nas possible. But giving more motivation to follow the process / use the tool is\nas important. 
What motivates folks to review patches, figure out status of a\ncomplicated patch thread, maintain a list of open items, etc?\n\n\n", "msg_date": "Sun, 19 May 2024 11:37:19 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/19/24 05:37, Dmitry Dolgov wrote:\n> * How to deal with review scalability bottleneck.\n> \n> An evergreen question. PostgreSQL is getting more popular and, as stated in\n> diverse research whitepapers, the amount of contribution grows as a power\n> law, where the number of reviewers grows at best sub-linearly (limited by the\n> velocity of knowledge sharing). I agree with Andrey on this, the only\n> way I see to handle this is to scale CF management efforts.\n\n\nThe number of items tracked are surely growing, but I am not sure I \nwould call it exponential -- see attached\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 19 May 2024 10:42:01 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Dmitry Dolgov <[email protected]> writes:\n> There are lots of good takes on this in the thread. It also makes clear what's\n> at stake -- as Melanie pointed out with the patch about EXPLAIN for parallel\n> bitmap heap scan, we're loosing potential contributors for no reasons. But I'm\n> a bit concerned about what are the next steps: if memory serves, every couple\n> of years there is a discussion about everything what goes wrong with the review\n> process, commitfests, etc. Yet to my (admittedly limited) insight into the\n> community, not many things have changed due to those discussions. How do we\n> make sure this time it will be different?\n\nThings *have* changed, if you take the long view. We didn't have\ncommitfests at all until around 2007, and we've changed the ground\nrules for them a couple of times since then. We didn't have the CF\napp at all until, well, I don't recall when, but the first few CFs\nwere managed by keeping patch lists on a wiki page. It's not that\npeople are unwilling to change this stuff, but that it's hard to\nidentify what will make things better.\n\nIMV one really fundamental problem is the same as it's been for a\ncouple of decades: too many patch submissions, too few committers.\nWe can't fix that by just appointing a ton more committers, at least\nnot if we want to keep the project's quality up. We have to grow\nqualified committers. IIRC, one of the main reasons for instituting\nthe commitfests at all was the hope that if we got more people to\nspend time reading the code and reviewing patches, some of them would\nlearn enough to become good committers. I think that's worked, again\ntaking a long view. I just did some quick statistics on the commit\nhistory, and I see that we were hovering at somewhere around ten\nactive committers from 1999 to 2012, but since then it's slowly crept\nup to about two dozen today. (I'm counting \"active\" as \"at least 10\ncommits per year\", which is an arbitrary cutoff --- feel free to slice\nthe data for yourself.) Meanwhile the number of submissions has also\ngrown, so I'm not sure how much better the load ratio is.\n\nMy point here is not that things are great, but just that we are\nindeed improving, and I hope we can continue to. 
Let's not be\ndefeatist about it.\n\nI think this thread has already identified a few things we have\nconsensus to improve in the CF app, and I hope somebody goes off\nand makes those happen (I lack the web skillz to help myself).\nHowever, the app itself is just a tool; what counts more is our\nprocess around it. I have a couple of thoughts about that:\n\n* Patches that sit in the queue a long time tend to be ones that lack\nconsensus, either about the goal or the details of how to achieve it.\nSometimes \"lacks consensus\" really means \"nobody but the author thinks\nthis is a good idea, but we haven't mustered the will to say no\".\nBut I think it's more usually the case that there are plausible\ncompeting opinions about what the patch should do or how it should\ndo it. How can we resolve such differences and get something done?\n\n* Another reason for things sitting a long time is that they're too\nbig to review without an unreasonable amount of effort. We should\nencourage authors to break large patches into smaller stepwise\nrefinements. It may seem like that will result in taking forever\nto reach the end goal, but dropping a huge patchset on the community\nisn't going to give speedy results either.\n\n* Before starting this thread, Robert did a lot of very valuable\nreview of some individual patches. I think what prompted him to\nstart the thread was the realization that he'd only made a small\ndent in the problem. Maybe we could divide and conquer: get a\ndozen-or-so senior contributors to split up the list of pending\npatches and then look at each one with an eye to what needs to\nhappen to move it along (*not* to commit it right away, although\nin some cases maybe that's the thing to do). It'd be great if\nthat could happen just before each commitfest, but that's probably\nnot practical with the current patch volume. What I'm thinking\nfor the moment is to try to make that happen once a year or so.\n\n> What I think wasn't discussed yet in details is the question of motivation.\n> Surely, it would be great to have a process that will introduce as less burden\n> as possible. But giving more motivation to follow the process / use the tool is\n> as important. What motivates folks to review patches, figure out status of a\n> complicated patch thread, maintain a list of open items, etc?\n\nYeah, all this stuff ultimately gets done \"for the good of the\nproject\", which isn't the most reliable motivation perhaps.\nI don't see a better one...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 May 2024 15:18:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Sun, May 19, 2024 at 03:18:11PM -0400, Tom Lane wrote:\n> * Another reason for things sitting a long time is that they're too\n> big to review without an unreasonable amount of effort. We should\n> encourage authors to break large patches into smaller stepwise\n> refinements. It may seem like that will result in taking forever\n> to reach the end goal, but dropping a huge patchset on the community\n> isn't going to give speedy results either.\n\nI think it is sometimes hard to incrementally apply patches if the\nlong-term goal isn't agreed or know to be achievable.\n\n> * Before starting this thread, Robert did a lot of very valuable\n> review of some individual patches. I think what prompted him to\n> start the thread was the realization that he'd only made a small\n> dent in the problem. 
Maybe we could divide and conquer: get a\n> dozen-or-so senior contributors to split up the list of pending\n> patches and then look at each one with an eye to what needs to\n> happen to move it along (*not* to commit it right away, although\n> in some cases maybe that's the thing to do). It'd be great if\n> that could happen just before each commitfest, but that's probably\n> not practical with the current patch volume. What I'm thinking\n> for the moment is to try to make that happen once a year or so.\n\nFor me, if someone already knows what the blocker is, it saves me a lot\nof time if they can state that somewhere.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sun, 19 May 2024 20:20:52 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Sun, May 19, 2024 at 03:18:11PM -0400, Tom Lane wrote:\n>> * Another reason for things sitting a long time is that they're too\n>> big to review without an unreasonable amount of effort. We should\n>> encourage authors to break large patches into smaller stepwise\n>> refinements. It may seem like that will result in taking forever\n>> to reach the end goal, but dropping a huge patchset on the community\n>> isn't going to give speedy results either.\n\n> I think it is sometimes hard to incrementally apply patches if the\n> long-term goal isn't agreed or know to be achievable.\n\nTrue. The value of the earlier patches in the series can be unclear\nif you don't understand what the end goal is. But I think people\ncould post a \"road map\" of how they intend a patch series to go.\n\nAnother way of looking at this is that sometimes people do post large\nchunks of work in long many-patch sets, but we tend to look at the\nwhole series as something to review and commit as one (or I do, at\nleast). We should be more willing to bite off and push the earlier\npatches in such a series even when the later ones aren't entirely\ndone.\n\n(The cfbot tends to discourage this, since as soon as one of the\npatches is committed it no longer knows how to apply the rest.\nCan we improve on that tooling somehow?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 19 May 2024 21:09:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Mon, May 20, 2024 at 1:09 PM Tom Lane <[email protected]> wrote:\n> (The cfbot tends to discourage this, since as soon as one of the\n> patches is committed it no longer knows how to apply the rest.\n> Can we improve on that tooling somehow?)\n\nCfbot currently applies patches with (GNU) patch\n--no-backup-if-mismatch -p1 -V none -f. The -f means that any\nquestions of the form \"Reversed (or previously applied) patch\ndetected! Assume -R? [y]\" is answered with \"yes\", and the operation\nfails. I wondered if --forward would be better. 
It's the right idea,\nbut it seems to be useless in practice for this purpose because the\ncommand still fails:\n\ntmunro@phonebox postgresql % patch --forward -p1 < x.patch || echo XXX\nfailed $?\npatching file 'src/backend/postmaster/postmaster.c'\nIgnoring previously applied (or reversed) patch.\n1 out of 1 hunks ignored--saving rejects to\n'src/backend/postmaster/postmaster.c.rej'\nXXX failed 1\n\nI wondered if it might be distinguishable from other kinds of failure\nthat should stop progress, but nope:\n\n patch's exit status is 0 if all hunks are applied successfully, 1\n if some hunks cannot be applied or there were merge conflicts,\n and 2 if there is more serious trouble. When applying a set of\n patches in a loop it behooves you to check this exit status so\n you don't apply a later patch to a partially patched file.\n\nI guess I could parse stdout or whatever that is and detect\nall-hunks-ignored condition, but that doesn't sound like fun...\n\nPerhaps cfbot should test explicitly for patches that have already\nbeen applied with something like \"git apply --reverse --check\", and\nskip them. That would work for exact patches, but of course it would\nbe confused by any tweaks made before committing. If going that way,\nit might make sense to switch to git apply/am (instead of GNU patch),\nto avoid contradictory conclusions.\n\nThe reason I was using GNU patch in the first place is that it is/was\na little more tolerant of some of the patches people used to send a\nfew years back, but now I expect everyone uses git format-patch and\nwould be prepared to change their ways if not. In the past we had a\ncouple of cases of the reverse, that is, GNU patch couldn't apply\nsomething that format-patch produced (some edge case of renaming,\nIIRC) and I'm sorry that I never got around to changing that.\n\nSometimes I question the sanity of the whole thing. Considering\ncfbot's original \"zero-effort CI\" goal (or I guess \"zero-extra-effort\"\nwould be better), I was curious about what other projects had the same\nidea, or whether we're really just starting at the \"wrong end\", and\ncame up with:\n\nhttps://github.com/getpatchwork/patchwork\nhttp://vger.kernel.org/bpfconf2022_material/lsfmmbpf2022-bpf-ci.pdf\n<-- example user\nhttps://github.com/patchew-project/patchew\n\nActually cfbot requires more effort than those, because it's driven\nfirst by Commitfest app registration. Those projects are extremists\nIIUC: just write to a mailing list, no other bureaucracy at all (at\nleast for most participants, presumably administrators can adjust the\nstatus in some database when things go wrong?). 
We're actually\nhalfway to Gitlab et al already, with a web account and interaction\nrequired to start the process of submitting a patch for consideration.\nWhat I'm less clear on is who else has come up with the \"bitrot\" test\nidea, either at the mailing list or web extremist ends of the scale.\nThose are also generic tools, and cfbot obviously knows lots of things\nabout PostgreSQL, like the \"highlights\" and probably more things I'm\nforgetting.\n\n\n", "msg_date": "Mon, 20 May 2024 17:49:40 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "> On 20 May 2024, at 07:49, Thomas Munro <[email protected]> wrote:\n\n> We're actually\n> halfway to Gitlab et al already, with a web account and interaction\n> required to start the process of submitting a patch for consideration.\n\nAnother Web<->Mailinglist extreme is Git themselves who have a Github bridge\nfor integration with their usual patch-on-mailinglist workflow.\n\nhttps://github.com/gitgitgadget/gitgitgadget\n\n> What I'm less clear on is who else has come up with the \"bitrot\" test\n> idea, either at the mailing list or web extremist ends of the scale.\n\nMost web based platforms like Github register the patch against the tree at the\ntime of submitting, and won't refresh unless the user does so. Github does\ndetect bitrot and show a \"this cannot be merged\" error message, but it doesn't\nalter any state on the PR etc.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 20 May 2024 10:16:24 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 20/05/2024 02:09, Tom Lane wrote:\n\n> Bruce Momjian <[email protected]> writes:\n>> On Sun, May 19, 2024 at 03:18:11PM -0400, Tom Lane wrote:\n>>> * Another reason for things sitting a long time is that they're too\n>>> big to review without an unreasonable amount of effort. We should\n>>> encourage authors to break large patches into smaller stepwise\n>>> refinements. It may seem like that will result in taking forever\n>>> to reach the end goal, but dropping a huge patchset on the community\n>>> isn't going to give speedy results either.\n> \n>> I think it is sometimes hard to incrementally apply patches if the\n>> long-term goal isn't agreed or know to be achievable.\n> \n> True. The value of the earlier patches in the series can be unclear\n> if you don't understand what the end goal is. But I think people\n> could post a \"road map\" of how they intend a patch series to go.\n> \n> Another way of looking at this is that sometimes people do post large\n> chunks of work in long many-patch sets, but we tend to look at the\n> whole series as something to review and commit as one (or I do, at\n> least). We should be more willing to bite off and push the earlier\n> patches in such a series even when the later ones aren't entirely\n> done.\n\n[resend due to DKIM header failure]\n\nRight. As an observation from someone who used to dabble in PostgreSQL internals a \nnumber of years ago (and who now spends a lot of time working on other well-known \nprojects), this is something that really stands out with the current PostgreSQL workflow.\n\nIn general you find that a series consists of 2 parts: 1) a set of refactorings to \nenable a new feature and 2) the new feature itself. 
Even if the details of 2) are \nstill under discussion, often it is possible to merge 1) fairly quickly which also \nhas the knock-on effect of reducing the size of later iterations of the series. This \nalso helps with new contributors since having parts of the series merged sooner helps \nthem feel valued and helps to provide immediate feedback.\n\nThe other issue I mentioned last time this discussion arose is that I really miss the \nstandard email-based git workflow for PostgreSQL: writing a versioned cover letter \nhelps reviewers as the summary provides a list of changes since the last iteration, \nand having separate emails with a PATCH prefix allows patches to be located quickly.\n\nFinally as a reviewer I find that having contributors use git format-patch and \nsend-email makes it easier for me to contribute, since I can simply hit \"Reply\" and \nadd in-line comments for the parts of the patch I feel I can review. At the moment I \nhave to locate the emails that contain patches and save the attachments before I can \neven get to starting the review process, making the initial review barrier that \nlittle bit higher.\n\n\nATB,\n\nMark.\n\n\n", "msg_date": "Mon, 20 May 2024 10:19:05 +0100", "msg_from": "Mark Cave-Ayland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 20/05/2024 06:56, Mark Cave-Ayland wrote:\n\n> In general you find that a series consists of 2 parts: 1) a set of refactorings to \n> enable a new feature and 2) the new feature itself. Even if the details of 2) are \n> still under discussion, often it is possible to merge 1) fairly quickly which also \n> has the knock-on effect of reducing the size of later iterations of the series. This \n> also helps with new contributors since having parts of the series merged sooner helps \n> them feel valued and helps to provide immediate feedback.\n\n[resend due to DKIM header failure]\n\nSomething else I also notice is that PostgreSQL doesn't have a MAINTAINERS or \nequivalent file, so when submitting patches it's difficult to know who is expected to \nreview and/or commit changes to a particular part of the codebase (this is true both \nwith and without the CF system).\n\n\nATB,\n\nMark.\n\n\n", "msg_date": "Mon, 20 May 2024 10:20:09 +0100", "msg_from": "Mark Cave-Ayland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 2024-May-19, Tom Lane wrote:\n\n> (The cfbot tends to discourage this, since as soon as one of the\n> patches is committed it no longer knows how to apply the rest.\n> Can we improve on that tooling somehow?)\n\nI think a necessary next step to further improve the cfbot is to get it\nintegrated in pginfra. 
Right now it runs somewhere in Thomas's servers\nor something, and there's no real integration with the commitfest proper\nexcept by scraping.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n", "msg_date": "Mon, 20 May 2024 13:41:13 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Sun, May 19, 2024 at 3:18 PM Tom Lane <[email protected]> wrote:\n> Dmitry Dolgov <[email protected]> writes:\n> > How do we make sure this time it will be different?\n>\n> Things *have* changed, if you take the long view.\n\nThat's true, but I think Dmitry's point is well-taken all the same: we\nhaven't really made a significant process improvement in many years,\nand in some ways, I think things have been slowly degrading. I don't\nbelieve it's necessary or possible to solve all of the accumulated\nproblems overnight, but we need to get serious about admitting that\nthere is a problem which, in my opinion, is an existential threat to\nthe project.\n\n> IMV one really fundamental problem is the same as it's been for a\n> couple of decades: too many patch submissions, too few committers.\n> We can't fix that by just appointing a ton more committers, at least\n> not if we want to keep the project's quality up. We have to grow\n> qualified committers. IIRC, one of the main reasons for instituting\n> the commitfests at all was the hope that if we got more people to\n> spend time reading the code and reviewing patches, some of them would\n> learn enough to become good committers. I think that's worked, again\n> taking a long view. I just did some quick statistics on the commit\n> history, and I see that we were hovering at somewhere around ten\n> active committers from 1999 to 2012, but since then it's slowly crept\n> up to about two dozen today. (I'm counting \"active\" as \"at least 10\n> commits per year\", which is an arbitrary cutoff --- feel free to slice\n> the data for yourself.) Meanwhile the number of submissions has also\n> grown, so I'm not sure how much better the load ratio is.\n\nThat's an interesting statistic. I had not realized that the numbers\nhad actually grown significantly. However, I think that the\nnew-patch-submitter experience has not improved; if anything, I think\nit may have gotten worse. It's hard to compare the subjective\nexperience between 2008 when I first got involved and now, especially\nsince at that time I was a rank newcomer experiencing things as a\nnewcomer, and now I'm a long-time committer trying to judge the\nnewcomer experience. But it seems to me that when I started, there was\nmore of a \"middle tier\" of people who were not committers but could do\nmeaningful review of patches and help you push them in the direction\nof being something that a committer might not loathe. Now, I feel like\nthere are a lot of non-committer reviews that aren't actually adding a\nlot of value: people come along and say the patch doesn't apply, or a\nword is spelled wrong, and we don't get meaningful review of whether\nthe design makes sense, or if we do, it's wrong. Perhaps this is just\nviewing the past with rose-colored glasses: I wasn't in as good a\nposition to judge the value of reviews that I gave and received at\nthat point as I am now. 
But what I do think is happening today is that\na lot of committer energy gets focused on the promising non-committers\nwho someone thinks might be able to become committers, and other\nnon-committers struggle to make any headway.\n\nOn the plus side, I think we make more of an effort not to be a jerk\nto newcomers than we used to. I also have a strong hunch that it may\nnot be as good as it needs to be.\n\n> * Patches that sit in the queue a long time tend to be ones that lack\n> consensus, either about the goal or the details of how to achieve it.\n> Sometimes \"lacks consensus\" really means \"nobody but the author thinks\n> this is a good idea, but we haven't mustered the will to say no\".\n> But I think it's more usually the case that there are plausible\n> competing opinions about what the patch should do or how it should\n> do it. How can we resolve such differences and get something done?\n\nThis is a great question. We need to do better with that.\n\nAlso, it would be helpful to have better ways of handling it when the\nauthor has gotten to a certain point with it but doesn't necessarily\nhave the time/skills/whatever to drive it forward. Such patches are\nquite often a good idea, but it's not clear what we can do with them\nprocedurally other than hit the reject button, which doesn't feel\ngreat.\n\n> * Another reason for things sitting a long time is that they're too\n> big to review without an unreasonable amount of effort. We should\n> encourage authors to break large patches into smaller stepwise\n> refinements. It may seem like that will result in taking forever\n> to reach the end goal, but dropping a huge patchset on the community\n> isn't going to give speedy results either.\n\nEspecially if it has a high rate of subtle defects, which is common.\n\n> * Before starting this thread, Robert did a lot of very valuable\n> review of some individual patches. I think what prompted him to\n> start the thread was the realization that he'd only made a small\n> dent in the problem. Maybe we could divide and conquer: get a\n> dozen-or-so senior contributors to split up the list of pending\n> patches and then look at each one with an eye to what needs to\n> happen to move it along (*not* to commit it right away, although\n> in some cases maybe that's the thing to do). It'd be great if\n> that could happen just before each commitfest, but that's probably\n> not practical with the current patch volume. What I'm thinking\n> for the moment is to try to make that happen once a year or so.\n\nI like this idea.\n\n> Yeah, all this stuff ultimately gets done \"for the good of the\n> project\", which isn't the most reliable motivation perhaps.\n> I don't see a better one...\n\nThis is the really hard part.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 May 2024 10:16:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Mon, May 20, 2024 at 7:49 AM Alvaro Herrera <[email protected]> wrote:\n> On 2024-May-19, Tom Lane wrote:\n> > (The cfbot tends to discourage this, since as soon as one of the\n> > patches is committed it no longer knows how to apply the rest.\n> > Can we improve on that tooling somehow?)\n>\n> I think a necessary next step to further improve the cfbot is to get it\n> integrated in pginfra. 
Right now it runs somewhere in Thomas's servers\n> or something, and there's no real integration with the commitfest proper\n> except by scraping.\n\nYes, I think we really need to fix this. Also, there's a bunch of\nmechanical work that could be done to make cfbot better, like making\nthe navigation not reset the scroll every time you drill down one\nlevel through the build products.\n\nI would also like to see the buildfarm and CI converged in some way.\nI'm not sure how. I understand that the buildfarm tests more different\nconfigurations than we can reasonably afford to do in CI, but there is\nno sense in pretending that having two different systems doing similar\njobs has no cost.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 May 2024 10:18:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, 17 May 2024 at 15:02, Peter Eisentraut <[email protected]> wrote:\n>\n> On 17.05.24 09:32, Heikki Linnakangas wrote:\n> > Dunno about having to click a link or sparkly gold borders, but +1 on\n> > having a free-form text box for notes like that. Things like \"cfbot says\n> > this has bitrotted\" or \"Will review this after other patch this depends\n> > on\". On the mailing list, notes like that are both noisy and easily lost\n> > in the threads. But as a little free-form text box on the commitfest, it\n> > would be handy.\n> >\n> > One risk is that if we start to rely too much on that, or on the other\n> > fields in the commitfest app for that matter, we de-value the mailing\n> > list archives. I'm not too worried about it, the idea is that the\n> > summary box just summarizes what's already been said on the mailing\n> > list, or is transient information like \"I'll get to this tomorrow\"\n> > that's not interesting to archive.\n>\n> We already have the annotations feature, which is kind of this.\n\nBut annotations are bound to mails in attached mail threads, rather\nthan a generic text input at the CF entry level. There isn't always an\nappropriate link between (mail or in-person) conversations about the\npatch, and a summary of that conversation.\n\n----\n\nThe CommitFest App has several features, but contains little\ninformation about how we're expected to use it. To start addressing\nthis limitation, I've just created a wiki page about the CFA [0], with\na handbook section. Feel free to extend or update the information as\nappropriate; I've only added that information the best of my\nknowledge, so it may contain wrong, incomplete and/or inaccurate\ninformation.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://wiki.postgresql.org/wiki/CommitFest_App\n\n\n", "msg_date": "Mon, 20 May 2024 17:55:41 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Hi everyone!\n\nI would like to share another perspective on this as a relatively new user\nof the commitfest app. I really like the concept behind the commitfest app\nbut while using it, sometimes I feel that we can do a better job at having\nsome sort of a 'metainfo' for the patch.\nAlthough in some cases the patch title is enough to understand what it is\ndoing but for new contributors and reviewers it would be really helpful if\nwe can have something more explanatory instead of just having topics like\n'bug fix', 'features' etc. 
Some sort of a small summarised description for\na patch explaining its history and need in brief would be really helpful\nfor people to get started instead of trying to make sense of a very large\nemail thread. This is a small addition but it would definitely make it\neasier for new reviewers and contributors.\n\nRegards,\nAkshat Jaimini\n\nOn Mon, 20 May, 2024, 21:26 Matthias van de Meent, <\[email protected]> wrote:\n\n> On Fri, 17 May 2024 at 15:02, Peter Eisentraut <[email protected]>\n> wrote:\n> >\n> > On 17.05.24 09:32, Heikki Linnakangas wrote:\n> > > Dunno about having to click a link or sparkly gold borders, but +1 on\n> > > having a free-form text box for notes like that. Things like \"cfbot\n> says\n> > > this has bitrotted\" or \"Will review this after other patch this depends\n> > > on\". On the mailing list, notes like that are both noisy and easily\n> lost\n> > > in the threads. But as a little free-form text box on the commitfest,\n> it\n> > > would be handy.\n> > >\n> > > One risk is that if we start to rely too much on that, or on the other\n> > > fields in the commitfest app for that matter, we de-value the mailing\n> > > list archives. I'm not too worried about it, the idea is that the\n> > > summary box just summarizes what's already been said on the mailing\n> > > list, or is transient information like \"I'll get to this tomorrow\"\n> > > that's not interesting to archive.\n> >\n> > We already have the annotations feature, which is kind of this.\n>\n> But annotations are bound to mails in attached mail threads, rather\n> than a generic text input at the CF entry level. There isn't always an\n> appropriate link between (mail or in-person) conversations about the\n> patch, and a summary of that conversation.\n>\n> ----\n>\n> The CommitFest App has several features, but contains little\n> information about how we're expected to use it. To start addressing\n> this limitation, I've just created a wiki page about the CFA [0], with\n> a handbook section. Feel free to extend or update the information as\n> appropriate; I've only added that information the best of my\n> knowledge, so it may contain wrong, incomplete and/or inaccurate\n> information.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n>\n> [0] https://wiki.postgresql.org/wiki/CommitFest_App\n>\n>\n>\n\n", "msg_date": "Tue, 21 May 2024 02:29:00 +0530", "msg_from": "Akshat Jaimini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Sun, May 19, 2024 at 10:50 PM Thomas Munro <[email protected]> wrote:\n> Sometimes I question the sanity of the whole thing. Considering\n> cfbot's original \"zero-effort CI\" goal (or I guess \"zero-extra-effort\"\n> would be better), I was curious about what other projects had the same\n> idea, or whether we're really just starting at the \"wrong end\", and\n> came up with:\n>\n> https://github.com/getpatchwork/patchwork\n> http://vger.kernel.org/bpfconf2022_material/lsfmmbpf2022-bpf-ci.pdf\n> <-- example user\n> https://github.com/patchew-project/patchew\n>\n> Actually cfbot requires more effort than those, because it's driven\n> first by Commitfest app registration. Those projects are extremists\n> IIUC: just write to a mailing list, no other bureaucracy at all (at\n> least for most participants, presumably administrators can adjust the\n> status in some database when things go wrong?). We're actually\n> halfway to Gitlab et al already, with a web account and interaction\n> required to start the process of submitting a patch for consideration.\n> What I'm less clear on is who else has come up with the \"bitrot\" test\n> idea, either at the mailing list or web extremist ends of the scale.\n> Those are also generic tools, and cfbot obviously knows lots of things\n> about PostgreSQL, like the \"highlights\" and probably more things I'm\n> forgetting.\n\nFor what it's worth, a few years before cfbot, I had privately\nattempted a similar idea for Postgres [1]. The project here is\nbasically a very simple API and infrastructure for running builds and\nmake check. A previous version [2] subscribed to the mailing lists and\nused Travis CI (and accidentally spammed some Postgres committers\n[3]). 
The project petered out as my work responsibilities shifted (and\nto be honest, after I felt sheepish about the spamming).\n\nI think cfbot is way, way ahead of where my project got at this point.\nBut since you asked about other similar projects, I'm happy to discuss\nfurther if it's helpful to bounce ideas off someone who's thought\nabout the same problem (though not for a while now, I admit).\n\nThanks,\nMaciek\n\n[1]: https://github.com/msakrejda/pg-quilter\n[2]: https://github.com/msakrejda/pg-quilter/blob/2038d9493f9aa7d43d3eb0aec1d299b94624602e/lib/pg-quilter/git_harness.rb\n[3]: https://www.postgresql.org/message-id/flat/CAM3SWZQboGoVYAJNoPMx%3DuDLE%2BZh5k2MQa4dWk91YPGDxuY-gQ%40mail.gmail.com#e24bf57b77cfb6c440c999c018c46e92\n\n\n", "msg_date": "Mon, 20 May 2024 14:33:27 -0700", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\n\nOn 5/20/24 16:16, Robert Haas wrote:\n> On Sun, May 19, 2024 at 3:18 PM Tom Lane <[email protected]> wrote:\n>\n>...\n> \n>> * Another reason for things sitting a long time is that they're too\n>> big to review without an unreasonable amount of effort. We should\n>> encourage authors to break large patches into smaller stepwise\n>> refinements. It may seem like that will result in taking forever\n>> to reach the end goal, but dropping a huge patchset on the community\n>> isn't going to give speedy results either.\n> \n> Especially if it has a high rate of subtle defects, which is common.\n> \n\nI think breaking large patches into reasonably small parts is a very\nimportant thing. Not only is it really difficult (or perhaps even\npractically impossible) to review patches over a certain size, because\nyou have to grok and review everything at once. But it's also not great\nfrom the cost/benefit POV - the improvement may be very beneficial, but\nif it's one huge lump of code that never gets committable as a whole,\nthere's no benefit in practice. Which makes it less likely I'll even\nstart looking at the patch very closely, because there's a risk it'd be\njust a waste of time in the end.\n\nSo I think this is another important reason to advise people to split\npatches into smaller parts - not only it makes it easier to review, it\nmakes it possible to review and commit the parts incrementally, getting\nat least some benefits early.\n\n>> * Before starting this thread, Robert did a lot of very valuable\n>> review of some individual patches. I think what prompted him to\n>> start the thread was the realization that he'd only made a small\n>> dent in the problem. Maybe we could divide and conquer: get a\n>> dozen-or-so senior contributors to split up the list of pending\n>> patches and then look at each one with an eye to what needs to\n>> happen to move it along (*not* to commit it right away, although\n>> in some cases maybe that's the thing to do). It'd be great if\n>> that could happen just before each commitfest, but that's probably\n>> not practical with the current patch volume. What I'm thinking\n>> for the moment is to try to make that happen once a year or so.\n> \n> I like this idea.\n> \n\nMe too. Do you think it'd happen throughout the whole year, or at some\nparticular moment?\n\nWe used to do a \"triage\" at the FOSDEM PGDay meeting, but that used to\nbe more of an ad hoc thing to use the remaining time, with only a small\nsubset of people. 
But that seems pretty late in the dev cycle, I guess\nwe'd want it to happen early, perhaps during the first CF?\n\nFWIW this reminds me the \"CAN reports\" tracking the current \"conditions,\nactions and needs\" of a ticket. I do that for internal stuff, and I find\nthat quite helpful (as long as it's kept up to date).\n\n>> Yeah, all this stuff ultimately gets done \"for the good of the\n>> project\", which isn't the most reliable motivation perhaps.\n>> I don't see a better one...\n> \n> This is the really hard part.\n> \n\nI think we have plenty of motivated people with good intentions. Some\nare motivated by personal interest, some by trying to achieve stuff\nrelated to their work/research/... I don't think the exact reasons\nmatter too much, and it's often a combination.\n\nIMHO we should look at this from the other end - people are motivated to\nget a patch reviewed & committed, and if we introduce a process that's\nmore likely to lead to that result, people will mostly adopt that.\n\nAnd if we could make that process more convenient by improving the CF\napp to support it, that'd be even better ... I'm mostly used to the\nmailing list idea, but with the volume it's a constant struggle to keep\ntrack of new patch versions that I wanted/promised to review, etc. The\nCF app helps with that a little bit, because I can \"become a reviewer\"\nbut I still don't get notifications or anything like that :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 May 2024 17:40:41 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 5/20/24 16:16, Robert Haas wrote:\n>> On Sun, May 19, 2024 at 3:18 PM Tom Lane <[email protected]> wrote:\n>>> * Before starting this thread, Robert did a lot of very valuable\n>>> review of some individual patches. I think what prompted him to\n>>> start the thread was the realization that he'd only made a small\n>>> dent in the problem. Maybe we could divide and conquer: get a\n>>> dozen-or-so senior contributors to split up the list of pending\n>>> patches and then look at each one with an eye to what needs to\n>>> happen to move it along (*not* to commit it right away, although\n>>> in some cases maybe that's the thing to do). It'd be great if\n>>> that could happen just before each commitfest, but that's probably\n>>> not practical with the current patch volume. What I'm thinking\n>>> for the moment is to try to make that happen once a year or so.\n\n>> I like this idea.\n\n> Me too. Do you think it'd happen throughout the whole year, or at some\n> particular moment?\n\nI was imagining a focused community effort spanning a few days to\na week. It seems more likely to actually happen if we attack it\nthat way than if it's spread out as something people will do when\nthey get around to it. Of course the downside is finding a week\nwhen everybody is available.\n\n> We used to do a \"triage\" at the FOSDEM PGDay meeting, but that used to\n> be more of an ad hoc thing to use the remaining time, with only a small\n> subset of people. 
But that seems pretty late in the dev cycle, I guess\n> we'd want it to happen early, perhaps during the first CF?\n\nYeah, early in the cycle seems more useful, although the summer might\nbe the worst time for peoples' availability.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 13:09:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\n\nOn 5/24/24 19:09, Tom Lane wrote:\n> Tomas Vondra <[email protected]> writes:\n>> On 5/20/24 16:16, Robert Haas wrote:\n>>> On Sun, May 19, 2024 at 3:18 PM Tom Lane <[email protected]> wrote:\n>>>> * Before starting this thread, Robert did a lot of very valuable\n>>>> review of some individual patches. I think what prompted him to\n>>>> start the thread was the realization that he'd only made a small\n>>>> dent in the problem. Maybe we could divide and conquer: get a\n>>>> dozen-or-so senior contributors to split up the list of pending\n>>>> patches and then look at each one with an eye to what needs to\n>>>> happen to move it along (*not* to commit it right away, although\n>>>> in some cases maybe that's the thing to do). It'd be great if\n>>>> that could happen just before each commitfest, but that's probably\n>>>> not practical with the current patch volume. What I'm thinking\n>>>> for the moment is to try to make that happen once a year or so.\n> \n>>> I like this idea.\n> \n>> Me too. Do you think it'd happen throughout the whole year, or at some\n>> particular moment?\n> \n> I was imagining a focused community effort spanning a few days to\n> a week. It seems more likely to actually happen if we attack it\n> that way than if it's spread out as something people will do when\n> they get around to it. Of course the downside is finding a week\n> when everybody is available.\n> \n>> We used to do a \"triage\" at the FOSDEM PGDay meeting, but that used to\n>> be more of an ad hoc thing to use the remaining time, with only a small\n>> subset of people. But that seems pretty late in the dev cycle, I guess\n>> we'd want it to happen early, perhaps during the first CF?\n> \n> Yeah, early in the cycle seems more useful, although the summer might\n> be the worst time for peoples' availability.\n> \n\nI think meeting all these conditions - a week early in the cycle, but\nnot in the summer, when everyone can focus on this - will be difficult.\n\nIf we give up on everyone doing it at the same time, summer would be a\ngood time to do this - it's early in the cycle, and it tends to be a\nquieter period (customers are on vacation too, so fewer incidents).\n\nSo maybe it'd be better to just set some deadline by which this needs to\nbe done, and make sure every pending patch has someone expected to look\nat it? IMHO we're not in position to assign stuff to people, so I guess\npeople would just volunteer anyway - the CF app might track this.\n\nIt's not entirely clear to me if this would effectively mean doing a\nregular review of those patches, or something less time consuming.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 May 2024 21:17:15 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 5/24/24 19:09, Tom Lane wrote:\n>>>> ... 
Maybe we could divide and conquer: get a\n>>>> dozen-or-so senior contributors to split up the list of pending\n>>>> patches and then look at each one with an eye to what needs to\n>>>> happen to move it along (*not* to commit it right away, although\n>>>> in some cases maybe that's the thing to do).\n\n> I think meeting all these conditions - a week early in the cycle, but\n> not in the summer, when everyone can focus on this - will be difficult.\n\nTrue. Perhaps in the fall there'd be a better chance?\n\n> So maybe it'd be better to just set some deadline by which this needs to\n> be done, and make sure every pending patch has someone expected to look\n> at it? IMHO we're not in position to assign stuff to people, so I guess\n> people would just volunteer anyway - the CF app might track this.\n\nOne problem with a time-extended process is that the set of CF entries\nis not static, so a predetermined division of labor will result in\nmissing some newly-arrived entries. Maybe that's not a problem\nthough; anything newly-arrived is by definition not \"stuck\". But we\nwould definitely need some support for keeping track of what's been\nlooked at and what remains, whereas if it happens over just a few\ndays that's probably not so essential.\n\n> It's not entirely clear to me if this would effectively mean doing a\n> regular review of those patches, or something less time consuming.\n\nI was *not* proposing doing a regular review, unless of course\nsomebody really wants to do that. What I am thinking about is\nsuggesting how to make progress on patches that are stuck, or in some\ncases delivering the bad news that this patch seems unlikely to ever\nget accepted and it's time to cut our losses. (Patches that seem to\nbe moving along in good order probably don't need any attention in\nthis process, beyond determining that that's the case.) That's why\nI think we need some senior people doing this, as their opinions are\nmore likely to be taken seriously.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 15:45:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On 5/24/24 15:45, Tom Lane wrote:\n> I was *not* proposing doing a regular review, unless of course\n> somebody really wants to do that. What I am thinking about is\n> suggesting how to make progress on patches that are stuck, or in some\n> cases delivering the bad news that this patch seems unlikely to ever\n> get accepted and it's time to cut our losses. (Patches that seem to\n> be moving along in good order probably don't need any attention in\n> this process, beyond determining that that's the case.) That's why\n> I think we need some senior people doing this, as their opinions are\n> more likely to be taken seriously.\n\nMaybe do a FOSDEM-style dev meeting with triage review at PG.EU would at \nleast move us forward? 
Granted it is less early and perhaps less often \nthan the thread seems to indicate, but has been tossed around before and \nseems doable.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 24 May 2024 16:23:24 -0400", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> On 5/24/24 15:45, Tom Lane wrote:\n>> I was *not* proposing doing a regular review, unless of course\n>> somebody really wants to do that. What I am thinking about is\n>> suggesting how to make progress on patches that are stuck, or in some\n>> cases delivering the bad news that this patch seems unlikely to ever\n>> get accepted and it's time to cut our losses. (Patches that seem to\n>> be moving along in good order probably don't need any attention in\n>> this process, beyond determining that that's the case.) That's why\n>> I think we need some senior people doing this, as their opinions are\n>> more likely to be taken seriously.\n\n> Maybe do a FOSDEM-style dev meeting with triage review at PG.EU would at \n> least move us forward? Granted it is less early and perhaps less often \n> than the thread seems to indicate, but has been tossed around before and \n> seems doable.\n\nPerhaps. The throughput of an N-person meeting is (at least) a factor\nof N less than the same N people looking at patches individually.\nOn the other hand, the consensus of a meeting is more likely to be\ntaken seriously than a single person's opinion, senior or not.\nSo it could work, but I think we'd need some prefiltering so that\nthe meeting only spends time on those patches already identified as\nneeding help.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 16:44:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "\n\nOn 5/24/24 22:44, Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n>> On 5/24/24 15:45, Tom Lane wrote:\n>>> I was *not* proposing doing a regular review, unless of course\n>>> somebody really wants to do that. What I am thinking about is\n>>> suggesting how to make progress on patches that are stuck, or in some\n>>> cases delivering the bad news that this patch seems unlikely to ever\n>>> get accepted and it's time to cut our losses. (Patches that seem to\n>>> be moving along in good order probably don't need any attention in\n>>> this process, beyond determining that that's the case.) That's why\n>>> I think we need some senior people doing this, as their opinions are\n>>> more likely to be taken seriously.\n> \n>> Maybe do a FOSDEM-style dev meeting with triage review at PG.EU would at \n>> least move us forward? Granted it is less early and perhaps less often \n>> than the thread seems to indicate, but has been tossed around before and \n>> seems doable.\n> \n> Perhaps. 
The throughput of an N-person meeting is (at least) a factor\n> of N less than the same N people looking at patches individually.\n> On the other hand, the consensus of a meeting is more likely to be\n> taken seriously than a single person's opinion, senior or not.\n> So it could work, but I think we'd need some prefiltering so that\n> the meeting only spends time on those patches already identified as\n> needing help.\n> \n\nI personally don't think the FOSDEM triage is a very productive use of\nour time - we go through patches top to bottom, often with little idea\nwhat the current state of the patch is. We always ran out of time after\nlooking at maybe 1/10 of the list.\n\nHaving an in-person discussion about patches would be good, but I think\nwe should split the meeting into much smaller groups for that, each\nlooking at a different subset. And maybe it should be determined in\nadvance, so that people can look at those patches in advance ...\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 24 May 2024 23:23:24 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> I personally don't think the FOSDEM triage is a very productive use of\n> our time - we go through patches top to bottom, often with little idea\n> what the current state of the patch is. We always ran out of time after\n> looking at maybe 1/10 of the list.\n\n> Having an in-person discussion about patches would be good, but I think\n> we should split the meeting into much smaller groups for that, each\n> looking at a different subset. And maybe it should be determined in\n> advance, so that people can look at those patches in advance ...\n\nYeah, subgroups of 3 or 4 people sounds about right. And definitely\nsome advance looking to see which patches need discussion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 17:38:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Thu, May 16, 2024 at 4:00 PM Joe Conway <[email protected]> wrote:\n>\n> On 5/16/24 17:36, Jacob Champion wrote:\n> > On Thu, May 16, 2024 at 2:29 PM Joe Conway <[email protected]> wrote:\n> >> If no one, including the author (new or otherwise) is interested in\n> >> shepherding a particular patch, what chance does it have of ever getting\n> >> committed?\n> >\n> > That's a very different thing from what I think will actually happen, which is\n> >\n> > - new author posts patch\n> > - community member says \"use commitfest!\"\n>\n> Here is where we should point them at something that explains the care\n> and feeding requirements to successfully grow a patch into a commit.\n>\n> > - new author registers patch\n> > - no one reviews it\n> > - patch gets automatically booted\n>\n> Part of the care and feeding instructions should be a warning regarding\n> what happens if you are unsuccessful in the first CF and still want to\n> see it through.\n>\n> > - community member says \"register it again!\"\n> > - new author says ಠ_ಠ\n>\n> As long as this is not a surprise ending, I don't see the issue.\n\nI've experienced this in another large open-source project that runs\non Github, not mailing lists, and here's how it goes:\n\n1. I open a PR with a small bugfix (test case included).\n2. 
PR is completely ignored by committers (presumably because they all\nmostly work on their own projects they're getting paid to do).\n3. <3 months goes by>\n4. I may get a comment with \"please rebase!\", or, more frequently, a\nbot closes the issue.\n\nThat cycle is _infuriating_ as a contributor. As much as I don't like\nto hear \"we're not doing this\", I'd far prefer to have that outcome\nthen some automated process closing out my submission without my input\nwhen, as far as I can tell, the real problem is not my lack of\nactivity by the required reviewers simply not looking at it.\n\nSo I'm genuinely confused by you say \"As long as this is not a\nsurprise ending, I don't see the issue.\". Perhaps we're imagining\nsomething different here reading between the lines?\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Tue, 28 May 2024 08:38:53 -0600", "msg_from": "James Coleman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" }, { "msg_contents": "On Fri, May 17, 2024 at 9:59 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, May 17, 2024 at 11:05 AM Tom Lane <[email protected]> wrote:\n> > > We already have gone back to that model. We just haven't admitted it\n> > > yet. And we're never going to get out of it until we find a way to get\n> > > the contents of the CommitFest application down to a more reasonable\n> > > size and level of complexity. There's just no way everyone's up for\n> > > that level of pain. I'm not sure not up for that level of pain.\n> >\n> > Yeah, we clearly need to get the patch list to a point of\n> > manageability, but I don't agree that abandoning time-boxed CFs\n> > will improve anything.\n>\n> I'm not sure. Suppose we plotted commits generally, or commits of\n> non-committer patches, or reviews on-list, vs. time. Would we see any\n> uptick in activity during CommitFests? Would it vary by committer? I\n> don't know. I bet the difference wouldn't be as much as Tom Lane would\n> like to see. Realistically, we can't manage how contributors spend\n> their time all that much, and trying to do so is largely tilting at\n> windmills.\n>\n> To me, the value of time-based CommitFests is as a vehicle to ensure\n> freshness, or cleanup, or whatever word you want to do. If you just\n> make a list of things that need attention and keep incrementally\n> updating it, eventually for various reasons that list no longer\n> reflects your current list of priorities. At some point, you have to\n> throw it out and make a new list, or at least that's what always\n> happens to me. We've fallen into a system where the default treatment\n> of a patch is to be carried over to the next CommitFest and CfMs are\n> expected to exert effort to keep patches from getting that default\n> treatment when they shouldn't. But that does not scale. We need a\n> system where things drop off the list unless somebody makes an effort\n> to keep them on the list, and the easiest way to do that is to\n> periodically make a *fresh* list that *doesn't* just clone some\n> previous list.\n>\n> I realize that many people here are (rightly!) concerned with\n> burdening patch authors with more steps that they have to follow. But\n> the current system is serving new patch authors very poorly. If they\n> get attention, it's much more likely to be because somebody saw their\n> email and wrote back than it is to be because somebody went through\n> the CommitFest and found their entry and was like \"oh, I should review\n> this\". 
Honestly, if we get to a situation where a patch author is sad\n> because they have to click a link every 2 months to say \"yeah, I'm\n> still here, please review my patch,\" we've already lost the game. That\n> person isn't sad because we asked them to click a link. They're sad\n> it's already been N * 2 months and nothing has happened.\n\nYes, this is exactly right.\n\nThis is an off-the-wall idea, but what if the inverse is actually what\nwe need? Suppose there's been a decent amount of activity previously\non the thread, but no new patch version or CF app activity (e.g.,\nstatus changes moving it forward) or maybe even just the emails died\noff: perhaps that should trigger a question to the author to see what\nthey want the status to be -- i.e., \"is this still 'needs review', or\nis it really 'waiting on author' or 'not my priority right now'?\"\n\nIt seems possible to me that that would actually remove a lot of the\npatches from the current CF when a author simply hasn't had time to\nrespond yet (I know this is the case for me because the time I have to\nwork on patches fluctuates significantly), but it might also serve to\nhighlight patches that simply haven't had any review at all.\n\nI'd like to add a feature to the CF app that shows me my current\npatches by status, and I'd also like to have the option to have the CF\napp notify me when someone changes the status (I've noticed before\nthat often a status gets changed without notification on list, and\nthen I get surprised months later when it's stuck in \"waiting on\nauthor\"). Do either/both of those seem reasonable to add?\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Tue, 28 May 2024 08:51:21 -0600", "msg_from": "James Coleman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: commitfest.postgresql.org is no longer fit for purpose" } ]
[ { "msg_contents": "Hi,\n\nI built a gist index to try to test a theory in some other thread. For that\nthe indexes need to cover a lot of entries. With gist creating the index took\na long time, which made me strace the index build process.\n\nWhich lead me to notice this:\n\n...\nopenat(AT_FDCWD, \"base/16462/17747_fsm\", O_RDWR|O_CLOEXEC) = -1 ENOENT (No such file or directory)\nlseek(56, 0, SEEK_END) = 40173568\npwrite64(56, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., 8192, 40173568) = 8192\nlseek(56, 0, SEEK_END) = 40181760\nopenat(AT_FDCWD, \"base/16462/17747_fsm\", O_RDWR|O_CLOEXEC) = -1 ENOENT (No such file or directory)\nlseek(56, 0, SEEK_END) = 40181760\npwrite64(56, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., 8192, 40181760) = 8192\nlseek(56, 0, SEEK_END) = 40189952\nlseek(56, 0, SEEK_END) = 40189952\nopenat(AT_FDCWD, \"base/16462/17747_fsm\", O_RDWR|O_CLOEXEC) = -1 ENOENT (No such file or directory)\nlseek(56, 0, SEEK_END) = 40189952\n...\n\nI.e. for every write we try and fail to open the FSM.\n\n#0 __libc_open64 (file=0x30469c8 \"base/16462/17747_fsm\", oflag=524290) at ../sysdeps/unix/sysv/linux/open64.c:30\n#1 0x0000000000cbe582 in BasicOpenFilePerm (fileName=0x30469c8 \"base/16462/17747_fsm\", fileFlags=524290, fileMode=384)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/file/fd.c:1134\n#2 0x0000000000cbf057 in PathNameOpenFilePerm (fileName=0x30469c8 \"base/16462/17747_fsm\", fileFlags=524290, fileMode=384)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/file/fd.c:1620\n#3 0x0000000000cbef88 in PathNameOpenFile (fileName=0x30469c8 \"base/16462/17747_fsm\", fileFlags=2)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/file/fd.c:1577\n#4 0x0000000000cfeed2 in mdopenfork (reln=0x2fd5af8, forknum=FSM_FORKNUM, behavior=2)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/smgr/md.c:649\n#5 0x0000000000cfe20b in mdexists (reln=0x2fd5af8, forknum=FSM_FORKNUM) at ../../../../../home/andres/src/postgresql/src/backend/storage/smgr/md.c:181\n#6 0x0000000000d015b3 in smgrexists (reln=0x2fd5af8, forknum=FSM_FORKNUM) at ../../../../../home/andres/src/postgresql/src/backend/storage/smgr/smgr.c:400\n#7 0x0000000000cc5078 in fsm_readbuf (rel=0x7f5b87977f38, addr=..., extend=false)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/freespace/freespace.c:571\n#8 0x0000000000cc52d3 in fsm_search (rel=0x7f5b87977f38, min_cat=128 '\\200')\n at ../../../../../home/andres/src/postgresql/src/backend/storage/freespace/freespace.c:690\n#9 0x0000000000cc47a5 in GetPageWithFreeSpace (rel=0x7f5b87977f38, spaceNeeded=4096)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/freespace/freespace.c:141\n#10 0x0000000000cc5e52 in GetFreeIndexPage (rel=0x7f5b87977f38) at ../../../../../home/andres/src/postgresql/src/backend/storage/freespace/indexfsm.c:40\n#11 0x0000000000855d4a in gistNewBuffer (r=0x7f5b87977f38, heaprel=0x7f5b87979688)\n at ../../../../../home/andres/src/postgresql/src/backend/access/gist/gistutil.c:831\n#12 0x0000000000844541 in gistplacetopage (rel=0x7f5b87977f38, freespace=819, giststate=0x2feae68, buffer=67261, itup=0x7ffd3ce86c30, ntup=1, oldoffnum=0,\n newblkno=0x0, leftchildbuf=0, splitinfo=0x7ffd3ce86be0, markfollowright=true, heapRel=0x7f5b87979688, is_build=true)\n at ../../../../../home/andres/src/postgresql/src/backend/access/gist/gist.c:353\n#13 
0x0000000000846263 in gistinserttuples (state=0x7ffd3ce86c90, stack=0x2fde7e8, giststate=0x2feae68, tuples=0x7ffd3ce86c30, ntup=1, oldoffnum=0,\n leftchild=0, rightchild=0, unlockbuf=false, unlockleftchild=false) at ../../../../../home/andres/src/postgresql/src/backend/access/gist/gist.c:1298\n#14 0x00000000008461a7 in gistinserttuple (state=0x7ffd3ce86c90, stack=0x2fde7e8, giststate=0x2feae68, tuple=0x2fde708, oldoffnum=0)\n at ../../../../../home/andres/src/postgresql/src/backend/access/gist/gist.c:1251\n#15 0x0000000000845681 in gistdoinsert (r=0x7f5b87977f38, itup=0x2fde708, freespace=819, giststate=0x2feae68, heapRel=0x7f5b87979688, is_build=true)\n at ../../../../../home/andres/src/postgresql/src/backend/access/gist/gist.c:887\n#16 0x0000000000848c79 in gistBuildCallback (index=0x7f5b87977f38, tid=0x2f31f74, values=0x7ffd3ce87060, isnull=0x7ffd3ce87040, tupleIsAlive=true,\n state=0x7ffd3ce87340) at ../../../../../home/andres/src/postgresql/src/backend/access/gist/gistbuild.c:863\n#17 0x000000000087d605 in heapam_index_build_range_scan (heapRelation=0x7f5b87979688, indexRelation=0x7f5b87977f38, indexInfo=0x2fd9f50, allow_sync=true,\n anyvisible=false, progress=true, start_blockno=0, numblocks=4294967295, callback=0x848b6b <gistBuildCallback>, callback_state=0x7ffd3ce87340,\n scan=0x2f31f18) at ../../../../../home/andres/src/postgresql/src/backend/access/heap/heapam_handler.c:1706\n#18 0x0000000000847996 in table_index_build_scan (table_rel=0x7f5b87979688, index_rel=0x7f5b87977f38, index_info=0x2fd9f50, allow_sync=true, progress=true,\n callback=0x848b6b <gistBuildCallback>, callback_state=0x7ffd3ce87340, scan=0x0)\n at ../../../../../home/andres/src/postgresql/src/include/access/tableam.h:1794\n#19 0x0000000000847da8 in gistbuild (heap=0x7f5b87979688, index=0x7f5b87977f38, indexInfo=0x2fd9f50)\n at ../../../../../home/andres/src/postgresql/src/backend/access/gist/gistbuild.c:313\n#20 0x000000000094c945 in index_build (heapRelation=0x7f5b87979688, indexRelation=0x7f5b87977f38, indexInfo=0x2fd9f50, isreindex=false, parallel=true)\n at ../../../../../home/andres/src/postgresql/src/backend/catalog/index.c:3021\n#21 0x000000000094970f in index_create (heapRelation=0x7f5b87979688, indexRelationName=0x2f2f798 \"foo_i_idx1\", indexRelationId=17747, parentIndexRelid=0,\n parentConstraintId=0, relFileNumber=0, indexInfo=0x2fd9f50, indexColNames=0x2f2fc60, accessMethodId=783, tableSpaceId=0, collationIds=0x2f32340,\n opclassIds=0x2f32358, opclassOptions=0x2f32370, coloptions=0x2f32388, stattargets=0x0, reloptions=0, flags=0, constr_flags=0,\n allow_system_table_mods=false, is_internal=false, constraintId=0x7ffd3ce876f4)\n at ../../../../../home/andres/src/postgresql/src/backend/catalog/index.c:1275\n\n\nThe reason we reopen over and over is that we close the file in mdexist():\n\n\t/*\n\t * Close it first, to ensure that we notice if the fork has been unlinked\n\t * since we opened it. As an optimization, we can skip that in recovery,\n\t * which already closes relations when dropping them.\n\t */\n\tif (!InRecovery)\n\t\tmdclose(reln, forknum);\n\nWe call smgrexists as part of this code:\n\nstatic Buffer\nfsm_readbuf(Relation rel, FSMAddress addr, bool extend)\n...\n\t/*\n\t * If we haven't cached the size of the FSM yet, check it first. Also\n\t * recheck if the requested block seems to be past end, since our cached\n\t * value might be stale. 
(We send smgr inval messages on truncation, but\n\t * not on extension.)\n\t */\n\tif (reln->smgr_cached_nblocks[FSM_FORKNUM] == InvalidBlockNumber ||\n\t\tblkno >= reln->smgr_cached_nblocks[FSM_FORKNUM])\n\t{\n\t\t/* Invalidate the cache so smgrnblocks asks the kernel. */\n\t\treln->smgr_cached_nblocks[FSM_FORKNUM] = InvalidBlockNumber;\n\t\tif (smgrexists(reln, FSM_FORKNUM))\n\t\t\tsmgrnblocks(reln, FSM_FORKNUM);\n\t\telse\n\t\t\treln->smgr_cached_nblocks[FSM_FORKNUM] = 0;\n\t}\n\nBecause we set the size to 0 if the the fork doesn't exist, we'll reenter\nduring the next call, and then do the same thing again.\n\n\nI don't think this is a huge performance issue or anything, but somehow it\nseems indicative of something being \"wrong\".\n\nIt seems likely we encounter this issue not just with gist, but I haven't\nchecked yet.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 May 2024 13:17:54 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "gist index builds try to open FSM over and over" } ]
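To make the repetition concrete, here is a tiny standalone model of the cache check quoted above (illustrative only, not PostgreSQL code): once the cached FSM size has been set to 0 and callers keep asking about block 0, the "does the fork exist?" probe, and with it the failed open() seen in the strace, happens again on every call.

#include <stdbool.h>
#include <stdio.h>

#define InvalidBlockNumber 0xFFFFFFFF

static unsigned cached_nblocks = InvalidBlockNumber;
static int	open_attempts = 0;

static bool
fork_exists(void)
{
	open_attempts++;			/* stands in for the ENOENT open() in the strace */
	return false;				/* the FSM fork was never created */
}

static void
readbuf(unsigned blkno)
{
	/* same shape as the fsm_readbuf() check quoted above */
	if (cached_nblocks == InvalidBlockNumber || blkno >= cached_nblocks)
	{
		if (fork_exists())
			cached_nblocks = 42;		/* would be smgrnblocks() */
		else
			cached_nblocks = 0;			/* fork absent: size 0, so "0 >= 0" retriggers next time */
	}
}

int
main(void)
{
	for (int i = 0; i < 1000; i++)
		readbuf(0);				/* each page allocation during the build asks again */
	printf("open() attempts: %d\n", open_attempts);		/* prints 1000 */
	return 0;
}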
[ { "msg_contents": "Hi,\n\nIn the subthread at [1] I needed to trigger multiple rounds of index vacuuming\nwithin one vacuum.\n\nIt turns out that with the new dead tuple implementation, that got actually\nsomewhat expensive. Particularly if all tuples on all pages get deleted, the\nrepresentation is just \"too dense\". Normally that's obviously very good, but\nfor testing, not so much:\n\nWith the minimum setting of maintenance_work_mem=1024kB, a simple table with\nnarrow rows, where all rows are deleted, the first cleanup happens after\n3697812 dead tids. The table for that has to be > ~128MB.\n\nNeeding a ~128MB table to be able to test multiple cleanup passes makes it\nmuch more expensive to test and consequently will lead to worse test coverage.\n\nI think we should consider lowering the minimum setting of\nmaintenance_work_mem to the minimum of work_mem. For real-life workloads\nmaintenance_work_mem=1024kB is going to already be quite bad, so we don't\nprotect users much by forbidding a setting lower than 1MB.\n\n\nJust for comparison, with a limit of 1MB, < 17 needed to do the first cleanup\npass after 174472 dead tuples. That's a 20x improvement. Really nice.\n\nGreetings,\n\nAndres Freund\n\n[1\\ https://postgr.es/m/20240516193953.zdj545efq6vabymd%40awork3.anarazel.de\n\n\n", "msg_date": "Thu, 16 May 2024 13:54:58 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Lowering the minimum value for maintenance_work_mem" }, { "msg_contents": "On Fri, May 17, 2024 at 5:55 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> In the subthread at [1] I needed to trigger multiple rounds of index vacuuming\n> within one vacuum.\n>\n> It turns out that with the new dead tuple implementation, that got actually\n> somewhat expensive. Particularly if all tuples on all pages get deleted, the\n> representation is just \"too dense\". Normally that's obviously very good, but\n> for testing, not so much:\n>\n> With the minimum setting of maintenance_work_mem=1024kB, a simple table with\n> narrow rows, where all rows are deleted, the first cleanup happens after\n> 3697812 dead tids. The table for that has to be > ~128MB.\n>\n> Needing a ~128MB table to be able to test multiple cleanup passes makes it\n> much more expensive to test and consequently will lead to worse test coverage.\n>\n> I think we should consider lowering the minimum setting of\n> maintenance_work_mem to the minimum of work_mem.\n\n+1 for lowering the minimum value of maintenance_work_mem. I've faced\nthe same situation.\n\nEven if a shared tidstore is empty, TidStoreMemoryUsage() returns\n256kB because it's the minimum segment size of DSA, i.e.\nDSA_MIN_SEGMENT_SIZE. So we can lower the minimum maintenance_work_mem\ndown to 256kB, from a vacuum perspective.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 20 May 2024 13:58:28 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the minimum value for maintenance_work_mem" }, { "msg_contents": "On Mon, May 20, 2024 at 11:59 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, May 17, 2024 at 5:55 AM Andres Freund <[email protected]> wrote:\n\n> > I think we should consider lowering the minimum setting of\n> > maintenance_work_mem to the minimum of work_mem.\n>\n> +1 for lowering the minimum value of maintenance_work_mem. 
I've faced\n> the same situation.\n>\n> Even if a shared tidstore is empty, TidStoreMemoryUsage() returns\n> 256kB because it's the minimum segment size of DSA, i.e.\n> DSA_MIN_SEGMENT_SIZE. So we can lower the minimum maintenance_work_mem\n> down to 256kB, from a vacuum perspective.\n\nI've verified 256kB works with both local and shared memory with the\nbelow commands, and 200k records are enough to cause a second round of\nindex cleanup. I don't think we can go much smaller than that without\nchanging how we size the blocks in the node slab contexts (or when\nthey're created), which is currently somewhat arbitrary. That'll need\nsome thought, at least when we get a use case with work_mem as the\nlimit.\n\nset maintenance_work_mem = '256kB';\n\ndrop table if exists test;\ncreate unlogged table test (a int) with (autovacuum_enabled=false);\ninsert into test (a) select i from generate_series(1,200_000) i;\ncreate index on test (a);\n--create index on test (a); -- toggle for parallel vacuum\n\ndelete from test;\nvacuum (verbose) test;\n\nSide note: I'm confused why shared memory works at all in this case,\nsince it failed for 1MB init segments until we allowed callers to\nspecify a smaller init size. The overhead for DSA seems to be\nsignificant for small sizes, as evidenced from the amount of usable\nmemory:\n\nshared:\nINFO: finished vacuuming \"john.public.test\": index scans: 56\n\nlocal:\nINFO: finished vacuuming \"john.public.test\": index scans: 2\n\n\n", "msg_date": "Mon, 20 May 2024 13:05:32 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lowering the minimum value for maintenance_work_mem" } ]
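As a rough cross-check of the ~20x figure above (an approximation that ignores header overhead and the page-boundary trigger, not exact PostgreSQL accounting): before v17 the dead TIDs were kept as a flat array of 6-byte ItemPointerData entries, so 1024kB of maintenance_work_mem holds on the order of 174k TIDs, which lines up with the 174472 reported for the first index pass on <17, versus ~3.7M with the new representation.

#include <stdio.h>

int
main(void)
{
	const long	itemptr_size = 6;			/* ItemPointerData: 4-byte block + 2-byte offset */
	const long	limit_bytes = 1024L * 1024;	/* maintenance_work_mem = 1024kB */

	printf("approx dead TIDs per 1024kB, old array format: %ld\n",
		   limit_bytes / itemptr_size);		/* ~174762 */
	return 0;
}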
[ { "msg_contents": "A number of pg_upgrade steps require connecting to each database and\nrunning a query. When there are many databases, these steps are\nparticularly time-consuming, especially since this is done sequentially in\na single process. At a quick glance, I see the following such steps:\n\n\t* create_logical_replication_slots\n\t* check_for_data_types_usage\n\t* check_for_isn_and_int8_passing_mismatch\n\t* check_for_user_defined_postfix_ops\n\t* check_for_incompatible_polymorphics\n\t* check_for_tables_with_oids\n\t* check_for_user_defined_encoding_conversions\n\t* check_old_cluster_subscription_state\n\t* get_loadable_libraries\n\t* get_db_rel_and_slot_infos\n\t* old_9_6_invalidate_hash_indexes\n\t* report_extension_updates\n\nI set out to parallelize these kinds of steps via multiple threads or\nprocesses, but I ended up realizing that we could likely achieve much of\nthe same gain with libpq's asynchronous APIs. Specifically, both\nestablishing the connections and running the queries can be done without\nblocking, so we can just loop over a handful of slots and advance a simple\nstate machine for each. The attached is a proof-of-concept grade patch for\ndoing this for get_db_rel_and_slot_infos(), which yielded the following\nresults on my laptop for \"pg_upgrade --link --sync-method=syncfs --jobs 8\"\nfor a cluster with 10K empty databases.\n\n\ttotal pg_upgrade_time:\n\t* HEAD: 14m 8s\n\t* patch: 10m 58s\n\n\tget_db_rel_and_slot_infos() on old cluster:\n\t* HEAD: 2m 45s\n\t* patch: 36s\n\n\tget_db_rel_and_slot_infos() on new cluster:\n\t* HEAD: 1m 46s\n\t* patch: 29s\n\nI am posting this early to get thoughts on the general approach. If we\nproceeded with this strategy, I'd probably create some generic tooling that\neach relevant step would provide a set of callback functions.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 16 May 2024 16:16:38 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Thu, 2024-05-16 at 16:16 -0500, Nathan Bossart wrote:\n> I am posting this early to get thoughts on the general approach.  If\n> we\n> proceeded with this strategy, I'd probably create some generic\n> tooling that\n> each relevant step would provide a set of callback functions.\n\nThe documentation states:\n\n\"pg_dump -j uses multiple database connections; it connects to the\ndatabase once with the leader process and once again for each worker\njob.\"\n\nThat might need to be adjusted.\n\nHow much complexity do you avoid by using async instead of multiple\nprocesses?\n\nAlso, did you consider connecting once to each database and running\nmany queries? Most of those seem like just checks.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 16 May 2024 17:09:55 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Thu, May 16, 2024 at 05:09:55PM -0700, Jeff Davis wrote:\n> How much complexity do you avoid by using async instead of multiple\n> processes?\n\nIf we didn't want to use async, my guess is we'd want to use threads to\navoid complicated IPC. And if we followed pgbench's example for using\nthreads, it might end up at a comparable level of complexity, although I'd\nbet that threading would be the more complex of the two. 
It's hard to say\ndefinitively without coding it up both ways, which might be worth doing.\n\n> Also, did you consider connecting once to each database and running\n> many queries? Most of those seem like just checks.\n\nThis was the idea behind 347758b. It may be possible to do more along\nthese lines. IMO parallelizing will still be useful even if we do combine\nmore of the steps.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 May 2024 20:24:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "I figured I'd post what I have so far since this thread hasn't been updated\nin a while. The attached patches are still \"proof-of-concept grade,\" but\nthey are at least moving in the right direction (IMHO). The variable\nnaming is still not great, and they are woefully undercommented, among\nother things.\n\n0001 introduces a new API for registering callbacks and running them in\nparallel on all databases in the cluster. This new system manages a set of\n\"slots\" that follow a simple state machine to asynchronously establish a\nconnection and run the queries. It uses system() to wait for these\nasynchronous tasks to complete. Users of this API only need to provide two\ncallbacks: one to return the query that should be run on each database and\nanother to process the results of that query. If multiple queries are\nrequired for each database, users can provide multiple sets of callbacks.\n\nThe other patches change several of the existing tasks to use this new API.\nWith these patches applied, I see the following differences in the output\nof 'pg_upgrade | ts -i' for a cluster with 1k empty databases:\n\n\tWITHOUT PATCH\n\n\t00:00:19 Checking database user is the install user ok\n\t00:00:02 Checking for subscription state ok\n\t00:00:06 Adding \".old\" suffix to old global/pg_control ok\n\t00:00:04 Checking for extension updates ok\n\n\tWITH PATCHES (--jobs 1)\n\n\t00:00:10 Checking database user is the install user ok\n\t00:00:02 Checking for subscription state ok\n\t00:00:07 Adding \".old\" suffix to old global/pg_control ok\n\t00:00:05 Checking for extension updates ok\n\n\tWITH PATCHES (--jobs 4)\n\n\t00:00:06 Checking database user is the install user ok\n\t00:00:00 Checking for subscription state ok\n\t00:00:02 Adding \".old\" suffix to old global/pg_control ok\n\t00:00:01 Checking for extension updates ok\n\nNote that the \"Checking database user is the install user\" time also\nincludes the call to get_db_rel_and_slot_infos() on the old cluster as well\nas the call to get_loadable_libraries() on the old cluster. I believe the\nimprovement with the patches with just one job is due to the consolidation\nof the queries into one database connection (presently,\nget_db_rel_and_slot_infos() creates 3 connections per database for some\nupgrades). Similarly, the \"Adding \\\".old\\\" suffix to old\nglobal/pg_control\" time includes the call to get_db_rel_and_slot_infos() on\nthe new cluster.\n\nThere are several remaining places where we could use this new API to speed\nup upgrades. 
For example, I haven't attempted to use it for the data type\nchecks yet, and that tends to eat up a sizable chunk of time when there are\nmany databases.\n\nOn Thu, May 16, 2024 at 08:24:08PM -0500, Nathan Bossart wrote:\n> On Thu, May 16, 2024 at 05:09:55PM -0700, Jeff Davis wrote:\n>> Also, did you consider connecting once to each database and running\n>> many queries? Most of those seem like just checks.\n> \n> This was the idea behind 347758b. It may be possible to do more along\n> these lines. IMO parallelizing will still be useful even if we do combine\n> more of the steps.\n\nMy current thinking is that any possible further consolidation should\nhappen as part of a follow-up effort to parallelization. I'm cautiously\noptimistic that the parallelization work will make the consolidation easier\nsince it moves things to rigidly-defined callback functions.\n\nA separate piece of off-list feedback from Michael Paquier is that this new\nparallel system might be something we can teach the ParallelSlot code used\nby bin/scripts/ to do. I've yet to look too deeply into this, but I\nsuspect that it will be difficult to combine the two. For example, the\nParallelSlot system doesn't seem well-suited for the kind of\nrun-once-in-each-database tasks required by pg_upgrade, and the error\nhandling is probably little different, too. However, it's still worth a\ncloser look, and I'm interested in folks' opinions on the subject.\n\n-- \nnathan", "msg_date": "Mon, 1 Jul 2024 14:46:56 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Mon, Jul 1, 2024 at 3:47 PM Nathan Bossart <[email protected]> wrote:\n> 0001 introduces a new API for registering callbacks and running them in\n> parallel on all databases in the cluster. This new system manages a set of\n> \"slots\" that follow a simple state machine to asynchronously establish a\n> connection and run the queries. It uses system() to wait for these\n> asynchronous tasks to complete. Users of this API only need to provide two\n> callbacks: one to return the query that should be run on each database and\n> another to process the results of that query. If multiple queries are\n> required for each database, users can provide multiple sets of callbacks.\n\nI do really like the idea of using asynchronous communication here. It\nshould be significantly cheaper than using multiple processes or\nthreads, and maybe less code, too. But I'm confused about why you\nwould need or want to use system() to wait for asynchronous tasks to\ncomplete. Wouldn't it be something like select()?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 Jul 2024 15:58:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Mon, Jul 01, 2024 at 03:58:16PM -0400, Robert Haas wrote:\n> On Mon, Jul 1, 2024 at 3:47 PM Nathan Bossart <[email protected]> wrote:\n>> 0001 introduces a new API for registering callbacks and running them in\n>> parallel on all databases in the cluster. This new system manages a set of\n>> \"slots\" that follow a simple state machine to asynchronously establish a\n>> connection and run the queries. It uses system() to wait for these\n>> asynchronous tasks to complete. 
Users of this API only need to provide two\n>> callbacks: one to return the query that should be run on each database and\n>> another to process the results of that query. If multiple queries are\n>> required for each database, users can provide multiple sets of callbacks.\n> \n> I do really like the idea of using asynchronous communication here. It\n> should be significantly cheaper than using multiple processes or\n> threads, and maybe less code, too. But I'm confused about why you\n> would need or want to use system() to wait for asynchronous tasks to\n> complete. Wouldn't it be something like select()?\n\nWhoops, I meant to type \"select()\" there. Sorry for the confusion.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 1 Jul 2024 15:00:53 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "As I mentioned elsewhere [0], here's a first attempt at parallelizing the\ndata type checks. I was worried that I might have to refactor Daniel's\nwork in commit 347758b quite significantly, but I was able to avoid that by\nusing a set of generic callbacks and providing each task step an index to\nthe data_types_usage_checks array.\n\n[0] https://postgr.es/m/Zov5kHbEyDMuHJI_%40nathan\n\n-- \nnathan", "msg_date": "Mon, 8 Jul 2024 10:22:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "I finished parallelizing everything in pg_upgrade I could easily\nparallelize with the proposed async task API. There are a few remaining\nplaces where I could not easily convert to the new API for whatever reason.\nAFAICT those remaining places are either not showing up prominently in my\ntests, or they only affect versions that are unsupported or will soon be\nunsupported when v18 is released. Similarly, I noticed many of the checks\nfollow a very similar coding pattern, but most (if not all) only affect\nolder versions, so I'm unsure whether it's worth the trouble to try\nconsolidating them into one scan through all the databases.\n\nThe code is still very rough and nowhere near committable, but this at\nleast gets the patch set into the editing phase.\n\n-- \nnathan", "msg_date": "Mon, 8 Jul 2024 22:33:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "> On 9 Jul 2024, at 05:33, Nathan Bossart <[email protected]> wrote:\n\n> The code is still very rough and nowhere near committable, but this at\n> least gets the patch set into the editing phase.\n\nFirst reaction after a read-through is that this seems really cool, can't wait\nto see how much v18 pg_upgrade will be over v17. I will do more testing and\nreview once back from vacation, below are some comments from reading which is\nall I had time for at this point:\n\n+static void\n+conn_failure(PGconn *conn)\n+{\n+ pg_log(PG_REPORT, \"%s\", PQerrorMessage(conn));\n+ printf(_(\"Failure, exiting\\n\"));\n+ exit(1);\n+}\n\nAny particular reason this isn't using pg_fatal()?\n\n\n+static void\n+dispatch_query(const ClusterInfo *cluster, AsyncSlot *slot,\n ....\n+ pg_free(query);\n+}\n\nA minor point, perhaps fueled by me not having played around much with this\npatchset. It seems a bit odd that dispatch_query is responsible for freeing\nthe query from the get_query callback. 
I would have expected the output from\nAsyncTaskGetQueryCB to be stored in AsyncTask and released by async_task_free.\n\n\n+static void\n+sub_process(DbInfo *dbinfo, PGresult *res, void *arg)\n+{\n ....\n+ fprintf(state->script, \"The table sync state \\\"%s\\\" is not allowed for database:\\\"%s\\\" subscription:\\\"%s\\\" schema:\\\"%s\\\" relation:\\\"%s\\\"\\n\",\n+ PQgetvalue(res, i, 0),\n+ dbinfo->db_name,\n+ PQgetvalue(res, i, 1),\n\nWith the query being in a separate place in the code from the processing it\ntakes a bit of jumping around to resolve the columns in PQgetvalue calls like\nthis. Using PQfnumber() calls and descriptive names would make this easier. I\nknow this case is copying old behavior, but the function splits make it less\nuseful than before.\n\n\n+ char *query = pg_malloc(QUERY_ALLOC);\n\nShould we convert this to a PQExpBuffer?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 17 Jul 2024 23:16:59 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Wed, Jul 17, 2024 at 11:16:59PM +0200, Daniel Gustafsson wrote:\n> First reaction after a read-through is that this seems really cool, can't wait\n> to see how much v18 pg_upgrade will be over v17. I will do more testing and\n> review once back from vacation, below are some comments from reading which is\n> all I had time for at this point:\n\nThanks for taking a look!\n\n> +static void\n> +conn_failure(PGconn *conn)\n> +{\n> + pg_log(PG_REPORT, \"%s\", PQerrorMessage(conn));\n> + printf(_(\"Failure, exiting\\n\"));\n> + exit(1);\n> +}\n> \n> Any particular reason this isn't using pg_fatal()?\n\nIIRC this was to match the error in connectToServer(). We could probably\nmove to pg_fatal().\n\n> +static void\n> +dispatch_query(const ClusterInfo *cluster, AsyncSlot *slot,\n> ....\n> + pg_free(query);\n> +}\n> \n> A minor point, perhaps fueled by me not having played around much with this\n> patchset. It seems a bit odd that dispatch_query is responsible for freeing\n> the query from the get_query callback. I would have expected the output from\n> AsyncTaskGetQueryCB to be stored in AsyncTask and released by async_task_free.\n\nI don't see any problem with doing it the way you suggest.\n\nTangentially related, I waffled a bit on whether to require the query\ncallbacks to put the result in allocated memory. Some queries are the same\nno matter what, and some require customization at runtime. As you can see,\nI ended up just requiring allocated memory. That makes the code a tad\nsimpler, and I doubt the extra work is noticeable.\n\n> +static void\n> +sub_process(DbInfo *dbinfo, PGresult *res, void *arg)\n> +{\n> ....\n> + fprintf(state->script, \"The table sync state \\\"%s\\\" is not allowed for database:\\\"%s\\\" subscription:\\\"%s\\\" schema:\\\"%s\\\" relation:\\\"%s\\\"\\n\",\n> + PQgetvalue(res, i, 0),\n> + dbinfo->db_name,\n> + PQgetvalue(res, i, 1),\n> \n> With the query being in a separate place in the code from the processing it\n> takes a bit of jumping around to resolve the columns in PQgetvalue calls like\n> this. Using PQfnumber() calls and descriptive names would make this easier. I\n> know this case is copying old behavior, but the function splits make it less\n> useful than before.\n\nGood point.\n\n> + char *query = pg_malloc(QUERY_ALLOC);\n> \n> Should we convert this to a PQExpBuffer?\n\nSeems like a good idea. 
I think I was trying to change as few lines as\npossible for my proof-of-concept. :)\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 17 Jul 2024 16:32:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "> On 17 Jul 2024, at 23:32, Nathan Bossart <[email protected]> wrote:\n> On Wed, Jul 17, 2024 at 11:16:59PM +0200, Daniel Gustafsson wrote:\n\n>> +static void\n>> +dispatch_query(const ClusterInfo *cluster, AsyncSlot *slot,\n>> ....\n>> + pg_free(query);\n>> +}\n>> \n>> A minor point, perhaps fueled by me not having played around much with this\n>> patchset. It seems a bit odd that dispatch_query is responsible for freeing\n>> the query from the get_query callback. I would have expected the output from\n>> AsyncTaskGetQueryCB to be stored in AsyncTask and released by async_task_free.\n> \n> I don't see any problem with doing it the way you suggest.\n> \n> Tangentially related, I waffled a bit on whether to require the query\n> callbacks to put the result in allocated memory. Some queries are the same\n> no matter what, and some require customization at runtime. As you can see,\n> I ended up just requiring allocated memory. That makes the code a tad\n> simpler, and I doubt the extra work is noticeable.\n\nI absolutely agree with this.\n\n>> + char *query = pg_malloc(QUERY_ALLOC);\n>> \n>> Should we convert this to a PQExpBuffer?\n> \n> Seems like a good idea. I think I was trying to change as few lines as\n> possible for my proof-of-concept. :)\n\nYeah, that's a good approach, I just noticed it while reading the hunks. We\ncan do that separately from this patchset.\n\nIn order to trim down the size of the patchset I think going ahead with 0003\nindependently of this makes sense, it has value by itself.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 18 Jul 2024 09:57:23 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Thu, Jul 18, 2024 at 09:57:23AM +0200, Daniel Gustafsson wrote:\n>> On 17 Jul 2024, at 23:32, Nathan Bossart <[email protected]> wrote:\n>> On Wed, Jul 17, 2024 at 11:16:59PM +0200, Daniel Gustafsson wrote:\n> \n>>> +static void\n>>> +dispatch_query(const ClusterInfo *cluster, AsyncSlot *slot,\n>>> ....\n>>> + pg_free(query);\n>>> +}\n>>> \n>>> A minor point, perhaps fueled by me not having played around much with this\n>>> patchset. It seems a bit odd that dispatch_query is responsible for freeing\n>>> the query from the get_query callback. I would have expected the output from\n>>> AsyncTaskGetQueryCB to be stored in AsyncTask and released by async_task_free.\n>> \n>> I don't see any problem with doing it the way you suggest.\n\nActually, I do see a problem. If we do it this way, we'll have to store a\nstring per database somewhere, which seems unnecessary.\n\nHowever, while looking into this, I noticed that only one get_query\ncallback (get_db_subscription_count()) actually customizes the generated\nquery using information in the provided DbInfo. AFAICT we can do this\nparticular step without running a query in each database, as I mentioned\nelsewhere [0]. 
That should speed things up a bit and allow us to simplify\nthe AsyncTask code.\n\nWith that, if we are willing to assume that a given get_query callback will\ngenerate the same string for all databases (and I think we should), we can\nrun the callback once and save the string in the step for dispatch_query()\nto use. This would look more like what you suggested in the quoted text.\n\n[0] https://postgr.es/m/ZprQJv_TxccN3tkr%40nathan\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 19 Jul 2024 16:21:37 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Fri, Jul 19, 2024 at 04:21:37PM -0500, Nathan Bossart wrote:\n> However, while looking into this, I noticed that only one get_query\n> callback (get_db_subscription_count()) actually customizes the generated\n> query using information in the provided DbInfo. AFAICT we can do this\n> particular step without running a query in each database, as I mentioned\n> elsewhere [0]. That should speed things up a bit and allow us to simplify\n> the AsyncTask code.\n> \n> With that, if we are willing to assume that a given get_query callback will\n> generate the same string for all databases (and I think we should), we can\n> run the callback once and save the string in the step for dispatch_query()\n> to use. This would look more like what you suggested in the quoted text.\n\nHere is a new patch set. I've included the latest revision of the patch to\nfix get_db_subscription_count() from the other thread [0] as 0001 since I\nexpect that to be committed soon. I've also moved the patch that moves the\n\"live_check\" variable to \"user_opts\" to 0002 since I plan on committing\nthat sooner than later, too. Otherwise, I've tried to address all feedback\nprovided thus far.\n\n[0] https://commitfest.postgresql.org/49/5135/\n\n-- \nnathan", "msg_date": "Mon, 22 Jul 2024 15:07:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Mon, Jul 22, 2024 at 03:07:10PM -0500, Nathan Bossart wrote:\n> Here is a new patch set. I've included the latest revision of the patch to\n> fix get_db_subscription_count() from the other thread [0] as 0001 since I\n> expect that to be committed soon. I've also moved the patch that moves the\n> \"live_check\" variable to \"user_opts\" to 0002 since I plan on committing\n> that sooner than later, too. Otherwise, I've tried to address all feedback\n> provided thus far.\n\nHere is just the live_check patch. I rebased it, gave it a commit message,\nand fixed a silly mistake. Barring objections or cfbot indigestion, I plan\nto commit this within the next couple of days.\n\n-- \nnathan", "msg_date": "Wed, 24 Jul 2024 21:58:52 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "> On 25 Jul 2024, at 04:58, Nathan Bossart <[email protected]> wrote:\n> \n> On Mon, Jul 22, 2024 at 03:07:10PM -0500, Nathan Bossart wrote:\n>> Here is a new patch set. I've included the latest revision of the patch to\n>> fix get_db_subscription_count() from the other thread [0] as 0001 since I\n>> expect that to be committed soon. I've also moved the patch that moves the\n>> \"live_check\" variable to \"user_opts\" to 0002 since I plan on committing\n>> that sooner than later, too. 
Otherwise, I've tried to address all feedback\n>> provided thus far.\n> \n> Here is just the live_check patch. I rebased it, gave it a commit message,\n> and fixed a silly mistake. Barring objections or cfbot indigestion, I plan\n> to commit this within the next couple of days.\n\nLGTM\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 11:42:55 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Thu, Jul 25, 2024 at 11:42:55AM +0200, Daniel Gustafsson wrote:\n>> On 25 Jul 2024, at 04:58, Nathan Bossart <[email protected]> wrote:\n>> Here is just the live_check patch. I rebased it, gave it a commit message,\n>> and fixed a silly mistake. Barring objections or cfbot indigestion, I plan\n>> to commit this within the next couple of days.\n> \n> LGTM\n\nThanks, committed.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 26 Jul 2024 13:39:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "\nOn 22.07.2024 21:07, Nathan Bossart wrote:\n> On Fri, Jul 19, 2024 at 04:21:37PM -0500, Nathan Bossart wrote:\n>> However, while looking into this, I noticed that only one get_query\n>> callback (get_db_subscription_count()) actually customizes the generated\n>> query using information in the provided DbInfo. AFAICT we can do this\n>> particular step without running a query in each database, as I mentioned\n>> elsewhere [0]. That should speed things up a bit and allow us to simplify\n>> the AsyncTask code.\n>>\n>> With that, if we are willing to assume that a given get_query callback will\n>> generate the same string for all databases (and I think we should), we can\n>> run the callback once and save the string in the step for dispatch_query()\n>> to use. This would look more like what you suggested in the quoted text.\n> Here is a new patch set. I've included the latest revision of the patch to\n> fix get_db_subscription_count() from the other thread [0] as 0001 since I\n> expect that to be committed soon. I've also moved the patch that moves the\n> \"live_check\" variable to \"user_opts\" to 0002 since I plan on committing\n> that sooner than later, too. Otherwise, I've tried to address all feedback\n> provided thus far.\n>\n> [0] https://commitfest.postgresql.org/49/5135/\n>\nHi,\n\nI like your idea of parallelizing these checks with async libpq API, \nthanks for working on it. The patch doesn't apply cleanly on master \nanymore, but I've rebased locally and taken it for a quick spin with a \npg16 instance of 1000 empty databases. 
Didn't see any regressions with \n-j 1, there's some speedup with -j 8 (33 sec vs 8 sec for these checks).\n\nOne thing that I noticed that could be improved is we could start a new \nconnection right away after having run all query callbacks for the \ncurrent connection in process_slot, instead of just returning and \nestablishing the new connection only on the next iteration of the loop \nin async_task_run after potentially sleeping on select.\n\n+1 to Jeff's suggestion that perhaps we could reuse connections, but \nperhaps that's a separate story.\n\nRegards,\n\nIlya\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 22:55:33 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Wed, Jul 31, 2024 at 10:55:33PM +0100, Ilya Gladyshev wrote:\n> I like your idea of parallelizing these checks with async libpq API, thanks\n> for working on it. The patch doesn't apply cleanly on master anymore, but\n> I've rebased locally and taken it for a quick spin with a pg16 instance of\n> 1000 empty databases. Didn't see any regressions with -j 1, there's some\n> speedup with -j 8 (33 sec vs 8 sec for these checks).\n\nThanks for taking a look. I'm hoping to do a round of polishing before\nposting a rebased patch set soon.\n\n> One thing that I noticed that could be improved is we could start a new\n> connection right away after having run all query callbacks for the current\n> connection in process_slot, instead of just returning and establishing the\n> new connection only on the next iteration of the loop in async_task_run\n> after potentially sleeping on select.\n\nYeah, we could just recursively call process_slot() right after freeing the\nslot. That'd at least allow us to avoid the spinning behavior as we run\nout of databases to process, if nothing else.\n\n> +1 to Jeff's suggestion that perhaps we could reuse connections, but perhaps\n> that's a separate story.\n\nWhen I skimmed through the various tasks, I didn't see a ton of\nopportunities for further consolidation, or at least opportunities that\nwould help for upgrades from supported versions. The data type checks are\nalready consolidated, for example.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 1 Aug 2024 12:44:35 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Thu, Aug 01, 2024 at 12:44:35PM -0500, Nathan Bossart wrote:\n> On Wed, Jul 31, 2024 at 10:55:33PM +0100, Ilya Gladyshev wrote:\n>> I like your idea of parallelizing these checks with async libpq API, thanks\n>> for working on it. The patch doesn't apply cleanly on master anymore, but\n>> I've rebased locally and taken it for a quick spin with a pg16 instance of\n>> 1000 empty databases. Didn't see any regressions with -j 1, there's some\n>> speedup with -j 8 (33 sec vs 8 sec for these checks).\n> \n> Thanks for taking a look. I'm hoping to do a round of polishing before\n> posting a rebased patch set soon.\n> \n>> One thing that I noticed that could be improved is we could start a new\n>> connection right away after having run all query callbacks for the current\n>> connection in process_slot, instead of just returning and establishing the\n>> new connection only on the next iteration of the loop in async_task_run\n>> after potentially sleeping on select.\n> \n> Yeah, we could just recursively call process_slot() right after freeing the\n> slot. 
That'd at least allow us to avoid the spinning behavior as we run\n> out of databases to process, if nothing else.\n\nHere is a new patch set. Besides rebasing, I've added the recursive call\nto process_slot() mentioned in the quoted text, and I've added quite a bit\nof commentary to async.c.\n\n-- \nnathan", "msg_date": "Thu, 1 Aug 2024 16:41:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "\nOn 01.08.2024 22:41, Nathan Bossart wrote:\n> Here is a new patch set. Besides rebasing, I've added the recursive call\n> to process_slot() mentioned in the quoted text, and I've added quite a bit\n> of commentary to async.c.\nThat's much better now, thanks! Here's my code review, note that I \nhaven't tested the patches yet:\n\n+void\n+async_task_add_step(AsyncTask *task,\n+                    AsyncTaskGetQueryCB query_cb,\n+                    AsyncTaskProcessCB process_cb, bool free_result,\n+                    void *arg)\n\nIs there any reason to have query as a callback function instead of char \n*? From what I see right now, it doesn't give any extra flexibility, as \nthe query has to be static anyway (can't be customized on a per-database \nbasis) and will be created once before all the callbacks are run. While \npassing in char * makes the API simpler, excludes any potential error of \nmaking the query dependent on the current database and removes the \nunnecessary malloc/free of the static strings.\n\n+static void\n+dispatch_query(const ClusterInfo *cluster, AsyncSlot *slot,\n+               const AsyncTask *task)\n+{\n+    ...\n+    if (!PQsendQuery(slot->conn, cbs->query))\n+        conn_failure(slot->conn);\n+}\n\nThis will print \"connection failure: connection pointer is NULL\", which \nI don't think makes a lot of sense to the end user. I'd prefer something \nlike pg_fatal(\"failed to allocate a new connection\").\n\n      if (found)\n-        pg_fatal(\"Data type checks failed: %s\", report.data);\n+    {\n+        pg_fatal(\"Data type checks failed: %s\", \ndata_type_check_report.data);\n+        termPQExpBuffer(&data_type_check_report);\n+    }\n\n`found` should be removed and replaced with `data_type_check_failed`, as \nit's not set anymore. Also the termPQExpBuffer after pg_fatal looks \nunnecessary.\n\n+static bool *data_type_check_results;\n+static bool data_type_check_failed;\n+static PQExpBufferData data_type_check_report;\n\nIMO, it would be nicer to have these as a local state, that's passed in \nas an arg* to the AsyncTaskProcessCB, which aligns with how the other \nchecks do it.\n\n-- End of review --\n\nRegarding keeping the connections, the way I envisioned it entailed \npassing a list of connections from one check to the next one (or keeping \na global state with connections?). I didn't concretely look at the code \nto verify this, so it's just an abstract idea.\n\n\n\n\n", "msg_date": "Sun, 4 Aug 2024 19:19:57 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Sun, Aug 04, 2024 at 07:19:57PM +0100, Ilya Gladyshev wrote:\n> -- End of review --\n\nThanks for the review. I've attempted to address all your feedback in v8\nof the patch set. 
I think the names could still use some work, but I\nwanted to get the main structure in place before trying to fix them.\n\n> Regarding keeping the connections, the way I envisioned it entailed passing\n> a list of connections from one check to the next one (or keeping a global\n> state with connections?). I didn't concretely look at the code to verify\n> this, so it's just an abstract idea.\n\nMy main concern with doing something like that is it could require\nmaintaining a huge number of connections when there are many databases.\nGUCs like max_connections would need to be set accordingly. I'm a little\ndubious that the gains would be enough to justify it.\n\n-- \nnathan", "msg_date": "Tue, 6 Aug 2024 14:20:14 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Tue, Aug 6, 2024 at 3:20 PM Nathan Bossart <[email protected]>\nwrote:\n\n> On Sun, Aug 04, 2024 at 07:19:57PM +0100, Ilya Gladyshev wrote:\n> > -- End of review --\n>\n> Thanks for the review. I've attempted to address all your feedback in v8\n> of the patch set. I think the names could still use some work, but I\n> wanted to get the main structure in place before trying to fix them.\n>\n\nI think the underlying mechanism is basically solid, but I have one\nquestion: isn't this the ideal case for using libpq pipelining? That would\nallow subsequent tasks to launch while the main loop slowly gets around to\nclearing off completed tasks on some other connection.\n\nOn Tue, Aug 6, 2024 at 3:20 PM Nathan Bossart <[email protected]> wrote:On Sun, Aug 04, 2024 at 07:19:57PM +0100, Ilya Gladyshev wrote:\n> -- End of review --\n\nThanks for the review.  I've attempted to address all your feedback in v8\nof the patch set.  I think the names could still use some work, but I\nwanted to get the main structure in place before trying to fix them.I think the underlying mechanism is basically solid, but I have one question: isn't this the ideal case for using libpq pipelining? That would allow subsequent tasks to launch while the main loop slowly gets around to clearing off completed tasks on some other connection.", "msg_date": "Thu, 8 Aug 2024 18:18:38 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Thu, Aug 08, 2024 at 06:18:38PM -0400, Corey Huinker wrote:\n> I think the underlying mechanism is basically solid, but I have one\n> question: isn't this the ideal case for using libpq pipelining? That would\n> allow subsequent tasks to launch while the main loop slowly gets around to\n> clearing off completed tasks on some other connection.\n\nI'll admit I hadn't really considered pipelining, but I'm tempted to say\nthat it's probably not worth the complexity. Not only do most of the tasks\nhave only one step, but even tasks like the data types check are unlikely\nto require more than a few queries for upgrades from supported versions.\nFurthermore, most of the callbacks should do almost nothing for a given\nupgrade, and since pg_upgrade runs on the server, client/server round-trip\ntime should be pretty low.\n\nPerhaps pipelining would make more sense if we consolidated the tasks a bit\nbetter, but when I last looked into that, I didn't see a ton of great\nopportunities that would help anything except for upgrades from really old\nversions. 
Even then, I'm not sure if pipelining is worth it.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 9 Aug 2024 09:43:59 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": ">\n> I'll admit I hadn't really considered pipelining, but I'm tempted to say\n> that it's probably not worth the complexity. Not only do most of the tasks\n> have only one step, but even tasks like the data types check are unlikely\n> to require more than a few queries for upgrades from supported versions.\n>\n\nCan you point me to a complex multi-step task that you think wouldn't work\nfor pipelining? My skimming of the other patches all seemed to be one query\nwith one result set to be processed by one callback.\n\nFurthermore, most of the callbacks should do almost nothing for a given\n> upgrade, and since pg_upgrade runs on the server, client/server round-trip\n> time should be pretty low.\n>\n\nTo my mind, that makes pipelining make more sense, you throw out N queries,\nmost of which are trivial, and by the time you cycle back around and start\ndigesting result sets via callbacks, more of the queries have finished\nbecause they were waiting on the query ahead of them in the pipeline, not\nwaiting on a callback to finish consuming its assigned result set and then\nlaunching the next task query.\n\n\n>\n> Perhaps pipelining would make more sense if we consolidated the tasks a bit\n> better, but when I last looked into that, I didn't see a ton of great\n> opportunities that would help anything except for upgrades from really old\n> versions. Even then, I'm not sure if pipelining is worth it.\n>\n\nI think you'd want to do the opposite of consolidating the tasks. If\nanything, you'd want to break them down in known single-query operations,\nand if the callback function for one of them happens to queue up a\nsubsequent query (with subsequent callback) then so be it.
", "msg_date": "Fri, 9 Aug 2024 16:06:16 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Fri, Aug 09, 2024 at 04:06:16PM -0400, Corey Huinker wrote:\n>> I'll admit I hadn't really considered pipelining, but I'm tempted to say\n>> that it's probably not worth the complexity. Not only do most of the tasks\n>> have only one step, but even tasks like the data types check are unlikely\n>> to require more than a few queries for upgrades from supported versions.\n> \n> Can you point me to a complex multi-step task that you think wouldn't work\n> for pipelining? My skimming of the other patches all seemed to be one query\n> with one result set to be processed by one callback.\n\nI think it would work fine. I'm just not sure it's worth it, especially\nfor tasks that run one exactly one query in each connection.\n\n>> Furthermore, most of the callbacks should do almost nothing for a given\n>> upgrade, and since pg_upgrade runs on the server, client/server round-trip\n>> time should be pretty low.\n> \n> To my mind, that makes pipelining make more sense, you throw out N queries,\n> most of which are trivial, and by the time you cycle back around and start\n> digesting result sets via callbacks, more of the queries have finished\n> because they were waiting on the query ahead of them in the pipeline, not\n> waiting on a callback to finish consuming its assigned result set and then\n> launching the next task query.\n\nMy assumption is that the \"waiting for a callback before launching the next\nquery\" time will typically be pretty short in practice. I could try\nmeasuring it...\n\n>> Perhaps pipelining would make more sense if we consolidated the tasks a bit\n>> better, but when I last looked into that, I didn't see a ton of great\n>> opportunities that would help anything except for upgrades from really old\n>> versions. Even then, I'm not sure if pipelining is worth it.\n> \n> I think you'd want to do the opposite of consolidating the tasks. If\n> anything, you'd want to break them down in known single-query operations,\n> and if the callback function for one of them happens to queue up a\n> subsequent query (with subsequent callback) then so be it.\n\nBy \"consolidating,\" I mean combining tasks into fewer tasks with additional\nsteps. This would allow us to reuse connections instead of creating N\nconnections for every single query. 
If we used a task per query, I'd\nexpect pipelining to provide zero benefit.\n\n-- \nnathan\n\n\n", "msg_date": "Sat, 10 Aug 2024 10:17:27 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Sat, Aug 10, 2024 at 10:17:27AM -0500, Nathan Bossart wrote:\n> On Fri, Aug 09, 2024 at 04:06:16PM -0400, Corey Huinker wrote:\n>>> Furthermore, most of the callbacks should do almost nothing for a given\n>>> upgrade, and since pg_upgrade runs on the server, client/server round-trip\n>>> time should be pretty low.\n>> \n>> To my mind, that makes pipelining make more sense, you throw out N queries,\n>> most of which are trivial, and by the time you cycle back around and start\n>> digesting result sets via callbacks, more of the queries have finished\n>> because they were waiting on the query ahead of them in the pipeline, not\n>> waiting on a callback to finish consuming its assigned result set and then\n>> launching the next task query.\n> \n> My assumption is that the \"waiting for a callback before launching the next\n> query\" time will typically be pretty short in practice. I could try\n> measuring it...\n\nAnother option might be to combine all the queries for a task into a single\nstring and then send that in one PQsendQuery() call. That may be a simpler\nway to eliminate the time between queries.\n\n-- \nnathan\n\n\n", "msg_date": "Sat, 10 Aug 2024 10:35:46 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Sat, Aug 10, 2024 at 10:35:46AM -0500, Nathan Bossart wrote:\n> Another option might be to combine all the queries for a task into a single\n> string and then send that in one PQsendQuery() call. That may be a simpler\n> way to eliminate the time between queries.\n\nI tried this out and didn't see any difference in my tests. However, the\nidea seems sound, and I could remove ~40 lines of code by doing this and by\nmaking the search_path query an implicit first step (instead of its own\nstate). So, here's a v9 of the patch set with those changes.\n\n-- \nnathan", "msg_date": "Thu, 15 Aug 2024 11:03:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "I spent some time cleaning up names, comments, etc. Barring additional\nfeedback, I'm planning to commit this stuff in the September commitfest so\nthat it has plenty of time to bake in the buildfarm.\n\n-- \nnathan", "msg_date": "Wed, 28 Aug 2024 16:46:40 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "LGTM in general, but here are some final nitpicks:\n\n+\tif (maxFd != 0)\n+\t\t(void) select(maxFd + 1, &input_mask, &output_mask, &except_mask, NULL);\n\nIt’s a good idea to check for the return value of select, in case it returns any errors.\n\n+\t\t\tdbs_complete++;\n+\t\t\t(void) PQgetResult(slot->conn);\n+\t\t\tPQfinish(slot->conn);\n\nPerhaps it’s rather for me to understand, what does PQgetResult call do here?\n\n+\t\t\t/* Check for connection failure. 
*/\n+\t\t\tif (PQconnectPoll(slot->conn) == PGRES_POLLING_FAILED)\n+\t\t\t\tpg_fatal(\"connection failure: %s\", PQerrorMessage(slot->conn));\n+\n+\t\t\t/* Check whether the connection is still establishing. */\n+\t\t\tif (PQconnectPoll(slot->conn) != PGRES_POLLING_OK)\n+\t\t\t\treturn;\n\nAre the two consecutive calls of PQconnectPoll intended here? Seems a bit wasteful, but maybe that’s ok.\n\nWe should probably mention this change in the docs as well, I found these two places that I think can be improved:\n\ndiff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml\nindex 9877f2f01c..ad7aa33f07 100644\n--- a/doc/src/sgml/ref/pgupgrade.sgml\n+++ b/doc/src/sgml/ref/pgupgrade.sgml\n@@ -118,7 +118,7 @@ PostgreSQL documentation\n <varlistentry>\n <term><option>-j <replaceable class=\"parameter\">njobs</replaceable></option></term>\n <term><option>--jobs=<replaceable class=\"parameter\">njobs</replaceable></option></term>\n- <listitem><para>number of simultaneous processes or threads to use\n+ <listitem><para>number of simultaneous processes, threads or connections to use\n </para></listitem>\n </varlistentry>\n \n@@ -587,8 +587,9 @@ NET STOP postgresql-&majorversion;\n \n <para>\n The <option>--jobs</option> option allows multiple CPU cores to be used\n- for copying/linking of files and to dump and restore database schemas\n- in parallel; a good place to start is the maximum of the number of\n+ for copying/linking of files, to dump and restore database schemas in\n+ parallel and to use multiple simultaneous connections for upgrade checks;\n+ a good place to start is the maximum of the number of\n CPU cores and tablespaces. This option can dramatically reduce the\n time to upgrade a multi-database server running on a multiprocessor\n machine.\n-- \n\n\n> 28 авг. 2024 г., в 22:46, Nathan Bossart <[email protected]> написал(а):\n> \n> I spent some time cleaning up names, comments, etc. Barring additional\n> feedback, I'm planning to commit this stuff in the September commitfest so\n> that it has plenty of time to bake in the buildfarm.\n> \n> -- \n> nathan\n> <v10-0001-Introduce-framework-for-parallelizing-various-pg.patch><v10-0002-Use-pg_upgrade-s-new-parallel-framework-for-subs.patch><v10-0003-Use-pg_upgrade-s-new-parallel-framework-to-get-r.patch><v10-0004-Use-pg_upgrade-s-new-parallel-framework-to-get-l.patch><v10-0005-Use-pg_upgrade-s-new-parallel-framework-for-exte.patch><v10-0006-Use-pg_upgrade-s-new-parallel-framework-for-data.patch><v10-0007-Use-pg_upgrade-s-new-parallel-framework-for-isn-.patch><v10-0008-Use-pg_upgrade-s-new-parallel-framework-for-post.patch><v10-0009-Use-pg_upgrade-s-new-parallel-framework-for-poly.patch><v10-0010-Use-pg_upgrade-s-new-parallel-framework-for-WITH.patch><v10-0011-Use-pg_upgrade-s-parallel-framework-for-encoding.patch>\n\n\nLGTM in general, but here are some final nitpicks:+ if (maxFd != 0)+ (void) select(maxFd + 1, &input_mask, &output_mask, &except_mask, NULL);It’s a good idea to check for the return value of select, in case it returns any errors.+ dbs_complete++;+ (void) PQgetResult(slot->conn);+ PQfinish(slot->conn);Perhaps it’s rather for me to understand, what does PQgetResult call do here?+ /* Check for connection failure. */+ if (PQconnectPoll(slot->conn) == PGRES_POLLING_FAILED)+ pg_fatal(\"connection failure: %s\", PQerrorMessage(slot->conn));++ /* Check whether the connection is still establishing. */+ if (PQconnectPoll(slot->conn) != PGRES_POLLING_OK)+ return;Are the two consecutive calls of PQconnectPoll intended here? 
Seems a bit wasteful, but maybe that’s ok.We should probably mention this change in the docs as well, I found these two places that I think can be improved:diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgmlindex 9877f2f01c..ad7aa33f07 100644--- a/doc/src/sgml/ref/pgupgrade.sgml+++ b/doc/src/sgml/ref/pgupgrade.sgml@@ -118,7 +118,7 @@ PostgreSQL documentation      <varlistentry>       <term><option>-j <replaceable class=\"parameter\">njobs</replaceable></option></term>       <term><option>--jobs=<replaceable class=\"parameter\">njobs</replaceable></option></term>-      <listitem><para>number of simultaneous processes or threads to use+      <listitem><para>number of simultaneous processes, threads or connections to use       </para></listitem>      </varlistentry> @@ -587,8 +587,9 @@ NET STOP postgresql-&majorversion;      <para>      The <option>--jobs</option> option allows multiple CPU cores to be used-     for copying/linking of files and to dump and restore database schemas-     in parallel;  a good place to start is the maximum of the number of+     for copying/linking of files, to dump and restore database schemas in+     parallel and to use multiple simultaneous connections for upgrade checks;+      a good place to start is the maximum of the number of      CPU cores and tablespaces.  This option can dramatically reduce the      time to upgrade a multi-database server running on a multiprocessor      machine.-- 28 авг. 2024 г., в 22:46, Nathan Bossart <[email protected]> написал(а):I spent some time cleaning up names, comments, etc.  Barring additionalfeedback, I'm planning to commit this stuff in the September commitfest sothat it has plenty of time to bake in the buildfarm.-- nathan<v10-0001-Introduce-framework-for-parallelizing-various-pg.patch><v10-0002-Use-pg_upgrade-s-new-parallel-framework-for-subs.patch><v10-0003-Use-pg_upgrade-s-new-parallel-framework-to-get-r.patch><v10-0004-Use-pg_upgrade-s-new-parallel-framework-to-get-l.patch><v10-0005-Use-pg_upgrade-s-new-parallel-framework-for-exte.patch><v10-0006-Use-pg_upgrade-s-new-parallel-framework-for-data.patch><v10-0007-Use-pg_upgrade-s-new-parallel-framework-for-isn-.patch><v10-0008-Use-pg_upgrade-s-new-parallel-framework-for-post.patch><v10-0009-Use-pg_upgrade-s-new-parallel-framework-for-poly.patch><v10-0010-Use-pg_upgrade-s-new-parallel-framework-for-WITH.patch><v10-0011-Use-pg_upgrade-s-parallel-framework-for-encoding.patch>", "msg_date": "Sat, 31 Aug 2024 01:18:10 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Sat, Aug 31, 2024 at 01:18:10AM +0100, Ilya Gladyshev wrote:\n> LGTM in general, but here are some final nitpicks:\n\nThanks for reviewing.\n\n> +\tif (maxFd != 0)\n> +\t\t(void) select(maxFd + 1, &input_mask, &output_mask, &except_mask, NULL);\n> \n> It�s a good idea to check for the return value of select, in case it\n> returns any errors.\n\nDone.\n\n> +\t\t\tdbs_complete++;\n> +\t\t\t(void) PQgetResult(slot->conn);\n> +\t\t\tPQfinish(slot->conn);\n> \n> Perhaps it�s rather for me to understand, what does PQgetResult call do\n> here?\n\nI believe I was trying to follow the guidance that you should always call\nPQgetResult() until it returns NULL, but looking again, I don't see any\nneed to call it since we free the connection immediately afterwards.\n\n> +\t\t\t/* Check for connection failure. 
*/\n> +\t\t\tif (PQconnectPoll(slot->conn) == PGRES_POLLING_FAILED)\n> +\t\t\t\tpg_fatal(\"connection failure: %s\", PQerrorMessage(slot->conn));\n> +\n> +\t\t\t/* Check whether the connection is still establishing. */\n> +\t\t\tif (PQconnectPoll(slot->conn) != PGRES_POLLING_OK)\n> +\t\t\t\treturn;\n> \n> Are the two consecutive calls of PQconnectPoll intended here? Seems a bit\n> wasteful, but maybe that�s ok.\n\nI think we can actually just use PQstatus() here. But furthermore, I think\nthe way I was initiating connections was completely bogus. IIUC before\ncalling PQconnectPoll() the first time, we should wait for a write\nindicator from select(), and then we should only call PQconnectPoll() after\nsubsequent indicators from select(). After spending quite a bit of time\nstaring at the PQconnectPoll() code, I'm quite surprised I haven't seen any\nproblems thus far. If I had to guess, I'd say that either PQconnectPoll()\nis more resilient than I think it is, or I've just gotten lucky because\npg_upgrade establishes connections quickly.\n\nAnyway, to fix this, I've added some more fields to the slot struct to\nkeep track of the information we need to properly establish connections,\nand we now pay careful attention to the return value of select() so that we\nknow which slots are ready for processing. This seemed like a nice little\noptimization independent of fixing connection establishment. I was worried\nthis was going to require a lot more code, but I think it only added ~50\nlines or so.\n\n> We should probably mention this change in the docs as well, I found these\n> two places that I think can be improved:\n\nI've adjusted the documentation in v11.\n\n-- \nnathan", "msg_date": "Sun, 1 Sep 2024 16:05:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "In v12, I've moved the \"queries\" PQExpBuffer up to the UpgradeTask struct\nso that we don't need to rebuild it for every database. I think this patch\nset is in reasonable state, and I still plan to commit it this month.\n\n-- \nnathan", "msg_date": "Tue, 3 Sep 2024 13:39:25 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On 01.09.2024 22:05, Nathan Bossart wrote:\n> I think we can actually just use PQstatus() here. But furthermore, I think\n> the way I was initiating connections was completely bogus. IIUC before\n> calling PQconnectPoll() the first time, we should wait for a write\n> indicator from select(), and then we should only call PQconnectPoll() after\n> subsequent indicators from select(). After spending quite a bit of time\n> staring at the PQconnectPoll() code, I'm quite surprised I haven't seen any\n> problems thus far. If I had to guess, I'd say that either PQconnectPoll()\n> is more resilient than I think it is, or I've just gotten lucky because\n> pg_upgrade establishes connections quickly.\nGood catch, I didn't look so closely at this.\n> Anyway, to fix this, I've added some more fields to the slot struct to\n> keep track of the information we need to properly establish connections,\n> and we now pay careful attention to the return value of select() so that we\n> know which slots are ready for processing. This seemed like a nice little\n> optimization independent of fixing connection establishment. 
I was worried\n> this was going to require a lot more code, but I think it only added ~50\n> lines or so.\nThe fix looks right to me, but I got confused by the skip_wait and this \n`if`:\n\n+            if (PQstatus(slot->conn) != CONNECTION_OK)\n+                return;\n\nThis branch checks connection status that hasn't been refreshed after \nthe select. When we go back to wait_slots after this, PQconnectPoll will \nrefresh the connection status and run select with skip_wait=true, I \nbelieve, we could simplify this by moving the PQconnectPoll back to the \nprocess_slots, so that we can process connection right after polling, if \nit's ready. Something like this:\n\ndiff --git a/src/bin/pg_upgrade/task.c b/src/bin/pg_upgrade/task.c\nindex 3618dc08ff..73e987febf 100644\n--- a/src/bin/pg_upgrade/task.c\n+++ b/src/bin/pg_upgrade/task.c\n@@ -237,6 +237,8 @@ process_query_result(const ClusterInfo *cluster, \nUpgradeTaskSlot *slot,\n  static void\n  process_slot(const ClusterInfo *cluster, UpgradeTaskSlot *slot, const \nUpgradeTask *task)\n  {\n+    PostgresPollingStatusType status;\n+\n      if (!slot->ready)\n          return;\n\n@@ -260,26 +262,26 @@ process_slot(const ClusterInfo *cluster, \nUpgradeTaskSlot *slot, const UpgradeTas\n              start_conn(cluster, slot);\n\n              return;\n-\n          case CONNECTING:\n\n-            /* Check for connection failure. */\n-            if (PQstatus(slot->conn) == CONNECTION_BAD)\n-                pg_fatal(\"connection failure: %s\", \nPQerrorMessage(slot->conn));\n-\n-            /* Check whether the connection is still establishing. */\n-            if (PQstatus(slot->conn) != CONNECTION_OK)\n-                return;\n-\n-            /*\n-             * Move on to running/processing the queries in the task.\n-             */\n-            slot->state = RUNNING_QUERIES;\n-            if (!PQsendQuery(slot->conn, task->queries->data))\n+            status = PQconnectPoll(slot->conn);\n+            if (status == PGRES_POLLING_READING)\n+                slot->select_mode = true;\n+            else if (status == PGRES_POLLING_WRITING)\n+                slot->select_mode = false;\n+            else if (status == PGRES_POLLING_FAILED)\n                  pg_fatal(\"connection failure: %s\", \nPQerrorMessage(slot->conn));\n+            else\n+            {\n+                /*\n+                 * Move on to running/processing the queries in the task.\n+                 */\n+                slot->state = RUNNING_QUERIES;\n+                if (!PQsendQuery(slot->conn, task->queries->data))\n+                    pg_fatal(\"connection failure: %s\", \nPQerrorMessage(slot->conn));\n\n+            }\n              return;\n-\n          case RUNNING_QUERIES:\n\n              /*\n@@ -370,8 +372,6 @@ wait_on_slots(UpgradeTaskSlot *slots, int numslots)\n\n      for (int i = 0; i < numslots; i++)\n      {\n-        PostgresPollingStatusType status;\n-\n          switch (slots[i].state)\n          {\n              case FREE:\n@@ -386,33 +386,7 @@ wait_on_slots(UpgradeTaskSlot *slots, int numslots)\n                  continue;\n\n              case CONNECTING:\n-\n-                /*\n-                 * Don't call PQconnectPoll() again for this slot until\n-                 * select() tells us something is ready.  
Be sure to \nuse the\n-                 * previous poll mode in this case.\n-                 */\n-                if (!slots[i].ready)\n-                    break;\n-\n-                /*\n-                 * If we are waiting for the connection to establish, \nchoose\n-                 * whether to wait for reading or for writing on the \nsocket as\n-                 * appropriate.  If neither apply, mark the slot as \nready and\n-                 * skip waiting so that it is handled ASAP (we assume this\n-                 * means the connection is either bad or fully ready).\n-                 */\n-                status = PQconnectPoll(slots[i].conn);\n-                if (status == PGRES_POLLING_READING)\n-                    slots[i].select_mode = true;\n-                else if (status == PGRES_POLLING_WRITING)\n-                    slots[i].select_mode = false;\n-                else\n-                {\n-                    slots[i].ready = true;\n-                    skip_wait = true;\n-                    continue;\n-                }\n+                /* All the slot metadata was already setup in \nprocess_slots() */\n\n                  break;\n\n-- \n2.43.0\n\nskip_wait can be removed in this case as well.\n\nThis is up to you, I think the v12 is good and commitable in any case.\n\n\n
", "msg_date": "Wed, 4 Sep 2024 00:28:23 +0100", "msg_from": "Ilya Gladyshev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Wed, Sep 04, 2024 at 12:28:23AM +0100, Ilya Gladyshev wrote:\n> The fix looks right to me, but I got confused by the skip_wait and this\n> `if`:\n> \n> +            if (PQstatus(slot->conn) != CONNECTION_OK)\n> +                return;\n> \n> This branch checks connection status that hasn't been refreshed after the\n> select. When we go back to wait_slots after this, PQconnectPoll will refresh\n> the connection status and run select with skip_wait=true, I believe, we\n> could simplify this by moving the PQconnectPoll back to the process_slots,\n> so that we can process connection right after polling, if it's ready.\n\nAh, yes, that's a nice way to simplify things. I ended up just making it\nprocess_slot()'s responsibility to set the correct select_mode, at which\npoint the logic in the switch statement in wait_on_slots() is sparse enough\nthat it seems better to convert it to a couple of short \"if\" statements.\n\n-- \nnathan", "msg_date": "Tue, 3 Sep 2024 20:41:49 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "I've read and tested through the latest version of this patchset and I think\nit's ready to go in. The one concern I have is that tasks can exit(1) on libpq\nerrors tearing down perfectly functional connections without graceful shutdown.\nLonger term I think it would make sense to add similar exit handler callbacks\nto the ones in pg_dump for graceful cleanup of connections. However, in order\nto keep goalposts in clear view I don't think this patch need to have it, but\nit would be good to consider once in.\n\nSpotted a small typo in the comments:\n\n+\t * nothing to process. This is primarily intended for the inital step in\ns/inital/initial/\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 5 Sep 2024 13:32:34 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Thu, Sep 05, 2024 at 01:32:34PM +0200, Daniel Gustafsson wrote:\n> I've read and tested through the latest version of this patchset and I think\n> it's ready to go in.\n\nThanks for reviewing. 
I'm aiming to commit it later this week.\n\n> The one concern I have is that tasks can exit(1) on libpq\n> errors tearing down perfectly functional connections without graceful shutdown.\n> Longer term I think it would make sense to add similar exit handler callbacks\n> to the ones in pg_dump for graceful cleanup of connections. However, in order\n> to keep goalposts in clear view I don't think this patch need to have it, but\n> it would be good to consider once in.\n\nThis did cross my mind. I haven't noticed any problems in my testing, and\nit looks like there are several existing places in pg_upgrade that call\npg_fatal() with open connections, so I'm inclined to agree that this is a\nnice follow-up task that needn't hold up this patch set.\n\n> Spotted a small typo in the comments:\n> \n> +\t * nothing to process. This is primarily intended for the inital step in\n> s/inital/initial/\n\nWill fix.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 9 Sep 2024 14:17:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "> On 9 Sep 2024, at 21:17, Nathan Bossart <[email protected]> wrote:\n> \n> On Thu, Sep 05, 2024 at 01:32:34PM +0200, Daniel Gustafsson wrote:\n>> I've read and tested through the latest version of this patchset and I think\n>> it's ready to go in.\n> \n> Thanks for reviewing. I'm aiming to commit it later this week.\n\n+1. Looking forward to seeing what all the pg_dump/pg_upgrade changes amount\nto in speed improvement when combined.\n\n>> The one concern I have is that tasks can exit(1) on libpq\n>> errors tearing down perfectly functional connections without graceful shutdown.\n>> Longer term I think it would make sense to add similar exit handler callbacks\n>> to the ones in pg_dump for graceful cleanup of connections. However, in order\n>> to keep goalposts in clear view I don't think this patch need to have it, but\n>> it would be good to consider once in.\n> \n> This did cross my mind. I haven't noticed any problems in my testing, and\n> it looks like there are several existing places in pg_upgrade that call\n> pg_fatal() with open connections, so I'm inclined to agree that this is a\n> nice follow-up task that needn't hold up this patch set.\n\nIt could perhaps be a good introductory task for a new contributor who want a\nfairly confined project to hack on.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 9 Sep 2024 23:20:28 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" }, { "msg_contents": "On Mon, Sep 09, 2024 at 11:20:28PM +0200, Daniel Gustafsson wrote:\n>> On 9 Sep 2024, at 21:17, Nathan Bossart <[email protected]> wrote:\n>> \n>> On Thu, Sep 05, 2024 at 01:32:34PM +0200, Daniel Gustafsson wrote:\n>>> I've read and tested through the latest version of this patchset and I think\n>>> it's ready to go in.\n>> \n>> Thanks for reviewing. I'm aiming to commit it later this week.\n> \n> +1. Looking forward to seeing what all the pg_dump/pg_upgrade changes amount\n> to in speed improvement when combined.\n\nCommitted.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 16 Sep 2024 16:16:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing pg_upgrade's once-in-each-database steps" } ]
[ { "msg_contents": "Hackers,\n\nThis is greatly simplified implementation of the patch proposed in [1] \nand hopefully it addresses the concerns expressed there. Since the \nimplementation is quite different it seemed like a new thread was \nappropriate, especially since the old thread title would be very \nmisleading regarding the new functionality.\n\nThe basic idea is to harden recovery by returning a copy of pg_control \nfrom pg_backup_stop() that has a flag set to prevent recovery if the \nbackup_label file is missing. Instead of backup software copying \npg_control from PGDATA, it stores an updated version that is returned \nfrom pg_backup_stop(). This is better for the following reasons:\n\n* The user can no longer remove backup_label and get what looks like a \nsuccessful recovery (while almost certainly causing corruption). If \nbackup_label is removed the cluster will not start. The user may try \npg_resetwal, but that tool makes it pretty clear that corruption will \nresult from its use.\n\n* We don't need to worry about backup software seeing a torn copy of \npg_control, since Postgres can safely read it out of memory and provide \na valid copy via pg_backup_stop(). This solves torn reads without \nneeding to write pg_control via a temp file, which may affect \nperformance on a standby.\n\n* For backup from standby, we no longer need to instruct the backup \nsoftware to copy pg_control last. In fact the backup software should not \ncopy pg_control from PGDATA at all.\n\nThese changes have no impact on current backup software and they are \nfree to use the pg_control available from pg_stop_backup() or continue \nto use pg_control from PGDATA. Of course they will miss the benefits of \ngetting a consistent copy of pg_control and the backup_label checking, \nbut will be no worse off than before.\n\nI'll register this in the July CF.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/[email protected]", "msg_date": "Fri, 17 May 2024 12:46:49 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Return pg_control from pg_backup_stop()." } ]
[ { "msg_contents": "Hi Hackers,\n\nI have been playing with PG on the Windows platform recently. An annoying\nthing I faced is that a lot of Visual Studio's temp files kept appearing in\ngit changed files. Therefore, I am submitting this very trivial patch to\nignore these temp files.\n\nLooking forward to the PG guru's guidance!\n\nRegards...\n\nYasir Hussain\nPrincipal Software Engineer\nBitnine Global Inc.", "msg_date": "Fri, 17 May 2024 11:09:09 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Ignore Visual Studio's Temp Files While Working with PG on Windows" }, { "msg_contents": "Hi Hackers,\n\nI have been playing with PG on the Windows platform recently. An annoying\nthing I faced is that a lot of Visual Studio's temp files kept appearing in\ngit changed files. Therefore, I am submitting this very trivial patch to\nignore these temp files.\n\nLooking forward to the PG guru's guidance!\n\nRegards...\n\nYasir Hussain\nPrincipal Software Engineer\nBitnine Global Inc.", "msg_date": "Fri, 17 May 2024 11:17:31 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On 17.05.24 08:09, Yasir wrote:\n> I have been playing with PG on the Windows platform recently. An \n> annoying thing I faced is that a lot of Visual Studio's temp files kept \n> appearing in git changed files. Therefore, I am submitting this very \n> trivial patch to ignore these temp files.\n\nOur general recommendation is that you put such things into your \npersonal global git ignore file.\n\nFor example, I have in ~/.gitconfig\n\n[core]\n excludesFile = ~/.gitexcludes\n\nand then in ~/.gitexcludes I have various ignores that are specific to \nmy local tooling.\n\nThat way we don't have to maintain ignore lists for all the tools in the \nworld in the PostgreSQL source tree.\n\n\n\n", "msg_date": "Fri, 17 May 2024 08:34:46 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "Nice approach! Thankyou Peter for the guidance.\n\nRegards...\n\nYasir\n\nOn Fri, May 17, 2024 at 11:34 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 17.05.24 08:09, Yasir wrote:\n> > I have been playing with PG on the Windows platform recently. An\n> > annoying thing I faced is that a lot of Visual Studio's temp files kept\n> > appearing in git changed files. Therefore, I am submitting this very\n> > trivial patch to ignore these temp files.\n>\n> Our general recommendation is that you put such things into your\n> personal global git ignore file.\n>\n> For example, I have in ~/.gitconfig\n>\n> [core]\n> excludesFile = ~/.gitexcludes\n>\n> and then in ~/.gitexcludes I have various ignores that are specific to\n> my local tooling.\n>\n> That way we don't have to maintain ignore lists for all the tools in the\n> world in the PostgreSQL source tree.\n>\n>\n\nNice approach! Thankyou Peter for the guidance. Regards...YasirOn Fri, May 17, 2024 at 11:34 AM Peter Eisentraut <[email protected]> wrote:On 17.05.24 08:09, Yasir wrote:\n> I have been playing with PG on the Windows platform recently. An \n> annoying thing I faced is that a lot of Visual Studio's temp files kept \n> appearing in git changed files. 
Therefore, I am submitting this very \n> trivial patch to ignore these temp files.\n\nOur general recommendation is that you put such things into your \npersonal global git ignore file.\n\nFor example, I have in ~/.gitconfig\n\n[core]\n         excludesFile = ~/.gitexcludes\n\nand then in ~/.gitexcludes I have various ignores that are specific to \nmy local tooling.\n\nThat way we don't have to maintain ignore lists for all the tools in the \nworld in the PostgreSQL source tree.", "msg_date": "Fri, 17 May 2024 11:48:17 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "\nOn 2024-05-17 Fr 02:34, Peter Eisentraut wrote:\n> On 17.05.24 08:09, Yasir wrote:\n>> I have been playing with PG on the Windows platform recently. An \n>> annoying thing I faced is that a lot of Visual Studio's temp files \n>> kept appearing in git changed files. Therefore, I am submitting this \n>> very trivial patch to ignore these temp files.\n>\n> Our general recommendation is that you put such things into your \n> personal global git ignore file.\n>\n> For example, I have in ~/.gitconfig\n>\n> [core]\n>         excludesFile = ~/.gitexcludes\n>\n> and then in ~/.gitexcludes I have various ignores that are specific to \n> my local tooling.\n>\n> That way we don't have to maintain ignore lists for all the tools in \n> the world in the PostgreSQL source tree.\n>\n>\n>\n\nor if you want something repo-specific, you can add entries to \n.git/info/exclude\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 18 May 2024 10:24:04 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "pá 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]> napsal:\n>\n> Hi Hackers,\n>\n> I have been playing with PG on the Windows platform recently. An annoying thing I faced is that a lot of Visual Studio's temp files kept appearing in git changed files. Therefore, I am submitting this very trivial patch to ignore these temp files.\n\nsee https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\nfor various strategies\n\nAnyway if those are not files specific to your setup (like editor\nones), but files which every PG hacker on Windows will generate as\nwell (which is this case IMHO), it will make sense to add it into\nproject's gitignore.\n\n> Looking forward to the PG guru's guidance!\n>\n> Regards...\n>\n> Yasir Hussain\n> Principal Software Engineer\n> Bitnine Global Inc.\n>\n\n\n", "msg_date": "Sat, 18 May 2024 16:27:41 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On Sat, May 18, 2024 at 7:27 PM Josef Šimánek <[email protected]>\nwrote:\n\n> pá 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n> napsal:\n> >\n> > Hi Hackers,\n> >\n> > I have been playing with PG on the Windows platform recently. An\n> annoying thing I faced is that a lot of Visual Studio's temp files kept\n> appearing in git changed files. 
Therefore, I am submitting this very\n> trivial patch to ignore these temp files.\n>\n> see\n> https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n> for various strategies\n>\n> Anyway if those are not files specific to your setup (like editor\n> ones), but files which every PG hacker on Windows will generate as\n> well (which is this case IMHO), it will make sense to add it into\n> project's gitignore.\n>\n\n.vs directory and temp files within it are created once you open any of the\n.sln, .vcproj or .vcxproj files (created with build command when PWD is\npostgres/src/tools/msvc) in visual studio. It's a common practice that\ndevelopers use visual studio on codebase as it's mostly the default c/c++\nfiles/projects editor.\nSo, it would be a common case for most of the developers with Windows\nplatform to add it in project's .gitignore.\n\n\n> > Looking forward to the PG guru's guidance!\n> >\n> > Regards...\n> >\n> > Yasir Hussain\n> > Principal Software Engineer\n> > Bitnine Global Inc.\n> >\n>\n\nOn Sat, May 18, 2024 at 7:27 PM Josef Šimánek <[email protected]> wrote:pá 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]> napsal:\n>\n> Hi Hackers,\n>\n> I have been playing with PG on the Windows platform recently. An annoying thing I faced is that a lot of Visual Studio's temp files kept appearing in git changed files. Therefore, I am submitting this very trivial patch to ignore these temp files.\n\nsee https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\nfor various strategies\n\nAnyway if those are not files specific to your setup (like editor\nones), but files which every PG hacker on Windows will generate as\nwell (which is this case IMHO), it will make sense to add it into\nproject's gitignore. .vs directory and temp files within it are created once you open any of the .sln, .vcproj or .vcxproj files (created with build command when PWD is postgres/src/tools/msvc) in visual studio. It's a common practice that developers use visual studio on codebase as it's mostly the default c/c++ files/projects editor. So, it would be a common case for most of the developers with Windows platform to add it in project's .gitignore. \n\n> Looking forward to the PG guru's guidance!\n>\n> Regards...\n>\n> Yasir Hussain\n> Principal Software Engineer\n> Bitnine Global Inc.\n>", "msg_date": "Sun, 19 May 2024 00:26:11 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On Sat, May 18, 2024 at 7:27 PM Josef Šimánek <[email protected]>\nwrote:\n\n> pá 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n> napsal:\n> >\n> > Hi Hackers,\n> >\n> > I have been playing with PG on the Windows platform recently. An\n> annoying thing I faced is that a lot of Visual Studio's temp files kept\n> appearing in git changed files. Therefore, I am submitting this very\n> trivial patch to ignore these temp files.\n>\n> see\n> https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n> for various strategies\n>\n>\nWe can add it to \"~/.config/git/ignore\" as it will ignore globally on\nwindows which we don't want. Also we don't have \".git/info/exclude\" in PG\nproject's so the best place left is projects's .gitignore. 
That's what was\npatched.\n\n\n> Anyway if those are not files specific to your setup (like editor\n> ones), but files which every PG hacker on Windows will generate as\n> well (which is this case IMHO), it will make sense to add it into\n> project's gitignore.\n>\n> > Looking forward to the PG guru's guidance!\n> >\n> > Regards...\n> >\n> > Yasir Hussain\n> > Principal Software Engineer\n> > Bitnine Global Inc.\n> >\n>\n\nOn Sat, May 18, 2024 at 7:27 PM Josef Šimánek <[email protected]> wrote:pá 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]> napsal:\n>\n> Hi Hackers,\n>\n> I have been playing with PG on the Windows platform recently. An annoying thing I faced is that a lot of Visual Studio's temp files kept appearing in git changed files. Therefore, I am submitting this very trivial patch to ignore these temp files.\n\nsee https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\nfor various strategies\nWe can add it to \"~/.config/git/ignore\" as it will ignore globally on windows which we don't want. Also we don't have \".git/info/exclude\" in PG project's so the best place left is projects's .gitignore. That's what was patched.  \nAnyway if those are not files specific to your setup (like editor\nones), but files which every PG hacker on Windows will generate as\nwell (which is this case IMHO), it will make sense to add it into\nproject's gitignore.\n\n> Looking forward to the PG guru's guidance!\n>\n> Regards...\n>\n> Yasir Hussain\n> Principal Software Engineer\n> Bitnine Global Inc.\n>", "msg_date": "Sun, 19 May 2024 00:43:42 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "Yasir <[email protected]> writes:\n> We can add it to \"~/.config/git/ignore\" as it will ignore globally on\n> windows which we don't want. Also we don't have \".git/info/exclude\" in PG\n> project's so the best place left is projects's .gitignore. That's what was\n> patched.\n\nAs Peter said, we're not going to do that. The intention with\nthe project's .gitignore files is to ignore files that are\nintentionally built by our \"make\" targets (and, hopefully, will be\nremoved by \"make maintainer-clean\"). Anything else that you want\ngit to ignore should be in a personal ignore list; especially\nfiles that are platform-specific. The fact that it's reasonable\nto ignore \".vs\" files when working with your toolset doesn't mean\nthat it's reasonable to ignore them when working on some other\nplatform.\n\nIf we used some other policy, we'd have tons of debates about\nwhich files were reasonable to exclude. For myself, for example,\nI exclude \"*~\" (Emacs backup files) and \"*.orig\" (patch(1)\nbackup files) but those choices are very much dependent on the\nset of tools I choose to use. Other developers have other\npersonal exclusion lists. If we tried to make the project's\nfiles be the union of all those lists, we'd be at serious risk\nof ignoring stuff we absolutely shouldn't ignore in some contexts.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 16:36:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "so 18. 5. 
2024 v 22:36 odesílatel Tom Lane <[email protected]> napsal:\n>\n> Yasir <[email protected]> writes:\n> > We can add it to \"~/.config/git/ignore\" as it will ignore globally on\n> > windows which we don't want. Also we don't have \".git/info/exclude\" in PG\n> > project's so the best place left is projects's .gitignore. That's what was\n> > patched.\n>\n> As Peter said, we're not going to do that. The intention with\n> the project's .gitignore files is to ignore files that are\n> intentionally built by our \"make\" targets (and, hopefully, will be\n> removed by \"make maintainer-clean\"). Anything else that you want\n> git to ignore should be in a personal ignore list; especially\n> files that are platform-specific. The fact that it's reasonable\n> to ignore \".vs\" files when working with your toolset doesn't mean\n> that it's reasonable to ignore them when working on some other\n> platform.\n>\n> If we used some other policy, we'd have tons of debates about\n> which files were reasonable to exclude. For myself, for example,\n> I exclude \"*~\" (Emacs backup files) and \"*.orig\" (patch(1)\n> backup files) but those choices are very much dependent on the\n> set of tools I choose to use. Other developers have other\n> personal exclusion lists. If we tried to make the project's\n> files be the union of all those lists, we'd be at serious risk\n> of ignoring stuff we absolutely shouldn't ignore in some contexts.\n\nBut this is different. If I understand it well, just by following\nhttps://www.postgresql.org/docs/16/install-windows-full.html you'll\nget those files no matter what is your specific environment (or\nspecific set of tools).\n\n> regards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 22:42:37 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On 2024-05-18 Sa 15:43, Yasir wrote:\n>\n>\n> On Sat, May 18, 2024 at 7:27 PM Josef Šimánek \n> <[email protected]> wrote:\n>\n> pá 17. 5. 2024 v 8:09 odesílatel Yasir\n> <[email protected]> napsal:\n> >\n> > Hi Hackers,\n> >\n> > I have been playing with PG on the Windows platform recently. An\n> annoying thing I faced is that a lot of Visual Studio's temp files\n> kept appearing in git changed files. Therefore, I am submitting\n> this very trivial patch to ignore these temp files.\n>\n> see\n> https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n> for various strategies\n>\n>\n> We can add it to \"~/.config/git/ignore\" as it will ignore globally on \n> windows which we don't want. Also we don't have \".git/info/exclude\" in \n> PG project's so the best place left is projects's .gitignore. That's \n> what was patched.\n\n\n\neh? git creates .git/info/exclude in every git repository AFAIK. And \nit's referred to here: <https://git-scm.com/docs/gitignore>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-05-18 Sa 15:43, Yasir wrote:\n\n\n\n\n\n\n\n\nOn Sat, May 18, 2024 at\n 7:27 PM Josef Šimánek <[email protected]>\n wrote:\n\npá\n 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n napsal:\n >\n > Hi Hackers,\n >\n > I have been playing with PG on the Windows platform\n recently. 
An annoying thing I faced is that a lot of Visual\n Studio's temp files kept appearing in git changed files.\n Therefore, I am submitting this very trivial patch to ignore\n these temp files.\n\n see https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n for various strategies\n\n\n\n\nWe can add it to \"~/.config/git/ignore\" as it will ignore\n globally on windows which we don't want. Also we don't have\n \".git/info/exclude\" in PG project's so the best place left\n is projects's .gitignore. That's what was patched. \n \n\n\n\n\n\n\n\neh? git creates .git/info/exclude in every git repository AFAIK.\n And it's referred to here:\n <https://git-scm.com/docs/gitignore>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 18 May 2024 16:45:27 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On Sun, May 19, 2024 at 1:45 AM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-05-18 Sa 15:43, Yasir wrote:\n>\n>\n>\n> On Sat, May 18, 2024 at 7:27 PM Josef Šimánek <[email protected]>\n> wrote:\n>\n>> pá 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n>> napsal:\n>> >\n>> > Hi Hackers,\n>> >\n>> > I have been playing with PG on the Windows platform recently. An\n>> annoying thing I faced is that a lot of Visual Studio's temp files kept\n>> appearing in git changed files. Therefore, I am submitting this very\n>> trivial patch to ignore these temp files.\n>>\n>> see\n>> https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n>> for various strategies\n>>\n>>\n> We can add it to \"~/.config/git/ignore\" as it will ignore globally on\n> windows which we don't want. Also we don't have \".git/info/exclude\" in PG\n> project's so the best place left is projects's .gitignore. That's what was\n> patched.\n>\n>\n>\n>\n> eh? git creates .git/info/exclude in every git repository AFAIK. And it's\n> referred to here: <https://git-scm.com/docs/gitignore>\n> <https://git-scm.com/docs/gitignore>\n>\n>\n> Yes, git creates .git/info/exclude but point is, it is not in PG\nmaintained codebase repo. So, no point adding to it.\n\nBTW, Tom and Peter said it's not going to be added anyway!\n\n\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nOn Sun, May 19, 2024 at 1:45 AM Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-05-18 Sa 15:43, Yasir wrote:\n\n\n\n\n\n\n\nOn Sat, May 18, 2024 at\n 7:27 PM Josef Šimánek <[email protected]>\n wrote:\n\npá\n 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n napsal:\n >\n > Hi Hackers,\n >\n > I have been playing with PG on the Windows platform\n recently. An annoying thing I faced is that a lot of Visual\n Studio's temp files kept appearing in git changed files.\n Therefore, I am submitting this very trivial patch to ignore\n these temp files.\n\n see https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n for various strategies\n\n\n\n\nWe can add it to \"~/.config/git/ignore\" as it will ignore\n globally on windows which we don't want. Also we don't have\n \".git/info/exclude\" in PG project's so the best place left\n is projects's .gitignore. 
That's what was patched. \n \n\n\n\n\n\n\n\neh? git creates .git/info/exclude in every git repository AFAIK.\n And it's referred to here:\n <https://git-scm.com/docs/gitignore>\nYes, git creates .git/info/exclude but point is, it is not in PG maintained codebase repo. So, no point adding to it. BTW, Tom and Peter said it's not going to be added anyway! \n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 19 May 2024 01:54:10 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> But this is different. If I understand it well, just by following\n> https://www.postgresql.org/docs/16/install-windows-full.html you'll\n> get those files no matter what is your specific environment (or\n> specific set of tools).\n\nHm? Visual Studio seems like quite a specific tool from here.\n\nI did some googling around the question of project .gitignore\nfiles ignoring .vs/, and was amused to come across this:\n\nhttps://github.com/github/gitignore/blob/main/VisualStudio.gitignore\n\nwhich seems like a mighty fine example of where we *don't*\nwant to go.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 17:16:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "so 18. 5. 2024 v 23:16 odesílatel Tom Lane <[email protected]> napsal:\n>\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> > But this is different. If I understand it well, just by following\n> > https://www.postgresql.org/docs/16/install-windows-full.html you'll\n> > get those files no matter what is your specific environment (or\n> > specific set of tools).\n>\n> Hm? Visual Studio seems like quite a specific tool from here.\n\nI initially thought the .vs folder is created just by compiling\nPostgreSQL using build.bat (like without opening Visual Studio at\nall). But I'm not 100% sure, I'll take a look and report back.\n\n> I did some googling around the question of project .gitignore\n> files ignoring .vs/, and was amused to come across this:\n>\n> https://github.com/github/gitignore/blob/main/VisualStudio.gitignore\n>\n> which seems like a mighty fine example of where we *don't*\n> want to go.\n\nThat's clearly a nightmare to maintain. But in this case it should be\nall hidden within one .vs folder.\n\n> regards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 23:23:09 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On Sun, May 19, 2024 at 2:16 AM Tom Lane <[email protected]> wrote:\n\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> > But this is different. If I understand it well, just by following\n> > https://www.postgresql.org/docs/16/install-windows-full.html you'll\n> > get those files no matter what is your specific environment (or\n> > specific set of tools).\n>\n> Hm? Visual Studio seems like quite a specific tool from here.\n>\n> I did some googling around the question of project .gitignore\n> files ignoring .vs/, and was amused to come across this:\n>\n> https://github.com/github/gitignore/blob/main/VisualStudio.gitignore\n>\n>\nThis is funny Tom. 
Adding an entry for each type of temp file in .gitignore\nis a childish thing, obviously.\n\nwhich seems like a mighty fine example of where we *don't*\n> want to go.\n>\n\nI agree we don't want to go in this direction.\n\n\n>\n> regards, tom lane\n>\n\nOn Sun, May 19, 2024 at 2:16 AM Tom Lane <[email protected]> wrote:=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> But this is different. If I understand it well, just by following\n> https://www.postgresql.org/docs/16/install-windows-full.html you'll\n> get those files no matter what is your specific environment (or\n> specific set of tools).\n\nHm?  Visual Studio seems like quite a specific tool from here.\n\nI did some googling around the question of project .gitignore\nfiles ignoring .vs/, and was amused to come across this:\n\nhttps://github.com/github/gitignore/blob/main/VisualStudio.gitignore\nThis is funny Tom. Adding an entry for each type of temp file in .gitignore is a childish thing, obviously.  \nwhich seems like a mighty fine example of where we *don't*\nwant to go.I agree we don't want to go in this direction. \n\n                        regards, tom lane", "msg_date": "Sun, 19 May 2024 02:27:18 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On Sun, May 19, 2024 at 2:23 AM Josef Šimánek <[email protected]>\nwrote:\n\n> so 18. 5. 2024 v 23:16 odesílatel Tom Lane <[email protected]> napsal:\n> >\n> > =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> > > But this is different. If I understand it well, just by following\n> > > https://www.postgresql.org/docs/16/install-windows-full.html you'll\n> > > get those files no matter what is your specific environment (or\n> > > specific set of tools).\n> >\n> > Hm? Visual Studio seems like quite a specific tool from here.\n>\n> I initially thought the .vs folder is created just by compiling\n> PostgreSQL using build.bat (like without opening Visual Studio at\n> all). But I'm not 100% sure, I'll take a look and report back.\n>\n\n.vs folder is not created just by compiling PG. It is created if you open\nany of .sln, .vcproj or .vcxproj files.\nI have verified it.\n\n\n>\n> > I did some googling around the question of project .gitignore\n> > files ignoring .vs/, and was amused to come across this:\n> >\n> > https://github.com/github/gitignore/blob/main/VisualStudio.gitignore\n> >\n> > which seems like a mighty fine example of where we *don't*\n> > want to go.\n>\n> That's clearly a nightmare to maintain. But in this case it should be\n> all hidden within one .vs folder.\n>\n> > regards, tom lane\n>\n\nOn Sun, May 19, 2024 at 2:23 AM Josef Šimánek <[email protected]> wrote:so 18. 5. 2024 v 23:16 odesílatel Tom Lane <[email protected]> napsal:\n>\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n> > But this is different. If I understand it well, just by following\n> > https://www.postgresql.org/docs/16/install-windows-full.html you'll\n> > get those files no matter what is your specific environment (or\n> > specific set of tools).\n>\n> Hm?  Visual Studio seems like quite a specific tool from here.\n\nI initially thought the .vs folder is created just by compiling\nPostgreSQL using build.bat (like without opening Visual Studio at\nall). But I'm not 100% sure, I'll take a look and report back..vs folder is not created just by compiling PG. It is created if you open any of .sln, .vcproj or .vcxproj files. 
I have verified it.  \n\n> I did some googling around the question of project .gitignore\n> files ignoring .vs/, and was amused to come across this:\n>\n> https://github.com/github/gitignore/blob/main/VisualStudio.gitignore\n>\n> which seems like a mighty fine example of where we *don't*\n> want to go.\n\nThat's clearly a nightmare to maintain. But in this case it should be\nall hidden within one .vs folder.\n\n>                         regards, tom lane", "msg_date": "Sun, 19 May 2024 02:29:28 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "so 18. 5. 2024 v 23:29 odesílatel Yasir <[email protected]> napsal:\n>\n>\n>\n> On Sun, May 19, 2024 at 2:23 AM Josef Šimánek <[email protected]> wrote:\n>>\n>> so 18. 5. 2024 v 23:16 odesílatel Tom Lane <[email protected]> napsal:\n>> >\n>> > =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]> writes:\n>> > > But this is different. If I understand it well, just by following\n>> > > https://www.postgresql.org/docs/16/install-windows-full.html you'll\n>> > > get those files no matter what is your specific environment (or\n>> > > specific set of tools).\n>> >\n>> > Hm? Visual Studio seems like quite a specific tool from here.\n>>\n>> I initially thought the .vs folder is created just by compiling\n>> PostgreSQL using build.bat (like without opening Visual Studio at\n>> all). But I'm not 100% sure, I'll take a look and report back.\n>\n>\n> .vs folder is not created just by compiling PG. It is created if you open any of .sln, .vcproj or .vcxproj files.\n> I have verified it.\n\nYes, I can confirm. Just running build.bat doesn't create .vs. I'm\nsorry for confusion and I do agree ignoring \".vs\" directory is a local\nenvironment thing and doesn't belong to Postgres .gitignore.\n\n>>\n>>\n>> > I did some googling around the question of project .gitignore\n>> > files ignoring .vs/, and was amused to come across this:\n>> >\n>> > https://github.com/github/gitignore/blob/main/VisualStudio.gitignore\n>> >\n>> > which seems like a mighty fine example of where we *don't*\n>> > want to go.\n>>\n>> That's clearly a nightmare to maintain. But in this case it should be\n>> all hidden within one .vs folder.\n>>\n>> > regards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 23:31:07 +0200", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On 2024-05-18 Sa 16:54, Yasir wrote:\n>\n>\n> On Sun, May 19, 2024 at 1:45 AM Andrew Dunstan <[email protected]> \n> wrote:\n>\n>\n> On 2024-05-18 Sa 15:43, Yasir wrote:\n>>\n>>\n>> On Sat, May 18, 2024 at 7:27 PM Josef Šimánek\n>> <[email protected]> wrote:\n>>\n>> pá 17. 5. 2024 v 8:09 odesílatel Yasir\n>> <[email protected]> napsal:\n>> >\n>> > Hi Hackers,\n>> >\n>> > I have been playing with PG on the Windows platform\n>> recently. An annoying thing I faced is that a lot of Visual\n>> Studio's temp files kept appearing in git changed files.\n>> Therefore, I am submitting this very trivial patch to ignore\n>> these temp files.\n>>\n>> see\n>> https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n>> for various strategies\n>>\n>>\n>> We can add it to \"~/.config/git/ignore\" as it will ignore\n>> globally on windows which we don't want. 
Also we don't have\n>> \".git/info/exclude\" in PG project's so the best place left is\n>> projects's .gitignore. That's what was patched.\n>\n>\n>\n> eh? git creates .git/info/exclude in every git repository AFAIK.\n> And it's referred to here: <https://git-scm.com/docs/gitignore>\n> <https://git-scm.com/docs/gitignore>\n>\n>\n> Yes, git creates .git/info/exclude but point is, it is not in PG \n> maintained codebase repo. So, no point adding to it.\n>\n> BTW, Tom and Peter said it's not going to be added anyway!\n>\n>\n\nYou've completely missed my point, which is that *you* should be adding \nit to that file, as an alternative to using a (locally) global gitignore \nfile.\n\nI agree with Tom and Peter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-05-18 Sa 16:54, Yasir wrote:\n\n\n\n\n\n\n\n\nOn Sun, May 19, 2024 at\n 1:45 AM Andrew Dunstan <[email protected]>\n wrote:\n\n\n\n\n\nOn 2024-05-18 Sa 15:43, Yasir wrote:\n\n\n\n\n\n\n\nOn Sat, May 18,\n 2024 at 7:27 PM Josef Šimánek <[email protected]>\n wrote:\n\npá\n 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n napsal:\n >\n > Hi Hackers,\n >\n > I have been playing with PG on the Windows\n platform recently. An annoying thing I faced is\n that a lot of Visual Studio's temp files kept\n appearing in git changed files. Therefore, I am\n submitting this very trivial patch to ignore these\n temp files.\n\n see https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n for various strategies\n\n\n\n\nWe can add it to \"~/.config/git/ignore\" as it\n will ignore globally on windows which we don't\n want. Also we don't have \".git/info/exclude\" in PG\n project's so the best place left is projects's\n .gitignore. That's what was patched. \n \n\n\n\n\n\n\n\neh? git creates .git/info/exclude in every git\n repository AFAIK. And it's referred to here: <https://git-scm.com/docs/gitignore>\n\n\n\n\nYes, git creates .git/info/exclude but point is, it is\n not in PG maintained codebase repo. So, no point adding to\n it. \n\n BTW, Tom and Peter said it's not going to be added anyway!\n \n\n\n \n\n\n\n\n\n\n\n\nYou've completely missed my point, which is that *you* should be\n adding it to that file, as an alternative to using a (locally)\n global gitignore file.\n\nI agree with Tom and Peter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 18 May 2024 17:35:04 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" }, { "msg_contents": "On Sun, May 19, 2024 at 2:35 AM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-05-18 Sa 16:54, Yasir wrote:\n>\n>\n>\n> On Sun, May 19, 2024 at 1:45 AM Andrew Dunstan <[email protected]>\n> wrote:\n>\n>>\n>> On 2024-05-18 Sa 15:43, Yasir wrote:\n>>\n>>\n>>\n>> On Sat, May 18, 2024 at 7:27 PM Josef Šimánek <[email protected]>\n>> wrote:\n>>\n>>> pá 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n>>> napsal:\n>>> >\n>>> > Hi Hackers,\n>>> >\n>>> > I have been playing with PG on the Windows platform recently. An\n>>> annoying thing I faced is that a lot of Visual Studio's temp files kept\n>>> appearing in git changed files. 
Therefore, I am submitting this very\n>>> trivial patch to ignore these temp files.\n>>>\n>>> see\n>>> https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n>>> for various strategies\n>>>\n>>>\n>> We can add it to \"~/.config/git/ignore\" as it will ignore globally on\n>> windows which we don't want. Also we don't have \".git/info/exclude\" in PG\n>> project's so the best place left is projects's .gitignore. That's what was\n>> patched.\n>>\n>>\n>>\n>>\n>> eh? git creates .git/info/exclude in every git repository AFAIK. And it's\n>> referred to here: <https://git-scm.com/docs/gitignore>\n>> <https://git-scm.com/docs/gitignore>\n>>\n>>\n>> Yes, git creates .git/info/exclude but point is, it is not in PG\n> maintained codebase repo. So, no point adding to it.\n>\n> BTW, Tom and Peter said it's not going to be added anyway!\n>\n>\n>>\n>>\n> You've completely missed my point, which is that *you* should be adding it\n> to that file, as an alternative to using a (locally) global gitignore file.\n>\nMy bad Andrew.\n\n> I agree with Tom and Peter.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nOn Sun, May 19, 2024 at 2:35 AM Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-05-18 Sa 16:54, Yasir wrote:\n\n\n\n\n\n\n\nOn Sun, May 19, 2024 at\n 1:45 AM Andrew Dunstan <[email protected]>\n wrote:\n\n\n\n\n\nOn 2024-05-18 Sa 15:43, Yasir wrote:\n\n\n\n\n\n\n\nOn Sat, May 18,\n 2024 at 7:27 PM Josef Šimánek <[email protected]>\n wrote:\n\npá\n 17. 5. 2024 v 8:09 odesílatel Yasir <[email protected]>\n napsal:\n >\n > Hi Hackers,\n >\n > I have been playing with PG on the Windows\n platform recently. An annoying thing I faced is\n that a lot of Visual Studio's temp files kept\n appearing in git changed files. Therefore, I am\n submitting this very trivial patch to ignore these\n temp files.\n\n see https://docs.github.com/en/get-started/getting-started-with-git/ignoring-files#configuring-ignored-files-for-all-repositories-on-your-computer\n for various strategies\n\n\n\n\nWe can add it to \"~/.config/git/ignore\" as it\n will ignore globally on windows which we don't\n want. Also we don't have \".git/info/exclude\" in PG\n project's so the best place left is projects's\n .gitignore. That's what was patched. \n \n\n\n\n\n\n\n\neh? git creates .git/info/exclude in every git\n repository AFAIK. And it's referred to here: <https://git-scm.com/docs/gitignore>\n\n\n\n\nYes, git creates .git/info/exclude but point is, it is\n not in PG maintained codebase repo. So, no point adding to\n it. \n\n BTW, Tom and Peter said it's not going to be added anyway!\n \n\n\n \n\n\n\n\n\n\n\n\nYou've completely missed my point, which is that *you* should be\n adding it to that file, as an alternative to using a (locally)\n global gitignore file.My bad Andrew.  \n\nI agree with Tom and Peter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 19 May 2024 02:41:39 +0500", "msg_from": "Yasir <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ignore Visual Studio's Temp Files While Working with PG on\n Windows" } ]
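(Illustrative aside, not part of the archived exchange above.) The approach the thread converges on — keep ".vs/" out of the project's .gitignore and ignore it per developer instead — boils down to two git mechanisms. A minimal sketch, assuming a POSIX-style shell (e.g. Git Bash on Windows) and stock git defaults; the global path shown is simply git's fallback location when core.excludesFile is unset:

    # Option 1: per-clone exclude list; lives inside .git/, so it is never committed
    echo ".vs/" >> .git/info/exclude

    # Option 2: user-wide exclude list, applied to every repository on the machine
    git config --global core.excludesFile ~/.config/git/ignore
    echo ".vs/" >> ~/.config/git/ignore

Either way the checkout stays clean without adding editor-specific entries to the tree, which is the policy Tom, Peter and Andrew describe above.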
[ { "msg_contents": "hi.\n\nhttps://wiki.postgresql.org/wiki/Todo\nDates and Times[edit]\nAllow infinite intervals just like infinite timestamps\nhttps://www.postgresql.org/message-id/[email protected]\n\nthis done at\nhttps://git.postgresql.org/cgit/postgresql.git/commit/?id=519fc1bd9e9d7b408903e44f55f83f6db30742b7\n\nShould we remove this item?\n\n\n", "msg_date": "Fri, 17 May 2024 21:12:25 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "remove Todo item: Allow infinite intervals just like infinite\n timestamps" }, { "msg_contents": "On Fri, May 17, 2024 at 9:12 AM jian he <[email protected]> wrote:\n> https://wiki.postgresql.org/wiki/Todo\n> Dates and Times[edit]\n> Allow infinite intervals just like infinite timestamps\n> https://www.postgresql.org/message-id/[email protected]\n>\n> this done at\n> https://git.postgresql.org/cgit/postgresql.git/commit/?id=519fc1bd9e9d7b408903e44f55f83f6db30742b7\n>\n> Should we remove this item?\n\nIf the item is done, you should edit the wiki page and mark it that\nway. Note this note at the top of the page:\n\n[D] Completed item - marks changes that are done, and will appear in\nthe PostgreSQL 17 release.\n\nNote that the Todo list is not a very good Todo list and most people\ndon't use it to find projects (and haven't for a long time). So it may\nnot get updated, or consulted, very often.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 13:08:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: remove Todo item: Allow infinite intervals just like infinite\n timestamps" } ]
[ { "msg_contents": "Hi\r\n\r\nafter migration on PostgreSQL 16 I seen 3x times (about every week) broken\r\ntables on replica nodes. The query fails with error\r\n\r\nERROR: could not access status of transaction 1442871302\r\nDETAIL: Could not open file \"pg_xact/0560\": No such file or directory\r\n\r\nverify_heapam reports\r\n\r\n^[[Aprd=# select * from verify_heapam('account_login_history') where blkno\r\n= 179036;\r\n blkno | offnum | attnum | msg\r\n\r\n--------+--------+--------+-------------------------------------------------------------------\r\n 179036 | 30 | | xmin 1393743382 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 31 | | xmin 1393748413 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 32 | | xmin 1393751312 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 33 | | xmin 1393763601 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 34 | | xmin 1393795606 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 35 | | xmin 1393817722 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 36 | | xmin 1393821359 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 37 | | xmin 1393821373 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 38 | | xmin 1393821523 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 39 | | xmin 1410429961 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 40 | | xmin 1410433593 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 41 | | xmin 1410501438 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 42 | | xmin 1410511950 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 43 | | xmin 1410516400 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 44 | | xmin 1410527685 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 45 | | xmin 1421269000 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 46 | | xmin 1421304247 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 47 | | xmin 1421333991 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 48 | | xmin 1421365062 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 49 | | xmin 1421427152 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 50 | | xmin 1421442074 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 51 | | xmin 1421462607 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 52 | | xmin 1421464665 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 53 | | xmin 1421472360 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 54 | | xmin 1421479152 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 55 | | xmin 1424811032 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 56 | | xmin 1432758173 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 57 | | xmin 1437607659 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 58 | | xmin 1437618864 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 59 | | xmin 1437621879 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 60 | | xmin 1440619832 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 61 | | xmin 1440619912 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 62 | | xmin 1442052720 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 63 | | xmin 1442052739 precedes oldest 
valid\r\ntransaction ID 3:1687012112\r\n 179036 | 64 | | xmin 1442052794 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 65 | | xmin 1442052935 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 66 | | xmin 1442052962 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 67 | | xmin 1442052967 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n 179036 | 68 | | xmin 1442871302 precedes oldest valid\r\ntransaction ID 3:1687012112\r\n(39 rows)\r\n\r\nbut only last tuple with (179036,68) is really broken. I can read others.\r\n\r\nmaster\r\n\r\n(2024-05-17 14:36:57) prd=# SELECT * FROM\r\npage_header(get_raw_page('account_login_history', 179036));\r\n lsn │ checksum │ flags │ lower │ upper │ special │ pagesize │\r\nversion │ prune_xid\r\n───────────────┼──────────┼───────┼───────┼───────┼─────────┼──────────┼─────────┼───────────\r\n A576/810F4CE0 │ 0 │ 4 │ 296 │ 296 │ 8192 │ 8192 │\r\n 4 │ 0\r\n(1 row)\r\n\r\n\r\nreplica\r\nprd_aukro=# SELECT * FROM page_header(get_raw_page('account_login_history',\r\n179036));\r\n lsn | checksum | flags | lower | upper | special | pagesize |\r\nversion | prune_xid\r\n---------------+----------+-------+-------+-------+---------+----------+---------+-----------\r\n A56C/63979DA0 | 0 | 0 | 296 | 296 | 8192 | 8192 |\r\n 4 | 0\r\n(1 row)\r\n\r\nmaster\r\n\r\n2024-05-17 14:38:48) prd_aukro=# SELECT * FROM\r\npage_checksum(get_raw_page('account_login_history', 179036), 179036);\r\n page_checksum\r\n───────────────\r\n 17148\r\n(1 row)\r\n\r\nreplica\r\n\r\nprd_aukro=# SELECT * FROM\r\npage_checksum(get_raw_page('account_login_history', 179036), 179036);\r\n page_checksum\r\n---------------\r\n -17522\r\n(1 row)\r\n\r\nThe server was under load - but the related tuples was not changed\r\n\r\nmaster\r\n\r\n(2024-05-17 14:41:35) prd=# SELECT * FROM\r\nheap_page_items(get_raw_page('account_login_history', 179036)) where lp =\r\n68;\r\n─[ RECORD 1\r\n]───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\nlp │ 68\r\nlp_off │ 296\r\nlp_flags │ 1\r\nlp_len │ 92\r\nt_xmin │ 1442871302\r\nt_xmax │ 0\r\nt_field3 │ 0\r\nt_ctid │ (179036,68)\r\nt_infomask2 │ 9\r\nt_infomask │ 2819\r\nt_hoff │ 32\r\nt_bits │ 1111110100000000\r\nt_oid │ ∅\r\n\r\n\r\nreplica\r\n\r\nprd=# SELECT * FROM heap_page_items(get_raw_page('account_login_history',\r\n179036)) where lp = 68;\r\n-[ RECORD 1\r\n]---------------------------------------------------------------------------------------------------------------------------\r\nlp | 68\r\nlp_off | 296\r\nlp_flags | 1\r\nlp_len | 92\r\nt_xmin | 1442871302\r\nt_xmax | 0\r\nt_field3 | 0\r\nt_ctid | (179036,68)\r\nt_infomask2 | 9\r\nt_infomask | 2051\r\nt_hoff | 32\r\nt_bits | 1111110100000000\r\nt_oid |\r\n\r\nmaster\r\n\r\n(2024-05-17 14:45:30) prd=# SELECT t_ctid, raw_flags, combined_flags\r\n FROM heap_page_items(get_raw_page('account_login_history',\r\n179036)),\r\n LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)\r\n WHERE t_infomask IS NOT NULL OR t_infomask2 IS NOT NULL;\r\n─[ RECORD 1\r\n]──┬────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,1)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 2\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,2)\r\nraw_flags 
│\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 3\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,3)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 4\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,4)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 5\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,5)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 6\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,6)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 7\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,7)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 8\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,8)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 9\r\n]──┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,9)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 10\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,10)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 11\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,11)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 12\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,12)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 13\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,13)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 14\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,14)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 
15\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,15)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 16\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,16)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 17\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,17)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 18\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,18)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 19\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,19)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 20\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,20)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 21\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,21)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 22\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,22)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 23\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,23)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 24\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,24)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 25\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,25)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 26\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,26)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 27\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,27)\r\nraw_flags 
│\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 28\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,28)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 29\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,29)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 30\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,30)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 31\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,31)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 32\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,32)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 33\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,33)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 34\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,34)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 35\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,35)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 36\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,36)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 37\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,37)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 38\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,38)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 39\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,39)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 
40\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,40)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 41\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,41)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 42\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,42)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 43\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,43)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 44\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,44)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 45\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,45)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 46\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,46)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 47\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,47)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 48\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,48)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 49\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,49)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 50\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,50)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 51\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,51)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 52\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,52)\r\nraw_flags 
│\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 53\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,53)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 54\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,54)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 55\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,55)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 56\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,56)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 57\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,57)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 58\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,58)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 59\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,59)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 60\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,60)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 61\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,61)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 62\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,62)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 63\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,63)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 64\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,64)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 
65\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,65)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 66\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,66)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 67\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,67)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n─[ RECORD 68\r\n]─┼────────────────────────────────────────────────────────────────────────────────────────\r\nt_ctid │ (179036,68)\r\nraw_flags │\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags │ {HEAP_XMIN_FROZEN}\r\n\r\nreplica\r\n\r\nprd=# SELECT t_ctid, raw_flags, combined_flags\r\nprd-# FROM heap_page_items(get_raw_page('account_login_history',\r\n179036)),\r\nprd-# LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)\r\nprd-# WHERE t_infomask IS NOT NULL OR t_infomask2 IS NOT NULL;\r\n-[ RECORD 1\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,1)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 2\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,2)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 3\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,3)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 4\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,4)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 5\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,5)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 6\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,6)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 7\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,7)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 8\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,8)\r\nraw_flags 
|\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 9\r\n]--+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,9)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 10\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,10)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 11\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,11)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 12\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,12)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 13\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,13)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 14\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,14)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 15\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,15)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 16\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,16)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 17\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,17)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 18\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,18)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 19\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,19)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 20\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,20)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 
21\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,21)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 22\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,22)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 23\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,23)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 24\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,24)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 25\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,25)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 26\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,26)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 27\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,27)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 28\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,28)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 29\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,29)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMIN_INVALID,HEAP_XMAX_INVALID}\r\ncombined_flags | {HEAP_XMIN_FROZEN}\r\n-[ RECORD 30\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,30)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 31\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,31)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 32\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,32)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 33\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,33)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 
34\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,34)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 35\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,35)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 36\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,36)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 37\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,37)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 38\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,38)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 39\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,39)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 40\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,40)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 41\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,41)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 42\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,42)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 43\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,43)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 44\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,44)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 45\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,45)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 46\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,46)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 47\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,47)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 48\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | 
(179036,48)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 49\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,49)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 50\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,50)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 51\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,51)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 52\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,52)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 53\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,53)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 54\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,54)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 55\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,55)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 56\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,56)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 57\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,57)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 58\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,58)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 59\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,59)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 60\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,60)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 61\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,61)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 62\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,62)\r\nraw_flags 
|\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 63\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,63)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 64\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,64)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 65\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,65)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 66\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,66)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 67\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,67)\r\nraw_flags |\r\n{HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n-[ RECORD 68\r\n]-+----------------------------------------------------------------------------------------\r\nt_ctid | (179036,68)\r\nraw_flags | {HEAP_HASNULL,HEAP_HASVARWIDTH,HEAP_XMAX_INVALID}\r\ncombined_flags | {}\r\n\r\nregards\r\n\r\nprd=# select version();\r\n-[ RECORD 1\r\n]----------------------------------------------------------------------------------------------------\r\nversion | PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC)\r\n8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit\r\n\r\nPavel\r\n", "msg_date": "Fri, 17 May 2024 15:12:31 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" },
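A note for readers lining up the two dumps above: the only difference between the t_infomask values reported earlier for tuple (179036,68) - 2819 on the primary versus 2051 on the standby - is the pair of bits that together form HEAP_XMIN_FROZEN. A minimal sketch (not part of the original report) showing that arithmetic in plain SQL, with the bit constants taken from src/include/access/htup_details.h:

-- Hypothetical decoding of the two infomask values quoted above.
-- HEAP_XMIN_COMMITTED = 0x0100 and HEAP_XMIN_INVALID = 0x0200; both bits set
-- at once are reported as HEAP_XMIN_FROZEN by heap_tuple_infomask_flags().
SELECT to_hex(2819) AS primary_infomask,    -- 'b03'
       to_hex(2051) AS standby_infomask,    -- '803'
       to_hex(2819 # 2051) AS missing_bits; -- '300' = XMIN_COMMITTED | XMIN_INVALID

In other words the primary froze the tuple, while the standby's copies of tuples 30-68 on this page never received the freeze bits; tuple 68 additionally lacks HEAP_XMIN_COMMITTED, so reading it has to consult pg_xact, which is where the "could not access status of transaction" error comes from.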
{ "msg_contents": "On Fri, May 17, 2024 at 9:13 AM Pavel Stehule <[email protected]> wrote:\n> after migration on PostgreSQL 16 I seen 3x times (about every week) broken tables on replica nodes. The query fails with error\n>\n> ERROR:  could not access status of transaction 1442871302\n> DETAIL:  Could not open file \"pg_xact/0560\": No such file or directory\n\nYou've shown an inconsistency between the primary and standby with\nrespect to the heap tuple infomask bits related to freezing. It looks\nlike a FREEZE WAL record from the primary was never replayed on the\nstandby.\n\nIt's natural for me to wonder if my Postgres 16 work on page-level\nfreezing might be a factor here. If that really was true, then it\nwould be necessary to explain why the primary and standby are\ninconsistent (no reason to suspect a problem on the primary here).\nIt'd have to be the kind of issue that could be detected mechanically\nusing wal_consistency_checking, but wasn't detected that way before\nnow -- that seems unlikely.\n\nIt's worth considering if the more aggressive behavior around\nrelfrozenxid advancement (in 15) and freezing (in 16) has increased\nthe likelihood of problems like these in setups that were already\nfaulty, in whatever way. The standby database is indeed corrupt, but\neven on 16 it's fairly isolated corruption in practical terms. The\nfull extent of the problem is clear once amcheck is run, but only one\ntuple can actually cause the system to error due to the influence of\nhint bits (for better or worse, hint bits mask the problem quite well,\neven on 16).\n\n-- \nPeter Geoghegan\n", "msg_date": "Fri, 17 May 2024 12:02:25 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" },
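One way to chase the "FREEZE record was never replayed" theory with numbers already quoted in this thread: the standby's copy of the page carries LSN A56C/63979DA0 while the primary's carries A576/810F4CE0, so the freeze of block 179036 should have been WAL-logged somewhere in that range. A hedged sketch using pg_walinspect (PostgreSQL 15+), run on the primary; it only works while that WAL range is still present there (otherwise the same search could be done against archived segments with pg_waldump):

CREATE EXTENSION IF NOT EXISTS pg_walinspect;

-- Look for Heap2/FREEZE_PAGE records touching block 179036 between the
-- standby's page LSN and the primary's page LSN.
SELECT start_lsn, xid, record_type, block_ref
  FROM pg_get_wal_records_info('A56C/63979DA0', 'A576/810F4CE0')
 WHERE resource_manager = 'Heap2'
   AND record_type = 'FREEZE_PAGE'
   AND block_ref LIKE '%blk 179036%';

If such a record exists at an LSN the standby has already replayed (compare pg_last_wal_replay_lsn() on the standby), the suspicion shifts from "the record was never written" to "the standby lost the effect of replaying it".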
If that really was true, then it\n> would be necessary to explain why the primary and standby are\n> inconsistent (no reason to suspect a problem on the primary here).\n> It'd have to be the kind of issue that could be detected mechanically\n> using wal_consistency_checking, but wasn't detected that way before\n> now -- that seems unlikely.\n>\n> It's worth considering if the more aggressive behavior around\n> relfrozenxid advancement (in 15) and freezing (in 16) has increased\n> the likelihood of problems like these in setups that were already\n> faulty, in whatever way. The standby database is indeed corrupt, but\n> even on 16 it's fairly isolated corruption in practical terms. The\n> full extent of the problem is clear once amcheck is run, but only one\n> tuple can actually cause the system to error due to the influence of\n> hint bits (for better or worse, hint bits mask the problem quite well,\n> even on 16).\n>\n> --\n> Peter Geoghegan\n>\n\npá 17. 5. 2024 v 18:02 odesílatel Peter Geoghegan <[email protected]> napsal:On Fri, May 17, 2024 at 9:13 AM Pavel Stehule <[email protected]> wrote:\n> after migration on PostgreSQL 16 I seen 3x times (about every week) broken tables on replica nodes. The query fails with error\n>\n> ERROR:  could not access status of transaction 1442871302\n> DETAIL:  Could not open file \"pg_xact/0560\": No such file or directory\n\nYou've shown an inconsistency between the primary and standby with\nrespect to the heap tuple infomask bits related to freezing. It looks\nlike a FREEZE WAL record from the primary was never replayed on the\nstandby.It think is possible so broken tuples was created before upgrade from Postgres 15 to Postgres 16 - not too far before, so this bug can be side effect of upgrade \n\nIt's natural for me to wonder if my Postgres 16 work on page-level\nfreezing might be a factor here. If that really was true, then it\nwould be necessary to explain why the primary and standby are\ninconsistent (no reason to suspect a problem on the primary here).\nIt'd have to be the kind of issue that could be detected mechanically\nusing wal_consistency_checking, but wasn't detected that way before\nnow -- that seems unlikely.\n\nIt's worth considering if the more aggressive behavior around\nrelfrozenxid advancement (in 15) and freezing (in 16) has increased\nthe likelihood of problems like these in setups that were already\nfaulty, in whatever way. The standby database is indeed corrupt, but\neven on 16 it's fairly isolated corruption in practical terms. The\nfull extent of the problem is clear once amcheck is run, but only one\ntuple can actually cause the system to error due to the influence of\nhint bits (for better or worse, hint bits mask the problem quite well,\neven on 16).\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 17 May 2024 19:18:17 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" }, { "msg_contents": "On Fri, May 17, 2024 at 1:18 PM Pavel Stehule <[email protected]> wrote:\n> pá 17. 5. 2024 v 18:02 odesílatel Peter Geoghegan <[email protected]> napsal:\n>> You've shown an inconsistency between the primary and standby with\n>> respect to the heap tuple infomask bits related to freezing. 
It looks\n>> like a FREEZE WAL record from the primary was never replayed on the\n>> standby.\n>\n>\n> It think is possible so broken tuples was created before upgrade from Postgres 15 to Postgres 16 - not too far before, so this bug can be side effect of upgrade\n\nI half suspected something like that myself. Maybe the problem\nhappened *during* the upgrade, even.\n\nThere have been historical bugs affecting pg_upgrade and relfrozenxid.\nCommit 74cf7d46 is one good example from only a few years ago.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 May 2024 13:25:21 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" }, { "msg_contents": "Hi,\n\nOn 2024-05-17 15:12:31 +0200, Pavel Stehule wrote:\n> after migration on PostgreSQL 16 I seen 3x times (about every week) broken\n> tables on replica nodes. The query fails with error\n\nMigrating from what version?\n\n\nYou're saying that the data is correctly accessible on primaries, but broken\non standbys? Is there any difference in how the page looks like on the primary\nvs standby?\n\n\n> ERROR: could not access status of transaction 1442871302\n> DETAIL: Could not open file \"pg_xact/0560\": No such file or directory\n>\n> verify_heapam reports\n>\n> ^[[Aprd=# select * from verify_heapam('account_login_history') where blkno\n> = 179036;\n> blkno | offnum | attnum | msg\n>\n> --------+--------+--------+-------------------------------------------------------------------\n> 179036 | 30 | | xmin 1393743382 precedes oldest valid\n> transaction ID 3:1687012112\n\nSo that's not just a narrow race...\n\n\n> master\n>\n> (2024-05-17 14:36:57) prd=# SELECT * FROM\n> page_header(get_raw_page('account_login_history', 179036));\n> lsn │ checksum │ flags │ lower │ upper │ special │ pagesize │\n> version │ prune_xid\n> ───────────────┼──────────┼───────┼───────┼───────┼─────────┼──────────┼─────────┼───────────\n> A576/810F4CE0 │ 0 │ 4 │ 296 │ 296 │ 8192 │ 8192 │\n> 4 │ 0\n> (1 row)\n>\n>\n> replica\n> prd_aukro=# SELECT * FROM page_header(get_raw_page('account_login_history',\n> 179036));\n> lsn | checksum | flags | lower | upper | special | pagesize |\n> version | prune_xid\n> ---------------+----------+-------+-------+-------+---------+----------+---------+-----------\n> A56C/63979DA0 | 0 | 0 | 296 | 296 | 8192 | 8192 |\n> 4 | 0\n> (1 row)\n\nIs the replica behind the primary? Or did we somehow end up with diverging\ndata? The page LSNs differ by about 40GB...\n\nIs there evidence of failed truncations of the relation in the log? From\nautovacuum?\n\nDoes the data in the readable versions of the tuples on that page actually\nlook valid? Is it possibly duplicated data?\n\n\nI'm basically wondering whether it's possible that we errored out during\ntruncation (e.g. due to a file permission issue or such). 
Due to some\nbrokenness in RelationTruncate() that can lead to data divergence between\nprimary and standby and to old tuples re-appearing on either.\n\n\nAnother question: Do you use pg_repack or such?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 May 2024 12:50:10 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16\n (3x times last month)" }, { "msg_contents": "On Fri, May 17, 2024 at 3:50 PM Andres Freund <[email protected]> wrote:\n> You're saying that the data is correctly accessible on primaries, but broken\n> on standbys? Is there any difference in how the page looks like on the primary\n> vs standby?\n\nThere clearly is. The relevant infomask bits are different. I didn't\nexamine it much closer than that, though.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 17 May 2024 16:03:09 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" }, { "msg_contents": "pá 17. 5. 2024 v 21:50 odesílatel Andres Freund <[email protected]> napsal:\n\n> Hi,\n>\n> On 2024-05-17 15:12:31 +0200, Pavel Stehule wrote:\n> > after migration on PostgreSQL 16 I seen 3x times (about every week)\n> broken\n> > tables on replica nodes. The query fails with error\n>\n> Migrating from what version?\n>\n\nI think 14, but it should be verified tomorrow\n\n>\n>\n> You're saying that the data is correctly accessible on primaries, but\n> broken\n> on standbys? Is there any difference in how the page looks like on the\n> primary\n> vs standby?\n>\n\nI saved one page from master and standby. Can I send it privately? There\nare some private data (although not too sensitive)\n\n\n>\n>\n> > ERROR: could not access status of transaction 1442871302\n> > DETAIL: Could not open file \"pg_xact/0560\": No such file or directory\n> >\n> > verify_heapam reports\n> >\n> > ^[[Aprd=# select * from verify_heapam('account_login_history') where\n> blkno\n> > = 179036;\n> > blkno | offnum | attnum | msg\n> >\n> >\n> --------+--------+--------+-------------------------------------------------------------------\n> > 179036 | 30 | | xmin 1393743382 precedes oldest valid\n> > transaction ID 3:1687012112\n>\n> So that's not just a narrow race...\n>\n>\n> > master\n> >\n> > (2024-05-17 14:36:57) prd=# SELECT * FROM\n> > page_header(get_raw_page('account_login_history', 179036));\n> > lsn │ checksum │ flags │ lower │ upper │ special │ pagesize │\n> > version │ prune_xid\n> >\n> ───────────────┼──────────┼───────┼───────┼───────┼─────────┼──────────┼─────────┼───────────\n> > A576/810F4CE0 │ 0 │ 4 │ 296 │ 296 │ 8192 │ 8192 │\n> > 4 │ 0\n> > (1 row)\n> >\n> >\n> > replica\n> > prd_aukro=# SELECT * FROM\n> page_header(get_raw_page('account_login_history',\n> > 179036));\n> > lsn | checksum | flags | lower | upper | special | pagesize |\n> > version | prune_xid\n> >\n> ---------------+----------+-------+-------+-------+---------+----------+---------+-----------\n> > A56C/63979DA0 | 0 | 0 | 296 | 296 | 8192 | 8192 |\n> > 4 | 0\n> > (1 row)\n>\n> Is the replica behind the primary? Or did we somehow end up with diverging\n> data? The page LSNs differ by about 40GB...\n>\n> Is there evidence of failed truncations of the relation in the log? From\n> autovacuum?\n>\n\nno I did not see these bugs,\n\n>\n> Does the data in the readable versions of the tuples on that page actually\n> look valid? 
Is it possibly duplicated data?\n>\n\nlooks well, I didn't see any strange content\n\n>\n>\n> I'm basically wondering whether it's possible that we errored out during\n> truncation (e.g. due to a file permission issue or such). Due to some\n> brokenness in RelationTruncate() that can lead to data divergence between\n> primary and standby and to old tuples re-appearing on either.\n>\n>\n> Another question: Do you use pg_repack or such?\n>\n\npg_repack was used 2 months before migration\n\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\npá 17. 5. 2024 v 21:50 odesílatel Andres Freund <[email protected]> napsal:Hi,\n\nOn 2024-05-17 15:12:31 +0200, Pavel Stehule wrote:\n> after migration on PostgreSQL 16 I seen 3x times (about every week) broken\n> tables on replica nodes. The query fails with error\n\nMigrating from what version?I think 14, but it should be verified tomorrow  \n\n\nYou're saying that the data is correctly accessible on primaries, but broken\non standbys? Is there any difference in how the page looks like on the primary\nvs standby?I saved one page from master and standby. Can I send it privately? There are some private data (although not too sensitive) \n\n\n> ERROR:  could not access status of transaction 1442871302\n> DETAIL:  Could not open file \"pg_xact/0560\": No such file or directory\n>\n> verify_heapam reports\n>\n> ^[[Aprd=# select * from verify_heapam('account_login_history') where blkno\n> = 179036;\n>  blkno  | offnum | attnum |                                msg\n>\n> --------+--------+--------+-------------------------------------------------------------------\n>  179036 |     30 |        | xmin 1393743382 precedes oldest valid\n> transaction ID 3:1687012112\n\nSo that's not just a narrow race...\n\n\n> master\n>\n> (2024-05-17 14:36:57) prd=# SELECT * FROM\n> page_header(get_raw_page('account_login_history', 179036));\n>       lsn      │ checksum │ flags │ lower │ upper │ special │ pagesize │\n> version │ prune_xid\n> ───────────────┼──────────┼───────┼───────┼───────┼─────────┼──────────┼─────────┼───────────\n>  A576/810F4CE0 │        0 │     4 │   296 │   296 │    8192 │     8192 │\n>     4 │         0\n> (1 row)\n>\n>\n> replica\n> prd_aukro=# SELECT * FROM page_header(get_raw_page('account_login_history',\n> 179036));\n>       lsn      | checksum | flags | lower | upper | special | pagesize |\n> version | prune_xid\n> ---------------+----------+-------+-------+-------+---------+----------+---------+-----------\n>  A56C/63979DA0 |        0 |     0 |   296 |   296 |    8192 |     8192 |\n>     4 |         0\n> (1 row)\n\nIs the replica behind the primary? Or did we somehow end up with diverging\ndata? The page LSNs differ by about 40GB...\n\nIs there evidence of failed truncations of the relation in the log? From\nautovacuum?no I did not see these bugs, \n\nDoes the data in the readable versions of the tuples on that page actually\nlook valid? Is it possibly duplicated data?looks well, I didn't see any strange content \n\n\nI'm basically wondering whether it's possible that we errored out during\ntruncation (e.g. due to a file permission issue or such). 
Due to some\nbrokenness in RelationTruncate() that can lead to data divergence between\nprimary and standby and to old tuples re-appearing on either.\n\n\nAnother question: Do you use pg_repack or such?pg_repack was used 2 months before migration  \n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 17 May 2024 22:05:15 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" }, { "msg_contents": "pá 17. 5. 2024 v 22:05 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n>\n>\n> pá 17. 5. 2024 v 21:50 odesílatel Andres Freund <[email protected]>\n> napsal:\n>\n>> Hi,\n>>\n>> On 2024-05-17 15:12:31 +0200, Pavel Stehule wrote:\n>> > after migration on PostgreSQL 16 I seen 3x times (about every week)\n>> broken\n>> > tables on replica nodes. The query fails with error\n>>\n>> Migrating from what version?\n>>\n>\n> I think 14, but it should be verified tomorrow\n>\n\nupgrade was from 15.2\n\n\n\n>\n>>\n>> You're saying that the data is correctly accessible on primaries, but\n>> broken\n>> on standbys? Is there any difference in how the page looks like on the\n>> primary\n>> vs standby?\n>>\n>\n> I saved one page from master and standby. Can I send it privately? There\n> are some private data (although not too sensitive)\n>\n>\n>>\n>>\n>> > ERROR: could not access status of transaction 1442871302\n>> > DETAIL: Could not open file \"pg_xact/0560\": No such file or directory\n>> >\n>> > verify_heapam reports\n>> >\n>> > ^[[Aprd=# select * from verify_heapam('account_login_history') where\n>> blkno\n>> > = 179036;\n>> > blkno | offnum | attnum | msg\n>> >\n>> >\n>> --------+--------+--------+-------------------------------------------------------------------\n>> > 179036 | 30 | | xmin 1393743382 precedes oldest valid\n>> > transaction ID 3:1687012112\n>>\n>> So that's not just a narrow race...\n>>\n>>\n>> > master\n>> >\n>> > (2024-05-17 14:36:57) prd=# SELECT * FROM\n>> > page_header(get_raw_page('account_login_history', 179036));\n>> > lsn │ checksum │ flags │ lower │ upper │ special │ pagesize │\n>> > version │ prune_xid\n>> >\n>> ───────────────┼──────────┼───────┼───────┼───────┼─────────┼──────────┼─────────┼───────────\n>> > A576/810F4CE0 │ 0 │ 4 │ 296 │ 296 │ 8192 │ 8192 │\n>> > 4 │ 0\n>> > (1 row)\n>> >\n>> >\n>> > replica\n>> > prd_aukro=# SELECT * FROM\n>> page_header(get_raw_page('account_login_history',\n>> > 179036));\n>> > lsn | checksum | flags | lower | upper | special | pagesize |\n>> > version | prune_xid\n>> >\n>> ---------------+----------+-------+-------+-------+---------+----------+---------+-----------\n>> > A56C/63979DA0 | 0 | 0 | 296 | 296 | 8192 | 8192 |\n>> > 4 | 0\n>> > (1 row)\n>>\n>> Is the replica behind the primary? Or did we somehow end up with diverging\n>> data? The page LSNs differ by about 40GB...\n>>\n>> Is there evidence of failed truncations of the relation in the log? From\n>> autovacuum?\n>>\n>\n> no I did not see these bugs,\n>\n>>\n>> Does the data in the readable versions of the tuples on that page actually\n>> look valid? Is it possibly duplicated data?\n>>\n>\n> looks well, I didn't see any strange content\n>\n>>\n>>\n>> I'm basically wondering whether it's possible that we errored out during\n>> truncation (e.g. due to a file permission issue or such). 
Due to some\n>> brokenness in RelationTruncate() that can lead to data divergence between\n>> primary and standby and to old tuples re-appearing on either.\n>>\n>>\n>> Another question: Do you use pg_repack or such?\n>>\n>\n> pg_repack was used 2 months before migration\n>\n>\n>\n>>\n>> Greetings,\n>>\n>> Andres Freund\n>>\n>\n\npá 17. 5. 2024 v 22:05 odesílatel Pavel Stehule <[email protected]> napsal:pá 17. 5. 2024 v 21:50 odesílatel Andres Freund <[email protected]> napsal:Hi,\n\nOn 2024-05-17 15:12:31 +0200, Pavel Stehule wrote:\n> after migration on PostgreSQL 16 I seen 3x times (about every week) broken\n> tables on replica nodes. The query fails with error\n\nMigrating from what version?I think 14, but it should be verified tomorrow  upgrade was from 15.2 \n\n\nYou're saying that the data is correctly accessible on primaries, but broken\non standbys? Is there any difference in how the page looks like on the primary\nvs standby?I saved one page from master and standby. Can I send it privately? There are some private data (although not too sensitive) \n\n\n> ERROR:  could not access status of transaction 1442871302\n> DETAIL:  Could not open file \"pg_xact/0560\": No such file or directory\n>\n> verify_heapam reports\n>\n> ^[[Aprd=# select * from verify_heapam('account_login_history') where blkno\n> = 179036;\n>  blkno  | offnum | attnum |                                msg\n>\n> --------+--------+--------+-------------------------------------------------------------------\n>  179036 |     30 |        | xmin 1393743382 precedes oldest valid\n> transaction ID 3:1687012112\n\nSo that's not just a narrow race...\n\n\n> master\n>\n> (2024-05-17 14:36:57) prd=# SELECT * FROM\n> page_header(get_raw_page('account_login_history', 179036));\n>       lsn      │ checksum │ flags │ lower │ upper │ special │ pagesize │\n> version │ prune_xid\n> ───────────────┼──────────┼───────┼───────┼───────┼─────────┼──────────┼─────────┼───────────\n>  A576/810F4CE0 │        0 │     4 │   296 │   296 │    8192 │     8192 │\n>     4 │         0\n> (1 row)\n>\n>\n> replica\n> prd_aukro=# SELECT * FROM page_header(get_raw_page('account_login_history',\n> 179036));\n>       lsn      | checksum | flags | lower | upper | special | pagesize |\n> version | prune_xid\n> ---------------+----------+-------+-------+-------+---------+----------+---------+-----------\n>  A56C/63979DA0 |        0 |     0 |   296 |   296 |    8192 |     8192 |\n>     4 |         0\n> (1 row)\n\nIs the replica behind the primary? Or did we somehow end up with diverging\ndata? The page LSNs differ by about 40GB...\n\nIs there evidence of failed truncations of the relation in the log? From\nautovacuum?no I did not see these bugs, \n\nDoes the data in the readable versions of the tuples on that page actually\nlook valid? Is it possibly duplicated data?looks well, I didn't see any strange content \n\n\nI'm basically wondering whether it's possible that we errored out during\ntruncation (e.g. due to a file permission issue or such). 
Due to some\nbrokenness in RelationTruncate() that can lead to data divergence between\nprimary and standby and to old tuples re-appearing on either.\n\n\nAnother question: Do you use pg_repack or such?pg_repack was used 2 months before migration  \n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 17 May 2024 22:29:32 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" }, { "msg_contents": "Hi\n\n\n\n>\n> Another question: Do you use pg_repack or such?\n>\n\npg_repack was used for some tables, but I found broken tables, where\npg_repack was not used.\n\nRegards\n\nPavel\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nHi\n\n\nAnother question: Do you use pg_repack or such?pg_repack was used for some tables, but I found broken tables, where pg_repack was not used.RegardsPavel\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 21 May 2024 07:48:27 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16 (3x\n times last month)" }, { "msg_contents": "On 2024-05-17 16:03:09 -0400, Peter Geoghegan wrote:\n> On Fri, May 17, 2024 at 3:50 PM Andres Freund <[email protected]> wrote:\n> > You're saying that the data is correctly accessible on primaries, but broken\n> > on standbys? Is there any difference in how the page looks like on the primary\n> > vs standby?\n> \n> There clearly is. The relevant infomask bits are different. I didn't\n> examine it much closer than that, though.\n\nThat could also just be because of a different replay position, hence my\nquestion about that somewhere else in the email...\n\n\n", "msg_date": "Tue, 21 May 2024 08:28:39 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: broken tables on hot standby after migration on PostgreSQL 16\n (3x times last month)" } ]
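A short sketch of the diagnostics used in the thread above, collected in one place so the same queries can be run on the primary and on the standby and their output compared side by side. This is an illustration added here, not a message from the thread: the relation name and block number are the ones reported above, and it assumes the pageinspect and amcheck extensions are already installed (create them on the primary, since a hot standby is read-only; heap_tuple_infomask_flags needs pageinspect 1.8 or later).

-- Page header: compare the page LSN and flags reported on each node.
SELECT * FROM page_header(get_raw_page('account_login_history', 179036));

-- Per-tuple infomask bits: the freeze-related flags are what differed
-- between the primary and the broken standby in the report above.
SELECT t_ctid, raw_flags, combined_flags
FROM heap_page_items(get_raw_page('account_login_history', 179036)),
     LATERAL heap_tuple_infomask_flags(t_infomask, t_infomask2)
WHERE t_infomask IS NOT NULL OR t_infomask2 IS NOT NULL;

-- Logical check of the relation; on the broken standby this reports
-- "xmin ... precedes oldest valid transaction ID" for the affected tuples.
SELECT * FROM verify_heapam('account_login_history') WHERE blkno = 179036;
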
[ { "msg_contents": "(moving to a new thread)\n\nOn Thu, May 16, 2024 at 09:16:46PM -0500, Nathan Bossart wrote:\n> On Thu, May 16, 2024 at 04:37:10PM +0000, Imseih (AWS), Sami wrote:\n>> Also, Not sure if I am mistaken here, but the \"+ 5\" in the existing docs\n>> seems wrong.\n>> \n>> If it refers to NUM_AUXILIARY_PROCS defined in \n>> include/storage/proc.h, it should a \"6\"\n>> \n>> #define NUM_AUXILIARY_PROCS 6\n>> \n>> This is not a consequence of this patch, and can be dealt with\n>> In a separate thread if my understanding is correct.\n> \n> Ha, I think it should actually be \"+ 7\"! The value is calculated as\n> \n> \tMaxConnections + autovacuum_max_workers + 1 + max_worker_processes + max_wal_senders + 6\n> \n> Looking at the history, this documentation tends to be wrong quite often.\n> In v9.2, the checkpointer was introduced, and these formulas were not\n> updated. In v9.3, background worker processes were introduced, and the\n> formulas were still not updated. Finally, in v9.6, it was fixed in commit\n> 597f7e3. Then, in v14, the archiver process was made an auxiliary process\n> (commit d75288f), making the formulas out-of-date again. And in v17, the\n> WAL summarizer was added.\n> \n> On top of this, IIUC you actually need even more semaphores if your system\n> doesn't support atomics, and from a quick skim this doesn't seem to be\n> covered in this documentation.\n\nA couple of other problems I noticed:\n\n* max_wal_senders is missing from this sentence:\n\n When using System V semaphores,\n <productname>PostgreSQL</productname> uses one semaphore per allowed connection\n (<xref linkend=\"guc-max-connections\"/>), allowed autovacuum worker process\n (<xref linkend=\"guc-autovacuum-max-workers\"/>) and allowed background\n process (<xref linkend=\"guc-max-worker-processes\"/>), in sets of 16.\n\n* AFAICT the discussion about the formulas in the paragraphs following the\n table doesn't explain the reason for the constant.\n\n* IMHO the following sentence is difficult to decipher, and I can't tell if\n it actually matches the formula in the table:\n\n The maximum number of semaphores in the system\n is set by <varname>SEMMNS</varname>, which consequently must be at least\n as high as <varname>max_connections</varname> plus\n <varname>autovacuum_max_workers</varname> plus <varname>max_wal_senders</varname>,\n plus <varname>max_worker_processes</varname>, plus one extra for each 16\n allowed connections plus workers (see the formula in <xref\n linkend=\"sysvipc-parameters\"/>).\n\nAt a bare minimum, we should probably fix the obvious problems, but I\nwonder if we could simplify this section a bit, too. If the exact values\nare important, maybe we could introduce more GUCs like\nshared_memory_size_in_huge_pages that can be consulted (instead of\nrequiring users to break out their calculators).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 17 May 2024 11:44:52 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> [ many, many problems in documented formulas ]\n\n> At a bare minimum, we should probably fix the obvious problems, but I\n> wonder if we could simplify this section a bit, too.\n\nYup. \"The definition of insanity is doing the same thing over and\nover and expecting different results.\" Time to give up on documenting\nthese things in such detail. 
Anybody who really wants to know can\nlook at the source code.\n\n> If the exact values\n> are important, maybe we could introduce more GUCs like\n> shared_memory_size_in_huge_pages that can be consulted (instead of\n> requiring users to break out their calculators).\n\nI don't especially like shared_memory_size_in_huge_pages, and I don't\nwant to introduce more of those. GUCs are not the right way to expose\nvalues that you can't actually set. (Yeah, I'm guilty of some of the\nexisting ones like that, but it's still not a good thing.) Maybe it's\ntime to introduce a system view for such things? It could be really\nsimple, with name and value, or we could try to steal some additional\nideas such as units from pg_settings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 13:09:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Fri, May 17, 2024 at 01:09:55PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> [ many, many problems in documented formulas ]\n> \n>> At a bare minimum, we should probably fix the obvious problems, but I\n>> wonder if we could simplify this section a bit, too.\n> \n> Yup. \"The definition of insanity is doing the same thing over and\n> over and expecting different results.\" Time to give up on documenting\n> these things in such detail. Anybody who really wants to know can\n> look at the source code.\n\nCool. I'll at least fix the back-branches as-is, but I'll see about\nrevamping this stuff for v18.\n\n>> If the exact values\n>> are important, maybe we could introduce more GUCs like\n>> shared_memory_size_in_huge_pages that can be consulted (instead of\n>> requiring users to break out their calculators).\n> \n> I don't especially like shared_memory_size_in_huge_pages, and I don't\n> want to introduce more of those. GUCs are not the right way to expose\n> values that you can't actually set. (Yeah, I'm guilty of some of the\n> existing ones like that, but it's still not a good thing.) Maybe it's\n> time to introduce a system view for such things? It could be really\n> simple, with name and value, or we could try to steal some additional\n> ideas such as units from pg_settings.\n\nThe advantage of the GUC is that its value could be seen before trying to\nactually start the server. I don't dispute that it's not the right way to\nsurface this information, though.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 17 May 2024 12:48:37 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": ">>> If the exact values\r\n>>> are important, maybe we could introduce more GUCs like\r\n>>> shared_memory_size_in_huge_pages that can be consulted (instead of\r\n>>> requiring users to break out their calculators).\r\n>>\r\n>> I don't especially like shared_memory_size_in_huge_pages, and I don't\r\n>> want to introduce more of those. GUCs are not the right way to expose\r\n>> values that you can't actually set. (Yeah, I'm guilty of some of the\r\n>> existing ones like that, but it's still not a good thing.) Maybe it's\r\n>> time to introduce a system view for such things? 
It could be really\r\n>> simple, with name and value, or we could try to steal some additional\r\n>> ideas such as units from pg_settings.\r\n\r\nI always found some of the preset GUCs [1] to be useful for writing SQLs used by\r\nDBAs, particularly block_size, wal_block_size, server_version and server_version_num.\r\n\r\n> The advantage of the GUC is that its value could be seen before trying to\r\n> actually start the server. \r\n\r\nOnly if they have a sample in postgresql.conf file, right? \r\nA GUC like shared_memory_size_in_huge_pages will not be.\r\n\r\n\r\n[1] https://www.postgresql.org/docs/current/runtime-config-preset.html\r\n\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n", "msg_date": "Fri, 17 May 2024 18:30:08 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Fri, May 17, 2024 at 12:48:37PM -0500, Nathan Bossart wrote:\n> On Fri, May 17, 2024 at 01:09:55PM -0400, Tom Lane wrote:\n>> Nathan Bossart <[email protected]> writes:\n>>> At a bare minimum, we should probably fix the obvious problems, but I\n>>> wonder if we could simplify this section a bit, too.\n>> \n>> Yup. \"The definition of insanity is doing the same thing over and\n>> over and expecting different results.\" Time to give up on documenting\n>> these things in such detail. Anybody who really wants to know can\n>> look at the source code.\n> \n> Cool. I'll at least fix the back-branches as-is, but I'll see about\n> revamping this stuff for v18.\n\nAttached is probably the absolute least we should do for the back-branches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 17 May 2024 14:21:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Fri, May 17, 2024 at 06:30:08PM +0000, Imseih (AWS), Sami wrote:\n>> The advantage of the GUC is that its value could be seen before trying to\n>> actually start the server. \n> \n> Only if they have a sample in postgresql.conf file, right? \n> A GUC like shared_memory_size_in_huge_pages will not be.\n\nshared_memory_size_in_huge_pages is computed at runtime and can be viewed\nwith \"postgres -C\" before actually trying to start the server [0].\n\n[0] https://www.postgresql.org/docs/devel/kernel-resources.html#LINUX-HUGE-PAGES\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 17 May 2024 14:26:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "Hi,\n\nOn 2024-05-17 18:30:08 +0000, Imseih (AWS), Sami wrote:\n> > The advantage of the GUC is that its value could be seen before trying to\n> > actually start the server.\n>\n> Only if they have a sample in postgresql.conf file, right?\n> A GUC like shared_memory_size_in_huge_pages will not be.\n\nYou can query gucs with -C. E.g.\n\npostgres -D pgdev-dev -c shared_buffers=16MB -C shared_memory_size_in_huge_pages\n13\npostgres -D pgdev-dev -c shared_buffers=16MB -c huge_page_size=1GB -C shared_memory_size_in_huge_pages\n1\n\nWhich is very useful to be able to actually configure that number of huge\npages. 
I don't think a system view or such would not help here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 17 May 2024 12:28:20 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "> postgres -D pgdev-dev -c shared_buffers=16MB -C shared_memory_size_in_huge_pages\r\n> 13\r\n> postgres -D pgdev-dev -c shared_buffers=16MB -c huge_page_size=1GB -C shared_memory_size_in_huge_pages\r\n> 1\r\n\r\n\r\n> Which is very useful to be able to actually configure that number of huge\r\n> pages. I don't think a system view or such would not help here.\r\n\r\nOops. Totally missed the -C flag. Thanks for clarifying!\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n", "msg_date": "Fri, 17 May 2024 19:34:06 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Fri, May 17, 2024 at 02:21:23PM -0500, Nathan Bossart wrote:\n> On Fri, May 17, 2024 at 12:48:37PM -0500, Nathan Bossart wrote:\n>> Cool. I'll at least fix the back-branches as-is, but I'll see about\n>> revamping this stuff for v18.\n> \n> Attached is probably the absolute least we should do for the back-branches.\n\nAny concerns with doing something like this [0] for the back-branches? The\nconstant would be 6 instead of 7 on v14 through v16.\n\nI wrote a quick sketch for what a runtime-computed GUC might look like for\nv18. We don't have agreement on this approach, but I figured I'd post\nsomething while we search for a better one.\n\n[0] https://postgr.es/m/attachment/160360/v1-0001-fix-kernel-resources-docs-on-back-branches.patch\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 21 May 2024 14:12:04 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "> Any concerns with doing something like this [0] for the back-branches? The\r\n> constant would be 6 instead of 7 on v14 through v16.\r\n\r\nAs far as backpatching the present inconsistencies in the docs,\r\n[0] looks good to me.\r\n\r\n[0] https://postgr.es/m/attachment/160360/v1-0001-fix-kernel-resources-docs-on-back-branches.patch <https://postgr.es/m/attachment/160360/v1-0001-fix-kernel-resources-docs-on-back-branches.patch>\r\n\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n", "msg_date": "Tue, 21 May 2024 23:15:14 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Tue, May 21, 2024 at 11:15:14PM +0000, Imseih (AWS), Sami wrote:\n>> Any concerns with doing something like this [0] for the back-branches? 
The\n>> constant would be 6 instead of 7 on v14 through v16.\n> \n> As far as backpatching the present inconsistencies in the docs,\n> [0] looks good to me.\n\nCommitted.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 3 Jun 2024 12:18:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Mon, Jun 03, 2024 at 12:18:21PM -0500, Nathan Bossart wrote:\n> On Tue, May 21, 2024 at 11:15:14PM +0000, Imseih (AWS), Sami wrote:\n>> As far as backpatching the present inconsistencies in the docs,\n>> [0] looks good to me.\n> \n> Committed.\n\nOf course, as soon as I committed this, I noticed another missing reference\nto max_wal_senders in the paragraph about POSIX semaphores. I plan to\ncommit/back-patch the attached patch within the next couple days.\n\n-- \nnathan", "msg_date": "Mon, 3 Jun 2024 14:04:19 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Mon, Jun 03, 2024 at 02:04:19PM -0500, Nathan Bossart wrote:\n> Of course, as soon as I committed this, I noticed another missing reference\n> to max_wal_senders in the paragraph about POSIX semaphores. I plan to\n> commit/back-patch the attached patch within the next couple days.\n\nCommitted.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 5 Jun 2024 15:39:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "Here is a rebased version of the patch for v18 that adds a runtime-computed\nGUC. As I noted earlier, there still isn't a consensus on this approach.\n\n-- \nnathan", "msg_date": "Thu, 6 Jun 2024 14:21:38 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Thu, Jun 6, 2024 at 3:21 PM Nathan Bossart <[email protected]> wrote:\n> Here is a rebased version of the patch for v18 that adds a runtime-computed\n> GUC. As I noted earlier, there still isn't a consensus on this approach.\n\nI don't really like making this a GUC, but what's the other option?\nIt's reasonable for people to want to ask the server how many\nresources it will need to start, and -C is the only tool we have for\nthat right now. So I feel like this is a fair thing to do.\n\nI do think the name could use some more thought, though.\nsemaphores_required would end up being the same kind of thing as\nshared_memory_size_in_huge_pages, but the names seem randomly\ndifferent. If semaphores_required is right here, why isn't\nshared_memory_required used there? Seems more like we ought to call\nthis semaphores or os_semaphores or num_semaphores or\nnum_os_semaphores or something.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 6 Jun 2024 15:31:53 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Thu, Jun 06, 2024 at 03:31:53PM -0400, Robert Haas wrote:\n> I don't really like making this a GUC, but what's the other option?\n> It's reasonable for people to want to ask the server how many\n> resources it will need to start, and -C is the only tool we have for\n> that right now. 
So I feel like this is a fair thing to do.\n\nYeah, this is how I feel, too.\n\n> I do think the name could use some more thought, though.\n> semaphores_required would end up being the same kind of thing as\n> shared_memory_size_in_huge_pages, but the names seem randomly\n> different. If semaphores_required is right here, why isn't\n> shared_memory_required used there? Seems more like we ought to call\n> this semaphores or os_semaphores or num_semaphores or\n> num_os_semaphores or something.\n\nI'm fine with any of your suggestions. If I _had_ to pick one, I'd\nprobably choose num_os_semaphores because it's the most descriptive.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 6 Jun 2024 14:51:42 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Thu, Jun 06, 2024 at 02:51:42PM -0500, Nathan Bossart wrote:\n> On Thu, Jun 06, 2024 at 03:31:53PM -0400, Robert Haas wrote:\n>> I do think the name could use some more thought, though.\n>> semaphores_required would end up being the same kind of thing as\n>> shared_memory_size_in_huge_pages, but the names seem randomly\n>> different. If semaphores_required is right here, why isn't\n>> shared_memory_required used there? Seems more like we ought to call\n>> this semaphores or os_semaphores or num_semaphores or\n>> num_os_semaphores or something.\n> \n> I'm fine with any of your suggestions. If I _had_ to pick one, I'd\n> probably choose num_os_semaphores because it's the most descriptive.\n\nHere's a new version of the patch with the GUC renamed to\nnum_os_semaphores.\n\n-- \nnathan", "msg_date": "Sun, 9 Jun 2024 14:04:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "On Sun, Jun 09, 2024 at 02:04:17PM -0500, Nathan Bossart wrote:\n> Here's a new version of the patch with the GUC renamed to\n> num_os_semaphores.\n\nThe only thing stopping me from committing this right now is Tom's upthread\nobjection about adding more GUCs that just expose values that you can't\nactually set. If that objection still stands, I'll withdraw this patch\n(and maybe try introducing a new way to surface this information someday).\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 17 Jul 2024 11:29:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> The only thing stopping me from committing this right now is Tom's upthread\n> objection about adding more GUCs that just expose values that you can't\n> actually set. If that objection still stands, I'll withdraw this patch\n> (and maybe try introducing a new way to surface this information someday).\n\nIt still feels to me like not a great way to go about it. 
Having\nsaid that, it's not like we don't have any existing examples of\nthe category, so I won't cry hard if I'm outvoted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:16:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 26 Jul 2024 15:32:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problems with \"Shared Memory and Semaphores\" section of docs" } ]
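The thread keeps returning to the formula for the number of semaphores the server needs, and to the fact that the documented trailing constant drifted over the years (5, then 6, then 7). The small sketch below, added here and not part of the thread, evaluates the corrected formula against a running server's own settings; the trailing constant 7 matches v17 as discussed above (v14 through v16 use 6).

SELECT current_setting('max_connections')::int
     + current_setting('autovacuum_max_workers')::int
     + 1                                    -- autovacuum launcher
     + current_setting('max_worker_processes')::int
     + current_setting('max_wal_senders')::int
     + 7                                    -- auxiliary processes on v17
     AS semaphores_needed;

On branches that carry the committed patch, the same number can be read directly, even before the server is started, via the runtime-computed GUC added in the thread, e.g. "postgres -D $PGDATA -C num_os_semaphores".
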
[ { "msg_contents": "Hi,\n\nI've been looking at GiST to see if there could be a good way to do\nparallel builds - and there might be, if the opclass supports sorted\nbuilds, because then we could parallelize the sort.\n\nBut then I noticed we support this mode only for point_ops, because\nthat's the only opclass with sortsupport procedure. Which mostly makes\nsense, because types like geometries, polygons, ... do not have a well\ndefined order.\n\nStill, we have btree_gist, and I don't think there's much reason to not\nsupport sorted builds for those opclasses, where the gist opclass is\ndefined on top of btree ordering semantics.\n\nSo this patch does that, and the difference (compared to builds with\nbuiffering=on) can easily be an order of magnitude - at least that's\nwhat my tests show:\n\n\ntest=# create index on test_int8 using gist (a) with (buffering = on);\nCREATE INDEX\nTime: 578799.450 ms (09:38.799)\n\ntest=# create index on test_int8 using gist (a) with (buffering = auto);\nCREATE INDEX\nTime: 53022.593 ms (00:53.023)\n\n\ntest=# create index on test_uuid using gist (a) with (buffering = on);\nCREATE INDEX\nTime: 39322.799 ms (00:39.323)\n\ntest=# create index on test_uuid using gist (a) with (buffering = auto);\nCREATE INDEX\nTime: 6466.341 ms (00:06.466)\n\n\nThe WIP patch adds enables this for data types with a usable sortsupport\nprocedure, which excludes time, timetz, cash, interval, bit, vbit, bool,\nenum and macaddr8. I assume time, timetz and macaddr8 could be added,\nit's just that I didn't find any existing sortsupport procedure. Same\nfor cash, but IIRC it's mostly deprecated.\n\nOf course, people probably don't use btree_gist with a single column,\nbecause they could just use btree. It's useful for multi-column GiST\nindexes, with data types requiring GiST. And if the other opclasses also\nallow sorted builds, this patch makes that possible. Of course, most\n\"proper GiST opclasses\" don't support that, but why not - it's easy.\n\nFWIW this is also why sorted builds likely are not a very practical way\nto do parallel builds for GiST - it would help only with a small part of\ncases, I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 17 May 2024 21:41:10 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "allow sorted builds for btree_gist" }, { "msg_contents": "On Fri, May 17, 2024 at 12:41 PM Tomas Vondra\n<[email protected]> wrote:\n> I've been looking at GiST to see if there could be a good way to do\n> parallel builds - and there might be, if the opclass supports sorted\n> builds, because then we could parallelize the sort.\n>\n> But then I noticed we support this mode only for point_ops, because\n> that's the only opclass with sortsupport procedure. Which mostly makes\n> sense, because types like geometries, polygons, ... do not have a well\n> defined order.\n\nOh, I'm excited about this for temporal tables. It seems to me that\nranges and multiranges should have a well-defined order (assuming\ntheir base types do), since you can do dictionary-style ordering\n(compare the first value, then the next, then the next, etc.). Is\nthere any reason not to support those? No reason not to commit these\nbtree_gist functions first though. 
If you aren't interested in adding\nGiST sortsupport for ranges & multiranges I'm willing to do it myself;\njust let me know.\n\nDo note that the 1.7 -> 1.8 changes have been reverted though (as part\nof my temporal work), and it looks like your patch is a bit messed up\nfrom that. You'll want to take 1.8 for yourself, and also your 1.9\nupgrade script is trying to add the reverted stratnum functions.\n\nYours,\nPaul\n\n\n", "msg_date": "Fri, 17 May 2024 17:00:47 -0700", "msg_from": "Paul A Jungwirth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow sorted builds for btree_gist" }, { "msg_contents": "\n\n> On 18 May 2024, at 00:41, Tomas Vondra <[email protected]> wrote:\n> \n> if the opclass supports sorted\n> builds, because then we could parallelize the sort.\n\nHi Tomas!\n\nYup, I'd also be glad to see this feature. PostGIS folks are using their geometry (sortsupport was developed for this) with object id (this disables sort build).\n\nIt was committed once [0], but then reverted, vardata opclasses were implemented wrong. Now it's on CF[1], Bernd is actively responding in the thread, but currently patch lacks tests.\n\nThanks for raising this!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9f984ba6d23dc\n[1] https://commitfest.postgresql.org/48/3686/\n\n", "msg_date": "Sat, 18 May 2024 11:51:00 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow sorted builds for btree_gist" }, { "msg_contents": "\n\nOn 5/18/24 08:51, Andrey M. Borodin wrote:\n> \n> \n>> On 18 May 2024, at 00:41, Tomas Vondra\n>> <[email protected]> wrote:\n>> \n>> if the opclass supports sorted builds, because then we could\n>> parallelize the sort.\n> \n> Hi Tomas!\n> \n> Yup, I'd also be glad to see this feature. PostGIS folks are using\n> their geometry (sortsupport was developed for this) with object id\n> (this disables sort build).\n> \n> It was committed once [0], but then reverted, vardata opclasses were\n> implemented wrong. Now it's on CF[1], Bernd is actively responding in\n> the thread, but currently patch lacks tests.\n> \n> Thanks for raising this!\n> \n\nOh, damn! I didn't notice the CF already has a patch doing this, and\nthat it was committed/reverted in 2021. I was completely oblivious to\nthat. Apologies.\n\nLet's continue working on that patch/thread, I'll take a look in the\nnext CF.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 May 2024 12:22:58 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow sorted builds for btree_gist" }, { "msg_contents": "\n\n> On 18 May 2024, at 15:22, Tomas Vondra <[email protected]> wrote:\n> \n> Let's continue working on that patch/thread, I'll take a look in the\n> next CF.\n\nCool! I'd be happy to review the patch before CF when Bernd or Christoph will address current issues.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sat, 18 May 2024 16:38:56 +0500", "msg_from": "\"Andrey M. 
Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow sorted builds for btree_gist" }, { "msg_contents": "On 5/18/24 02:00, Paul A Jungwirth wrote:\n> On Fri, May 17, 2024 at 12:41 PM Tomas Vondra\n> <[email protected]> wrote:\n>> I've been looking at GiST to see if there could be a good way to do\n>> parallel builds - and there might be, if the opclass supports sorted\n>> builds, because then we could parallelize the sort.\n>>\n>> But then I noticed we support this mode only for point_ops, because\n>> that's the only opclass with sortsupport procedure. Which mostly makes\n>> sense, because types like geometries, polygons, ... do not have a well\n>> defined order.\n> \n> Oh, I'm excited about this for temporal tables. It seems to me that\n> ranges and multiranges should have a well-defined order (assuming\n> their base types do), since you can do dictionary-style ordering\n> (compare the first value, then the next, then the next, etc.). Is\n> there any reason not to support those? No reason not to commit these\n> btree_gist functions first though. If you aren't interested in adding\n> GiST sortsupport for ranges & multiranges I'm willing to do it myself;\n> just let me know.\n> \n\nI believe that's pretty much what the existing patch [1] linked by\nAndrey (and apparently running for a number of CFs) does.\n\n[1] https://commitfest.postgresql.org/48/3686/\n\n> Do note that the 1.7 -> 1.8 changes have been reverted though (as part\n> of my temporal work), and it looks like your patch is a bit messed up\n> from that. You'll want to take 1.8 for yourself, and also your 1.9\n> upgrade script is trying to add the reverted stratnum functions.\n> \n\nYeah, I happened to branch from a slightly older master, not noticing\nthis is affected by the revert.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 May 2024 15:30:07 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow sorted builds for btree_gist" }, { "msg_contents": "\n\n> On 17 May 2024, at 21:41, Tomas Vondra <[email protected]> wrote:\n> \n> Hi,\n> \n> I've been looking at GiST to see if there could be a good way to do\n> parallel builds - and there might be, if the opclass supports sorted\n> builds, because then we could parallelize the sort.\n> \n> But then I noticed we support this mode only for point_ops, because\n> that's the only opclass with sortsupport procedure. Which mostly makes\n> sense, because types like geometries, polygons, ... do not have a well\n> defined order.\n> \n> Still, we have btree_gist, and I don't think there's much reason to not\n> support sorted builds for those opclasses, where the gist opclass is\n> defined on top of btree ordering semantics.\n> \n\n\nI wonder if it was possible to add sort support to pg_trgm. Speeding up index build for multicolumn indexes supporting text search would be great.\n\n—\nMichal\n\n\n\n", "msg_date": "Sat, 18 May 2024 20:42:41 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow sorted builds for btree_gist" }, { "msg_contents": "Hi,\n\nAm Samstag, dem 18.05.2024 um 12:22 +0200 schrieb Tomas Vondra:\n> > It was committed once [0], but then reverted, vardata opclasses\n> > were\n> > implemented wrong. 
Now it's on CF[1], Bernd is actively responding\n> > in\n> > the thread, but currently patch lacks tests.\n> > \n> > Thanks for raising this!\n> > \n> \n> Oh, damn! I didn't notice the CF already has a patch doing this, and\n> that it was committed/reverted in 2021. I was completely oblivious to\n> that. Apologies.\n> \n> Let's continue working on that patch/thread, I'll take a look in the\n> next CF.\n\nSorry for the delay, i was on vacation (we had some public holidays\nhere in germany) and i am currently involved in other time consuming\nprojects, so i didn't follow my mails very close lately.\nIf my time permits i'd like to add the remaining missing things to the\npatch, i am looking forward for your review, though!\n\nThanks for bringing this up again.\n\n\tBernd\n\n\n\n", "msg_date": "Mon, 03 Jun 2024 10:53:37 +0200", "msg_from": "Bernd Helmle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow sorted builds for btree_gist" } ]
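The timings quoted at the top of the thread compare a buffered GiST build against a sorted build for btree_gist opclasses. A rough sketch of how such a comparison can be set up follows; it is an illustration added here, not part of the thread. The table name, data distribution, and row count are illustrative only, and the sorted path is taken only with the discussed patch applied, since stock btree_gist ships no sortsupport procedure. Run it under psql with \timing enabled to get numbers comparable to those in the thread.

CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Illustrative test data; the thread does not show how test_int8 was populated.
CREATE TABLE test_int8 AS
SELECT (random() * 1000000000)::int8 AS a
FROM generate_series(1, 10000000);

-- Buffered build, the slower baseline from the thread:
CREATE INDEX ON test_int8 USING gist (a) WITH (buffering = on);

-- Default build; once the opclass has a sortsupport function this becomes a sorted build:
CREATE INDEX ON test_int8 USING gist (a) WITH (buffering = auto);
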
[ { "msg_contents": "Hi!\n\nIn a thread about sorting comparators[0] Andres noted that we have infrastructure to help compiler optimize sorting. PFA attached PoC implementation. I've checked that it indeed works on the benchmark from that thread.\n\npostgres=# CREATE TABLE arrays_to_sort AS\n SELECT array_shuffle(a) arr\n FROM\n (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n generate_series(1, 10);\n\npostgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\nTime: 990.199 ms\npostgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\nTime: 696.156 ms\n\nThe benefit seems to be on the order of magnitude with 30% speedup.\n\nThere's plenty of sorting by TransactionId, BlockNumber, OffsetNumber, Oid etc. But this sorting routines never show up in perf top or something like that.\n\nSeems like in most cases we do not spend much time in sorting. But specialization does not cost us much too, only some CPU cycles of a compiler. I think we can further improve speedup by converting inline comparator to value extractor: more compilers will see what is actually going on. But I have no proofs for this reasoning.\n\nWhat do you think?\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/20240209184014.sobshkcsfjix6u4r%40awork3.anarazel.de#fc23df2cf314bef35095b632380b4a59", "msg_date": "Sat, 18 May 2024 23:52:11 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sort functions with specialized comparators" }, { "msg_contents": "Em sáb., 18 de mai. de 2024 às 15:52, Andrey M. Borodin <\[email protected]> escreveu:\n\n> Hi!\n>\n> In a thread about sorting comparators[0] Andres noted that we have\n> infrastructure to help compiler optimize sorting. PFA attached PoC\n> implementation. I've checked that it indeed works on the benchmark from\n> that thread.\n>\n> postgres=# CREATE TABLE arrays_to_sort AS\n> SELECT array_shuffle(a) arr\n> FROM\n> (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n> generate_series(1, 10);\n>\n> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\n> Time: 990.199 ms\n> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\n> Time: 696.156 ms\n>\n> The benefit seems to be on the order of magnitude with 30% speedup.\n>\n> There's plenty of sorting by TransactionId, BlockNumber, OffsetNumber, Oid\n> etc. But this sorting routines never show up in perf top or something like\n> that.\n>\n> Seems like in most cases we do not spend much time in sorting. But\n> specialization does not cost us much too, only some CPU cycles of a\n> compiler. I think we can further improve speedup by converting inline\n> comparator to value extractor: more compilers will see what is actually\n> going on. But I have no proofs for this reasoning.\n>\n> What do you think?\n>\nMakes sense.\n\nRegarding the patch.\nYou could change the style to:\n\n+sort_int32_asc_cmp(const int32 *a, const int32 *b)\n+sort_int32_desc_cmp(const int32 *a, const int32 *b)\n\nWe must use const in all parameters that can be const.\n\nbest regards,\nRanier Vilela\n\nEm sáb., 18 de mai. de 2024 às 15:52, Andrey M. Borodin <[email protected]> escreveu:Hi!\n\nIn a thread about sorting comparators[0] Andres noted that we have infrastructure to help compiler optimize sorting. PFA attached PoC implementation. 
I've checked that it indeed works on the benchmark from that thread.\n\npostgres=# CREATE TABLE arrays_to_sort AS\n   SELECT array_shuffle(a) arr\n   FROM\n       (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n       generate_series(1, 10);\n\npostgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\nTime: 990.199 ms\npostgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\nTime: 696.156 ms\n\nThe benefit seems to be on the order of magnitude with 30% speedup.\n\nThere's plenty of sorting by TransactionId, BlockNumber, OffsetNumber, Oid etc. But this sorting routines never show up in perf top or something like that.\n\nSeems like in most cases we do not spend much time in sorting. But specialization does not cost us much too, only some CPU cycles of a compiler. I think we can further improve speedup by converting inline comparator to value extractor: more compilers will see what is actually going on. But I have no proofs for this reasoning.\n\nWhat do you think?Makes sense.Regarding the patch.You could change the style to:+sort_int32_asc_cmp(const int32 *a, const int32 *b)+sort_int32_desc_cmp(const int32 *a, const int32 *b)We must use const in all parameters that can be const.best regards,Ranier Vilela", "msg_date": "Sat, 18 May 2024 21:15:11 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "Hello all.\n\nI am interested in the proposed patch and would like to propose some\nadditional changes that would complement it. My changes would introduce\nsimilar optimizations when working with a list of integers or object\nidentifiers. Additionally, my patch includes an extension for benchmarking,\nwhich shows an average speedup of 30-40%.\n\npostgres=# SELECT bench_oid_sort(1000000);\n bench_oid_sort\n\n----------------------------------------------------------------------------------------------------------------\n Time taken by list_sort: 116990848 ns, Time taken by list_oid_sort:\n80446640 ns, Percentage difference: 31.24%\n(1 row)\n\npostgres=# SELECT bench_int_sort(1000000);\n bench_int_sort\n\n----------------------------------------------------------------------------------------------------------------\n Time taken by list_sort: 118168506 ns, Time taken by list_int_sort:\n80523373 ns, Percentage difference: 31.86%\n(1 row)\n\nWhat do you think about these changes?\n\nBest regards, Stepan Neretin.\n\nOn Fri, Jun 7, 2024 at 11:08 PM Andrey M. Borodin <[email protected]>\nwrote:\n\n> Hi!\n>\n> In a thread about sorting comparators[0] Andres noted that we have\n> infrastructure to help compiler optimize sorting. PFA attached PoC\n> implementation. I've checked that it indeed works on the benchmark from\n> that thread.\n>\n> postgres=# CREATE TABLE arrays_to_sort AS\n> SELECT array_shuffle(a) arr\n> FROM\n> (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n> generate_series(1, 10);\n>\n> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\n> Time: 990.199 ms\n> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\n> Time: 696.156 ms\n>\n> The benefit seems to be on the order of magnitude with 30% speedup.\n>\n> There's plenty of sorting by TransactionId, BlockNumber, OffsetNumber, Oid\n> etc. But this sorting routines never show up in perf top or something like\n> that.\n>\n> Seems like in most cases we do not spend much time in sorting. But\n> specialization does not cost us much too, only some CPU cycles of a\n> compiler. 
I think we can further improve speedup by converting inline\n> comparator to value extractor: more compilers will see what is actually\n> going on. But I have no proofs for this reasoning.\n>\n> What do you think?\n>\n>\n> Best regards, Andrey Borodin.\n>\n> [0]\n> https://www.postgresql.org/message-id/flat/20240209184014.sobshkcsfjix6u4r%40awork3.anarazel.de#fc23df2cf314bef35095b632380b4a59\n>", "msg_date": "Sat, 8 Jun 2024 01:50:02 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "On Sat, Jun 8, 2024 at 1:50 AM Stepan Neretin <[email protected]> wrote:\n\n> Hello all.\n>\n> I am interested in the proposed patch and would like to propose some\n> additional changes that would complement it. My changes would introduce\n> similar optimizations when working with a list of integers or object\n> identifiers. Additionally, my patch includes an extension for benchmarking,\n> which shows an average speedup of 30-40%.\n>\n> postgres=# SELECT bench_oid_sort(1000000);\n> bench_oid_sort\n>\n>\n> ----------------------------------------------------------------------------------------------------------------\n> Time taken by list_sort: 116990848 ns, Time taken by list_oid_sort:\n> 80446640 ns, Percentage difference: 31.24%\n> (1 row)\n>\n> postgres=# SELECT bench_int_sort(1000000);\n> bench_int_sort\n>\n>\n> ----------------------------------------------------------------------------------------------------------------\n> Time taken by list_sort: 118168506 ns, Time taken by list_int_sort:\n> 80523373 ns, Percentage difference: 31.86%\n> (1 row)\n>\n> What do you think about these changes?\n>\n> Best regards, Stepan Neretin.\n>\n> On Fri, Jun 7, 2024 at 11:08 PM Andrey M. Borodin <[email protected]>\n> wrote:\n>\n>> Hi!\n>>\n>> In a thread about sorting comparators[0] Andres noted that we have\n>> infrastructure to help compiler optimize sorting. PFA attached PoC\n>> implementation. I've checked that it indeed works on the benchmark from\n>> that thread.\n>>\n>> postgres=# CREATE TABLE arrays_to_sort AS\n>> SELECT array_shuffle(a) arr\n>> FROM\n>> (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n>> generate_series(1, 10);\n>>\n>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\n>> Time: 990.199 ms\n>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\n>> Time: 696.156 ms\n>>\n>> The benefit seems to be on the order of magnitude with 30% speedup.\n>>\n>> There's plenty of sorting by TransactionId, BlockNumber, OffsetNumber,\n>> Oid etc. But this sorting routines never show up in perf top or something\n>> like that.\n>>\n>> Seems like in most cases we do not spend much time in sorting. But\n>> specialization does not cost us much too, only some CPU cycles of a\n>> compiler. I think we can further improve speedup by converting inline\n>> comparator to value extractor: more compilers will see what is actually\n>> going on. But I have no proofs for this reasoning.\n>>\n>> What do you think?\n>>\n>>\n>> Best regards, Andrey Borodin.\n>>\n>> [0]\n>> https://www.postgresql.org/message-id/flat/20240209184014.sobshkcsfjix6u4r%40awork3.anarazel.de#fc23df2cf314bef35095b632380b4a59\n>\n>\nHello all.\n\nI have decided to explore more areas in which I can optimize and have added\ntwo new benchmarks. 
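For anyone who has not opened the patches, the general shape of these specializations is the one offered by src/include/lib/sort_template.h: the element type and the comparison are fixed at compile time, so the comparison can be inlined instead of being reached through a function pointer as with plain qsort(). A rough sketch only, with names of my own choosing rather than the exact code in the attached patches:

/* in a .c file that already includes postgres.h */
static inline int
sort_int32_asc_cmp(const int32 *a, const int32 *b)
{
	if (*a < *b)
		return -1;
	if (*a > *b)
		return 1;
	return 0;
}

#define ST_SORT sort_int32_asc
#define ST_ELEMENT_TYPE int32
#define ST_COMPARE(a, b) sort_int32_asc_cmp(a, b)
#define ST_SCOPE static
#define ST_DEFINE
#include "lib/sort_template.h"

This generates a function with the signature static void sort_int32_asc(int32 *first, size_t n), so a caller sorts an array with sort_int32_asc(values, nvalues) rather than qsort(values, nvalues, sizeof(int32), cmp).
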
Do you have any thoughts on this?\n\npostgres=# select bench_int16_sort(1000000);\n bench_int16_sort\n\n-----------------------------------------------------------------------------------------------------------------\n Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n52151523 ns, Percentage difference: 21.41%\n(1 row)\n\npostgres=# select bench_float8_sort(1000000);\n bench_float8_sort\n\n------------------------------------------------------------------------------------------------------------------\n Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n74458545 ns, Percentage difference: 38.70%\n(1 row)\n\npostgres=#\n\nBest regards, Stepan Neretin.", "msg_date": "Tue, 11 Jun 2024 13:32:04 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": ">\n> Hello all.\n>\n> I have decided to explore more areas in which I can optimize and have added\n> two new benchmarks. Do you have any thoughts on this?\n>\n> postgres=# select bench_int16_sort(1000000);\n> bench_int16_sort\n>\n>\n> -----------------------------------------------------------------------------------------------------------------\n> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n> 52151523 ns, Percentage difference: 21.41%\n> (1 row)\n>\n> postgres=# select bench_float8_sort(1000000);\n> bench_float8_sort\n>\n>\n> ------------------------------------------------------------------------------------------------------------------\n> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n> 74458545 ns, Percentage difference: 38.70%\n> (1 row)\n>\n\n Hello all\nWe would like to see the relationship between the length of the sorted array\nand the performance gain, perhaps some graphs. We also want to see to set a\n\"worst case\" test, sorting the array in ascending order when it is initially\ndescending\n\nBest, regards, Antoine Violin\n\npostgres=#\n>\n\nOn Mon, Jul 15, 2024 at 10:32 AM Stepan Neretin <[email protected]> wrote:\n\n>\n>\n> On Sat, Jun 8, 2024 at 1:50 AM Stepan Neretin <[email protected]> wrote:\n>\n>> Hello all.\n>>\n>> I am interested in the proposed patch and would like to propose some\n>> additional changes that would complement it. My changes would introduce\n>> similar optimizations when working with a list of integers or object\n>> identifiers. Additionally, my patch includes an extension for\n>> benchmarking, which shows an average speedup of 30-40%.\n>>\n>> postgres=# SELECT bench_oid_sort(1000000);\n>> bench_oid_sort\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------\n>> Time taken by list_sort: 116990848 ns, Time taken by list_oid_sort:\n>> 80446640 ns, Percentage difference: 31.24%\n>> (1 row)\n>>\n>> postgres=# SELECT bench_int_sort(1000000);\n>> bench_int_sort\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------\n>> Time taken by list_sort: 118168506 ns, Time taken by list_int_sort:\n>> 80523373 ns, Percentage difference: 31.86%\n>> (1 row)\n>>\n>> What do you think about these changes?\n>>\n>> Best regards, Stepan Neretin.\n>>\n>> On Fri, Jun 7, 2024 at 11:08 PM Andrey M. Borodin <[email protected]>\n>> wrote:\n>>\n>>> Hi!\n>>>\n>>> In a thread about sorting comparators[0] Andres noted that we have\n>>> infrastructure to help compiler optimize sorting. 
PFA attached PoC\n>>> implementation. I've checked that it indeed works on the benchmark from\n>>> that thread.\n>>>\n>>> postgres=# CREATE TABLE arrays_to_sort AS\n>>> SELECT array_shuffle(a) arr\n>>> FROM\n>>> (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n>>> generate_series(1, 10);\n>>>\n>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\n>>> Time: 990.199 ms\n>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\n>>> Time: 696.156 ms\n>>>\n>>> The benefit seems to be on the order of magnitude with 30% speedup.\n>>>\n>>> There's plenty of sorting by TransactionId, BlockNumber, OffsetNumber,\n>>> Oid etc. But this sorting routines never show up in perf top or something\n>>> like that.\n>>>\n>>> Seems like in most cases we do not spend much time in sorting. But\n>>> specialization does not cost us much too, only some CPU cycles of a\n>>> compiler. I think we can further improve speedup by converting inline\n>>> comparator to value extractor: more compilers will see what is actually\n>>> going on. But I have no proofs for this reasoning.\n>>>\n>>> What do you think?\n>>>\n>>>\n>>> Best regards, Andrey Borodin.\n>>>\n>>> [0]\n>>> https://www.postgresql.org/message-id/flat/20240209184014.sobshkcsfjix6u4r%40awork3.anarazel.de#fc23df2cf314bef35095b632380b4a59\n>>\n>>\n> Hello all.\n>\n> I have decided to explore more areas in which I can optimize and have\n> added two new benchmarks. Do you have any thoughts on this?\n>\n> postgres=# select bench_int16_sort(1000000);\n> bench_int16_sort\n>\n>\n> -----------------------------------------------------------------------------------------------------------------\n> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n> 52151523 ns, Percentage difference: 21.41%\n> (1 row)\n>\n> postgres=# select bench_float8_sort(1000000);\n> bench_float8_sort\n>\n>\n> ------------------------------------------------------------------------------------------------------------------\n> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n> 74458545 ns, Percentage difference: 38.70%\n> (1 row)\n>\n> postgres=#\n>\n> Best regards, Stepan Neretin.\n>\n\nHello all.\nI have decided to explore more areas in which I can optimize and have addedtwo new benchmarks. Do you have any thoughts on this?\npostgres=# select bench_int16_sort(1000000); bench_int16_sort\n----------------------------------------------------------------------------------------------------------------- Time taken by usual sort: 66354981 ns, Time taken by optimized sort:52151523 ns, Percentage difference: 21.41%(1 row)\npostgres=# select bench_float8_sort(1000000); bench_float8_sort\n------------------------------------------------------------------------------------------------------------------ Time taken by usual sort: 121475231 ns, Time taken by optimized sort:74458545 ns, Percentage difference: 38.70%(1 row) Hello allWe would like to see the relationship between the length of the sorted array and the performance gain, perhaps some graphs. We also want to see to set a \"worst case\" test, sorting the array in ascending order when it is initially descendingBest, regards, Antoine Violinpostgres=#On Mon, Jul 15, 2024 at 10:32 AM Stepan Neretin <[email protected]> wrote:On Sat, Jun 8, 2024 at 1:50 AM Stepan Neretin <[email protected]> wrote:Hello all.I am interested in the proposed patch and would like to propose some additional changes that would complement it. 
Do you have any thoughts on this?postgres=# select bench_int16_sort(1000000);                                                bench_int16_sort                                                 ----------------------------------------------------------------------------------------------------------------- Time taken by usual sort: 66354981 ns, Time taken by optimized sort: 52151523 ns, Percentage difference: 21.41%(1 row)postgres=# select bench_float8_sort(1000000);                                                bench_float8_sort                                                 ------------------------------------------------------------------------------------------------------------------ Time taken by usual sort: 121475231 ns, Time taken by optimized sort: 74458545 ns, Percentage difference: 38.70%(1 row)postgres=#  Best regards, Stepan Neretin.", "msg_date": "Mon, 15 Jul 2024 12:22:16 +0700", "msg_from": "=?UTF-8?B?0JDQvdGC0YPQsNC9INCS0LjQvtC70LjQvQ==?=\n <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "On Mon, Jul 15, 2024 at 12:23 PM Антуан Виолин <[email protected]>\nwrote:\n\n> Hello all.\n>>\n>> I have decided to explore more areas in which I can optimize and have\n>> added\n>> two new benchmarks. Do you have any thoughts on this?\n>>\n>> postgres=# select bench_int16_sort(1000000);\n>> bench_int16_sort\n>>\n>>\n>> -----------------------------------------------------------------------------------------------------------------\n>> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n>> 52151523 ns, Percentage difference: 21.41%\n>> (1 row)\n>>\n>> postgres=# select bench_float8_sort(1000000);\n>> bench_float8_sort\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------------\n>> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n>> 74458545 ns, Percentage difference: 38.70%\n>> (1 row)\n>>\n>\n> Hello all\n> We would like to see the relationship between the length of the sorted\n> array and the performance gain, perhaps some graphs. We also want to see\n> to set a \"worst case\" test, sorting the array in ascending order when it\n> is initially descending\n>\n> Best, regards, Antoine Violin\n>\n> postgres=#\n>>\n>\n> On Mon, Jul 15, 2024 at 10:32 AM Stepan Neretin <[email protected]> wrote:\n>\n>>\n>>\n>> On Sat, Jun 8, 2024 at 1:50 AM Stepan Neretin <[email protected]> wrote:\n>>\n>>> Hello all.\n>>>\n>>> I am interested in the proposed patch and would like to propose some\n>>> additional changes that would complement it. My changes would introduce\n>>> similar optimizations when working with a list of integers or object\n>>> identifiers. 
Additionally, my patch includes an extension for\n>>> benchmarking, which shows an average speedup of 30-40%.\n>>>\n>>> postgres=# SELECT bench_oid_sort(1000000);\n>>> bench_oid_sort\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------\n>>> Time taken by list_sort: 116990848 ns, Time taken by list_oid_sort:\n>>> 80446640 ns, Percentage difference: 31.24%\n>>> (1 row)\n>>>\n>>> postgres=# SELECT bench_int_sort(1000000);\n>>> bench_int_sort\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------\n>>> Time taken by list_sort: 118168506 ns, Time taken by list_int_sort:\n>>> 80523373 ns, Percentage difference: 31.86%\n>>> (1 row)\n>>>\n>>> What do you think about these changes?\n>>>\n>>> Best regards, Stepan Neretin.\n>>>\n>>> On Fri, Jun 7, 2024 at 11:08 PM Andrey M. Borodin <[email protected]>\n>>> wrote:\n>>>\n>>>> Hi!\n>>>>\n>>>> In a thread about sorting comparators[0] Andres noted that we have\n>>>> infrastructure to help compiler optimize sorting. PFA attached PoC\n>>>> implementation. I've checked that it indeed works on the benchmark from\n>>>> that thread.\n>>>>\n>>>> postgres=# CREATE TABLE arrays_to_sort AS\n>>>> SELECT array_shuffle(a) arr\n>>>> FROM\n>>>> (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n>>>> generate_series(1, 10);\n>>>>\n>>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\n>>>> Time: 990.199 ms\n>>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\n>>>> Time: 696.156 ms\n>>>>\n>>>> The benefit seems to be on the order of magnitude with 30% speedup.\n>>>>\n>>>> There's plenty of sorting by TransactionId, BlockNumber, OffsetNumber,\n>>>> Oid etc. But this sorting routines never show up in perf top or something\n>>>> like that.\n>>>>\n>>>> Seems like in most cases we do not spend much time in sorting. But\n>>>> specialization does not cost us much too, only some CPU cycles of a\n>>>> compiler. I think we can further improve speedup by converting inline\n>>>> comparator to value extractor: more compilers will see what is actually\n>>>> going on. But I have no proofs for this reasoning.\n>>>>\n>>>> What do you think?\n>>>>\n>>>>\n>>>> Best regards, Andrey Borodin.\n>>>>\n>>>> [0]\n>>>> https://www.postgresql.org/message-id/flat/20240209184014.sobshkcsfjix6u4r%40awork3.anarazel.de#fc23df2cf314bef35095b632380b4a59\n>>>\n>>>\n>> Hello all.\n>>\n>> I have decided to explore more areas in which I can optimize and have\n>> added two new benchmarks. 
Do you have any thoughts on this?\n>>\n>> postgres=# select bench_int16_sort(1000000);\n>> bench_int16_sort\n>>\n>>\n>> -----------------------------------------------------------------------------------------------------------------\n>> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n>> 52151523 ns, Percentage difference: 21.41%\n>> (1 row)\n>>\n>> postgres=# select bench_float8_sort(1000000);\n>> bench_float8_sort\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------------\n>> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n>> 74458545 ns, Percentage difference: 38.70%\n>> (1 row)\n>>\n>> postgres=#\n>>\n>> Best regards, Stepan Neretin.\n>>\n>\n\nI run benchmark with my patches:\n./pgbench -c 10 -j2 -t1000 -d postgres\n\npgbench (18devel)\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 10\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 2\nmaximum number of tries: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\nnumber of failed transactions: 0 (0.000%)\nlatency average = 1.609 ms\ninitial connection time = 24.080 ms\ntps = 6214.244789 (without initial connection time)\n\nand without:\n./pgbench -c 10 -j2 -t1000 -d postgres\n\npgbench (18devel)\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 10\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 2\nmaximum number of tries: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\nnumber of failed transactions: 0 (0.000%)\nlatency average = 1.731 ms\ninitial connection time = 15.177 ms\ntps = 5776.173285 (without initial connection time)\n\ntps with my patches increase. What do you think?\n\nBest regards, Stepan Neretin.\n\nOn Mon, Jul 15, 2024 at 12:23 PM Антуан Виолин <[email protected]> wrote:Hello all.\nI have decided to explore more areas in which I can optimize and have addedtwo new benchmarks. Do you have any thoughts on this?\npostgres=# select bench_int16_sort(1000000); bench_int16_sort\n----------------------------------------------------------------------------------------------------------------- Time taken by usual sort: 66354981 ns, Time taken by optimized sort:52151523 ns, Percentage difference: 21.41%(1 row)\npostgres=# select bench_float8_sort(1000000); bench_float8_sort\n------------------------------------------------------------------------------------------------------------------ Time taken by usual sort: 121475231 ns, Time taken by optimized sort:74458545 ns, Percentage difference: 38.70%(1 row) Hello allWe would like to see the relationship between the length of the sorted array and the performance gain, perhaps some graphs. We also want to see to set a \"worst case\" test, sorting the array in ascending order when it is initially descendingBest, regards, Antoine Violinpostgres=#On Mon, Jul 15, 2024 at 10:32 AM Stepan Neretin <[email protected]> wrote:On Sat, Jun 8, 2024 at 1:50 AM Stepan Neretin <[email protected]> wrote:Hello all.I am interested in the proposed patch and would like to propose some additional changes that would complement it. My changes would introduce similar optimizations when working with a list of integers or object identifiers. 
Do you have any thoughts on this?postgres=# select bench_int16_sort(1000000);                                                bench_int16_sort                                                 ----------------------------------------------------------------------------------------------------------------- Time taken by usual sort: 66354981 ns, Time taken by optimized sort: 52151523 ns, Percentage difference: 21.41%(1 row)postgres=# select bench_float8_sort(1000000);                                                bench_float8_sort                                                 ------------------------------------------------------------------------------------------------------------------ Time taken by usual sort: 121475231 ns, Time taken by optimized sort: 74458545 ns, Percentage difference: 38.70%(1 row)postgres=#  Best regards, Stepan Neretin.I run benchmark with my patches:./pgbench -c 10 -j2 -t1000 -d postgrespgbench (18devel)starting vacuum...end.transaction type: <builtin: TPC-B (sort of)>scaling factor: 10query mode: simplenumber of clients: 10number of threads: 2maximum number of tries: 1number of transactions per client: 1000number of transactions actually processed: 10000/10000number of failed transactions: 0 (0.000%)latency average = 1.609 msinitial connection time = 24.080 mstps = 6214.244789 (without initial connection time)and without:./pgbench -c 10 -j2 -t1000 -d postgrespgbench (18devel)starting vacuum...end.transaction type: <builtin: TPC-B (sort of)>scaling factor: 10query mode: simplenumber of clients: 10number of threads: 2maximum number of tries: 1number of transactions per client: 1000number of transactions actually processed: 10000/10000number of failed transactions: 0 (0.000%)latency average = 1.731 msinitial connection time = 15.177 mstps = 5776.173285 (without initial connection time)tps with my patches increase. What do you think?Best regards, Stepan Neretin.", "msg_date": "Mon, 15 Jul 2024 16:52:31 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "On Mon, Jul 15, 2024 at 4:52 PM Stepan Neretin <[email protected]> wrote:\n\n>\n>\n> On Mon, Jul 15, 2024 at 12:23 PM Антуан Виолин <[email protected]>\n> wrote:\n>\n>> Hello all.\n>>>\n>>> I have decided to explore more areas in which I can optimize and have\n>>> added\n>>> two new benchmarks. Do you have any thoughts on this?\n>>>\n>>> postgres=# select bench_int16_sort(1000000);\n>>> bench_int16_sort\n>>>\n>>>\n>>> -----------------------------------------------------------------------------------------------------------------\n>>> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n>>> 52151523 ns, Percentage difference: 21.41%\n>>> (1 row)\n>>>\n>>> postgres=# select bench_float8_sort(1000000);\n>>> bench_float8_sort\n>>>\n>>>\n>>> ------------------------------------------------------------------------------------------------------------------\n>>> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n>>> 74458545 ns, Percentage difference: 38.70%\n>>> (1 row)\n>>>\n>>\n>> Hello all\n>> We would like to see the relationship between the length of the sorted\n>> array and the performance gain, perhaps some graphs. 
We also want to see\n>> to set a \"worst case\" test, sorting the array in ascending order when it\n>> is initially descending\n>>\n>> Best, regards, Antoine Violin\n>>\n>> postgres=#\n>>>\n>>\n>> On Mon, Jul 15, 2024 at 10:32 AM Stepan Neretin <[email protected]>\n>> wrote:\n>>\n>>>\n>>>\n>>> On Sat, Jun 8, 2024 at 1:50 AM Stepan Neretin <[email protected]> wrote:\n>>>\n>>>> Hello all.\n>>>>\n>>>> I am interested in the proposed patch and would like to propose some\n>>>> additional changes that would complement it. My changes would introduce\n>>>> similar optimizations when working with a list of integers or object\n>>>> identifiers. Additionally, my patch includes an extension for\n>>>> benchmarking, which shows an average speedup of 30-40%.\n>>>>\n>>>> postgres=# SELECT bench_oid_sort(1000000);\n>>>> bench_oid_sort\n>>>>\n>>>>\n>>>> ----------------------------------------------------------------------------------------------------------------\n>>>> Time taken by list_sort: 116990848 ns, Time taken by list_oid_sort:\n>>>> 80446640 ns, Percentage difference: 31.24%\n>>>> (1 row)\n>>>>\n>>>> postgres=# SELECT bench_int_sort(1000000);\n>>>> bench_int_sort\n>>>>\n>>>>\n>>>> ----------------------------------------------------------------------------------------------------------------\n>>>> Time taken by list_sort: 118168506 ns, Time taken by list_int_sort:\n>>>> 80523373 ns, Percentage difference: 31.86%\n>>>> (1 row)\n>>>>\n>>>> What do you think about these changes?\n>>>>\n>>>> Best regards, Stepan Neretin.\n>>>>\n>>>> On Fri, Jun 7, 2024 at 11:08 PM Andrey M. Borodin <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Hi!\n>>>>>\n>>>>> In a thread about sorting comparators[0] Andres noted that we have\n>>>>> infrastructure to help compiler optimize sorting. PFA attached PoC\n>>>>> implementation. I've checked that it indeed works on the benchmark from\n>>>>> that thread.\n>>>>>\n>>>>> postgres=# CREATE TABLE arrays_to_sort AS\n>>>>> SELECT array_shuffle(a) arr\n>>>>> FROM\n>>>>> (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n>>>>> generate_series(1, 10);\n>>>>>\n>>>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\n>>>>> Time: 990.199 ms\n>>>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\n>>>>> Time: 696.156 ms\n>>>>>\n>>>>> The benefit seems to be on the order of magnitude with 30% speedup.\n>>>>>\n>>>>> There's plenty of sorting by TransactionId, BlockNumber, OffsetNumber,\n>>>>> Oid etc. But this sorting routines never show up in perf top or something\n>>>>> like that.\n>>>>>\n>>>>> Seems like in most cases we do not spend much time in sorting. But\n>>>>> specialization does not cost us much too, only some CPU cycles of a\n>>>>> compiler. I think we can further improve speedup by converting inline\n>>>>> comparator to value extractor: more compilers will see what is actually\n>>>>> going on. But I have no proofs for this reasoning.\n>>>>>\n>>>>> What do you think?\n>>>>>\n>>>>>\n>>>>> Best regards, Andrey Borodin.\n>>>>>\n>>>>> [0]\n>>>>> https://www.postgresql.org/message-id/flat/20240209184014.sobshkcsfjix6u4r%40awork3.anarazel.de#fc23df2cf314bef35095b632380b4a59\n>>>>\n>>>>\n>>> Hello all.\n>>>\n>>> I have decided to explore more areas in which I can optimize and have\n>>> added two new benchmarks. 
Do you have any thoughts on this?\n>>>\n>>> postgres=# select bench_int16_sort(1000000);\n>>> bench_int16_sort\n>>>\n>>>\n>>> -----------------------------------------------------------------------------------------------------------------\n>>> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n>>> 52151523 ns, Percentage difference: 21.41%\n>>> (1 row)\n>>>\n>>> postgres=# select bench_float8_sort(1000000);\n>>> bench_float8_sort\n>>>\n>>>\n>>> ------------------------------------------------------------------------------------------------------------------\n>>> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n>>> 74458545 ns, Percentage difference: 38.70%\n>>> (1 row)\n>>>\n>>> postgres=#\n>>>\n>>> Best regards, Stepan Neretin.\n>>>\n>>\n>\n> I run benchmark with my patches:\n> ./pgbench -c 10 -j2 -t1000 -d postgres\n>\n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 2\n> maximum number of tries: 1\n> number of transactions per client: 1000\n> number of transactions actually processed: 10000/10000\n> number of failed transactions: 0 (0.000%)\n> latency average = 1.609 ms\n> initial connection time = 24.080 ms\n> tps = 6214.244789 (without initial connection time)\n>\n> and without:\n> ./pgbench -c 10 -j2 -t1000 -d postgres\n>\n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 2\n> maximum number of tries: 1\n> number of transactions per client: 1000\n> number of transactions actually processed: 10000/10000\n> number of failed transactions: 0 (0.000%)\n> latency average = 1.731 ms\n> initial connection time = 15.177 ms\n> tps = 5776.173285 (without initial connection time)\n>\n> tps with my patches increase. 
What do you think?\n>\n> Best regards, Stepan Neretin.\n>\n\nI implement reverse benchmarks:\n\npostgres=# SELECT bench_oid_reverse_sort(1000);\n bench_oid_reverse_sort\n\n----------------------------------------------------------------------------------------------------------\n Time taken by list_sort: 182557 ns, Time taken by list_oid_sort: 85864 ns,\nPercentage difference: 52.97%\n(1 row)\n\nTime: 2,291 ms\npostgres=# SELECT bench_oid_reverse_sort(100000);\n bench_oid_reverse_sort\n\n-------------------------------------------------------------------------------------------------------------\n Time taken by list_sort: 9064163 ns, Time taken by list_oid_sort: 4313448\nns, Percentage difference: 52.41%\n(1 row)\n\nTime: 17,146 ms\npostgres=# SELECT bench_oid_reverse_sort(1000000);\n bench_oid_reverse_sort\n\n---------------------------------------------------------------------------------------------------------------\n Time taken by list_sort: 61990395 ns, Time taken by list_oid_sort:\n23703380 ns, Percentage difference: 61.76%\n(1 row)\n\npostgres=# SELECT bench_int_reverse_sort(1000000);\n bench_int_reverse_sort\n\n---------------------------------------------------------------------------------------------------------------\n Time taken by list_sort: 50712416 ns, Time taken by list_int_sort:\n24120417 ns, Percentage difference: 52.44%\n(1 row)\n\nTime: 89,359 ms\n\npostgres=# SELECT bench_float8_reverse_sort(1000000);\n bench_float8_reverse_sort\n\n-----------------------------------------------------------------------------------------------------------------\n Time taken by usual sort: 57447775 ns, Time taken by optimized sort:\n25214023 ns, Percentage difference: 56.11%\n(1 row)\n\nTime: 92,308 ms\n\nHello again. I want to show you the graphs of when we increase the length\nvector/array sorting time (ns). What do you think about graphs?\n\nBest regards, Stepan Neretin.", "msg_date": "Mon, 15 Jul 2024 17:47:32 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "On Mon, Jul 15, 2024 at 5:47 PM Stepan Neretin <[email protected]> wrote:\n\n>\n>\n> On Mon, Jul 15, 2024 at 4:52 PM Stepan Neretin <[email protected]> wrote:\n>\n>>\n>>\n>> On Mon, Jul 15, 2024 at 12:23 PM Антуан Виолин <[email protected]>\n>> wrote:\n>>\n>>> Hello all.\n>>>>\n>>>> I have decided to explore more areas in which I can optimize and have\n>>>> added\n>>>> two new benchmarks. Do you have any thoughts on this?\n>>>>\n>>>> postgres=# select bench_int16_sort(1000000);\n>>>> bench_int16_sort\n>>>>\n>>>>\n>>>> -----------------------------------------------------------------------------------------------------------------\n>>>> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n>>>> 52151523 ns, Percentage difference: 21.41%\n>>>> (1 row)\n>>>>\n>>>> postgres=# select bench_float8_sort(1000000);\n>>>> bench_float8_sort\n>>>>\n>>>>\n>>>> ------------------------------------------------------------------------------------------------------------------\n>>>> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n>>>> 74458545 ns, Percentage difference: 38.70%\n>>>> (1 row)\n>>>>\n>>>\n>>> Hello all\n>>> We would like to see the relationship between the length of the sorted\n>>> array and the performance gain, perhaps some graphs. 
We also want to see\n>>> to set a \"worst case\" test, sorting the array in ascending order when it\n>>> is initially descending\n>>>\n>>> Best, regards, Antoine Violin\n>>>\n>>> postgres=#\n>>>>\n>>>\n>>> On Mon, Jul 15, 2024 at 10:32 AM Stepan Neretin <[email protected]>\n>>> wrote:\n>>>\n>>>>\n>>>>\n>>>> On Sat, Jun 8, 2024 at 1:50 AM Stepan Neretin <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Hello all.\n>>>>>\n>>>>> I am interested in the proposed patch and would like to propose some\n>>>>> additional changes that would complement it. My changes would\n>>>>> introduce similar optimizations when working with a list of integers\n>>>>> or object identifiers. Additionally, my patch includes an extension\n>>>>> for benchmarking, which shows an average speedup of 30-40%.\n>>>>>\n>>>>> postgres=# SELECT bench_oid_sort(1000000);\n>>>>> bench_oid_sort\n>>>>>\n>>>>>\n>>>>> ----------------------------------------------------------------------------------------------------------------\n>>>>> Time taken by list_sort: 116990848 ns, Time taken by list_oid_sort:\n>>>>> 80446640 ns, Percentage difference: 31.24%\n>>>>> (1 row)\n>>>>>\n>>>>> postgres=# SELECT bench_int_sort(1000000);\n>>>>> bench_int_sort\n>>>>>\n>>>>>\n>>>>> ----------------------------------------------------------------------------------------------------------------\n>>>>> Time taken by list_sort: 118168506 ns, Time taken by list_int_sort:\n>>>>> 80523373 ns, Percentage difference: 31.86%\n>>>>> (1 row)\n>>>>>\n>>>>> What do you think about these changes?\n>>>>>\n>>>>> Best regards, Stepan Neretin.\n>>>>>\n>>>>> On Fri, Jun 7, 2024 at 11:08 PM Andrey M. Borodin <\n>>>>> [email protected]> wrote:\n>>>>>\n>>>>>> Hi!\n>>>>>>\n>>>>>> In a thread about sorting comparators[0] Andres noted that we have\n>>>>>> infrastructure to help compiler optimize sorting. PFA attached PoC\n>>>>>> implementation. I've checked that it indeed works on the benchmark from\n>>>>>> that thread.\n>>>>>>\n>>>>>> postgres=# CREATE TABLE arrays_to_sort AS\n>>>>>> SELECT array_shuffle(a) arr\n>>>>>> FROM\n>>>>>> (SELECT ARRAY(SELECT generate_series(1, 1000000)) a),\n>>>>>> generate_series(1, 10);\n>>>>>>\n>>>>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- original\n>>>>>> Time: 990.199 ms\n>>>>>> postgres=# SELECT (sort(arr))[1] FROM arrays_to_sort; -- patched\n>>>>>> Time: 696.156 ms\n>>>>>>\n>>>>>> The benefit seems to be on the order of magnitude with 30% speedup.\n>>>>>>\n>>>>>> There's plenty of sorting by TransactionId, BlockNumber,\n>>>>>> OffsetNumber, Oid etc. But this sorting routines never show up in perf top\n>>>>>> or something like that.\n>>>>>>\n>>>>>> Seems like in most cases we do not spend much time in sorting. But\n>>>>>> specialization does not cost us much too, only some CPU cycles of a\n>>>>>> compiler. I think we can further improve speedup by converting inline\n>>>>>> comparator to value extractor: more compilers will see what is actually\n>>>>>> going on. But I have no proofs for this reasoning.\n>>>>>>\n>>>>>> What do you think?\n>>>>>>\n>>>>>>\n>>>>>> Best regards, Andrey Borodin.\n>>>>>>\n>>>>>> [0]\n>>>>>> https://www.postgresql.org/message-id/flat/20240209184014.sobshkcsfjix6u4r%40awork3.anarazel.de#fc23df2cf314bef35095b632380b4a59\n>>>>>\n>>>>>\n>>>> Hello all.\n>>>>\n>>>> I have decided to explore more areas in which I can optimize and have\n>>>> added two new benchmarks. 
Do you have any thoughts on this?\n>>>>\n>>>> postgres=# select bench_int16_sort(1000000);\n>>>> bench_int16_sort\n>>>>\n>>>>\n>>>> -----------------------------------------------------------------------------------------------------------------\n>>>> Time taken by usual sort: 66354981 ns, Time taken by optimized sort:\n>>>> 52151523 ns, Percentage difference: 21.41%\n>>>> (1 row)\n>>>>\n>>>> postgres=# select bench_float8_sort(1000000);\n>>>> bench_float8_sort\n>>>>\n>>>>\n>>>> ------------------------------------------------------------------------------------------------------------------\n>>>> Time taken by usual sort: 121475231 ns, Time taken by optimized sort:\n>>>> 74458545 ns, Percentage difference: 38.70%\n>>>> (1 row)\n>>>>\n>>>> postgres=#\n>>>>\n>>>> Best regards, Stepan Neretin.\n>>>>\n>>>\n>>\n>> I run benchmark with my patches:\n>> ./pgbench -c 10 -j2 -t1000 -d postgres\n>>\n>> pgbench (18devel)\n>> starting vacuum...end.\n>> transaction type: <builtin: TPC-B (sort of)>\n>> scaling factor: 10\n>> query mode: simple\n>> number of clients: 10\n>> number of threads: 2\n>> maximum number of tries: 1\n>> number of transactions per client: 1000\n>> number of transactions actually processed: 10000/10000\n>> number of failed transactions: 0 (0.000%)\n>> latency average = 1.609 ms\n>> initial connection time = 24.080 ms\n>> tps = 6214.244789 (without initial connection time)\n>>\n>> and without:\n>> ./pgbench -c 10 -j2 -t1000 -d postgres\n>>\n>> pgbench (18devel)\n>> starting vacuum...end.\n>> transaction type: <builtin: TPC-B (sort of)>\n>> scaling factor: 10\n>> query mode: simple\n>> number of clients: 10\n>> number of threads: 2\n>> maximum number of tries: 1\n>> number of transactions per client: 1000\n>> number of transactions actually processed: 10000/10000\n>> number of failed transactions: 0 (0.000%)\n>> latency average = 1.731 ms\n>> initial connection time = 15.177 ms\n>> tps = 5776.173285 (without initial connection time)\n>>\n>> tps with my patches increase. 
What do you think?\n>>\n>> Best regards, Stepan Neretin.\n>>\n>\n> I implement reverse benchmarks:\n>\n> postgres=# SELECT bench_oid_reverse_sort(1000);\n> bench_oid_reverse_sort\n>\n>\n> ----------------------------------------------------------------------------------------------------------\n> Time taken by list_sort: 182557 ns, Time taken by list_oid_sort: 85864\n> ns, Percentage difference: 52.97%\n> (1 row)\n>\n> Time: 2,291 ms\n> postgres=# SELECT bench_oid_reverse_sort(100000);\n> bench_oid_reverse_sort\n>\n>\n> -------------------------------------------------------------------------------------------------------------\n> Time taken by list_sort: 9064163 ns, Time taken by list_oid_sort: 4313448\n> ns, Percentage difference: 52.41%\n> (1 row)\n>\n> Time: 17,146 ms\n> postgres=# SELECT bench_oid_reverse_sort(1000000);\n> bench_oid_reverse_sort\n>\n>\n> ---------------------------------------------------------------------------------------------------------------\n> Time taken by list_sort: 61990395 ns, Time taken by list_oid_sort:\n> 23703380 ns, Percentage difference: 61.76%\n> (1 row)\n>\n> postgres=# SELECT bench_int_reverse_sort(1000000);\n> bench_int_reverse_sort\n>\n>\n> ---------------------------------------------------------------------------------------------------------------\n> Time taken by list_sort: 50712416 ns, Time taken by list_int_sort:\n> 24120417 ns, Percentage difference: 52.44%\n> (1 row)\n>\n> Time: 89,359 ms\n>\n> postgres=# SELECT bench_float8_reverse_sort(1000000);\n> bench_float8_reverse_sort\n>\n>\n> -----------------------------------------------------------------------------------------------------------------\n> Time taken by usual sort: 57447775 ns, Time taken by optimized sort:\n> 25214023 ns, Percentage difference: 56.11%\n> (1 row)\n>\n> Time: 92,308 ms\n>\n> Hello again. I want to show you the graphs of when we increase the length\n> vector/array sorting time (ns). What do you think about graphs?\n>\n> Best regards, Stepan Neretin.\n>\n> Hello again :) I made a mistake in the benchmarks code. I am attaching new\n> corrected benchmarks for int sorting as example. And my stupid, simple\n> python script for making benchs and draw graphs. 
What do you think about\n> this graphs?\n>\n>\n> Best regards, Stepan Neretin.\n>", "msg_date": "Mon, 15 Jul 2024 23:42:08 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "\n\n> On 15 Jul 2024, at 12:52, Stepan Neretin <[email protected]> wrote:\n> \n> \n> I run benchmark with my patches:\n> ./pgbench -c 10 -j2 -t1000 -d postgres\n> \n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 2\n> maximum number of tries: 1\n> number of transactions per client: 1000\n> number of transactions actually processed: 10000/10000\n> number of failed transactions: 0 (0.000%)\n> latency average = 1.609 ms\n> initial connection time = 24.080 ms\n> tps = 6214.244789 (without initial connection time)\n> \n> and without:\n> ./pgbench -c 10 -j2 -t1000 -d postgres\n> \n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 2\n> maximum number of tries: 1\n> number of transactions per client: 1000\n> number of transactions actually processed: 10000/10000\n> number of failed transactions: 0 (0.000%)\n> latency average = 1.731 ms\n> initial connection time = 15.177 ms\n> tps = 5776.173285 (without initial connection time)\n> \n> tps with my patches increase. What do you think?\n\n\nHi Stepan!\n\nThank you for implementing specialized sorting and doing this benchmarks.\nI believe it's a possible direction for good improvement.\nHowever, I doubt in correctness of your benchmarks.\nIncreasing TPC-B performance from 5776 TPS to 6214 TPS seems too good to be true. \n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 15 Jul 2024 21:47:14 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "On Tue, Jul 16, 2024 at 1:47 AM Andrey M. Borodin <[email protected]>\nwrote:\n\n>\n>\n> > On 15 Jul 2024, at 12:52, Stepan Neretin <[email protected]> wrote:\n> >\n> >\n> > I run benchmark with my patches:\n> > ./pgbench -c 10 -j2 -t1000 -d postgres\n> >\n> > pgbench (18devel)\n> > starting vacuum...end.\n> > transaction type: <builtin: TPC-B (sort of)>\n> > scaling factor: 10\n> > query mode: simple\n> > number of clients: 10\n> > number of threads: 2\n> > maximum number of tries: 1\n> > number of transactions per client: 1000\n> > number of transactions actually processed: 10000/10000\n> > number of failed transactions: 0 (0.000%)\n> > latency average = 1.609 ms\n> > initial connection time = 24.080 ms\n> > tps = 6214.244789 (without initial connection time)\n> >\n> > and without:\n> > ./pgbench -c 10 -j2 -t1000 -d postgres\n> >\n> > pgbench (18devel)\n> > starting vacuum...end.\n> > transaction type: <builtin: TPC-B (sort of)>\n> > scaling factor: 10\n> > query mode: simple\n> > number of clients: 10\n> > number of threads: 2\n> > maximum number of tries: 1\n> > number of transactions per client: 1000\n> > number of transactions actually processed: 10000/10000\n> > number of failed transactions: 0 (0.000%)\n> > latency average = 1.731 ms\n> > initial connection time = 15.177 ms\n> > tps = 5776.173285 (without initial connection time)\n> >\n> > tps with my patches increase. 
What do you think?\n>\n>\n> Hi Stepan!\n>\n> Thank you for implementing specialized sorting and doing this benchmarks.\n> I believe it's a possible direction for good improvement.\n> However, I doubt in correctness of your benchmarks.\n> Increasing TPC-B performance from 5776 TPS to 6214 TPS seems too good to\n> be true.\n>\n>\n> Best regards, Andrey Borodin.\n\n\nYes... I agree.. Very strange.. I restarted the tps measurement and see\nthis:\n\ntps = 14291.893460 (without initial connection time) not patched\ntps = 14669.624075 (without initial connection time) patched\n\nWhat do you think about these measurements?\nBest regards, Stepan Neretin.\n\nOn Tue, Jul 16, 2024 at 1:47 AM Andrey M. Borodin <[email protected]> wrote:\n\n> On 15 Jul 2024, at 12:52, Stepan Neretin <[email protected]> wrote:\n> \n> \n> I run benchmark with my patches:\n> ./pgbench -c 10 -j2 -t1000 -d postgres\n> \n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 2\n> maximum number of tries: 1\n> number of transactions per client: 1000\n> number of transactions actually processed: 10000/10000\n> number of failed transactions: 0 (0.000%)\n> latency average = 1.609 ms\n> initial connection time = 24.080 ms\n> tps = 6214.244789 (without initial connection time)\n> \n> and without:\n> ./pgbench -c 10 -j2 -t1000 -d postgres\n> \n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 2\n> maximum number of tries: 1\n> number of transactions per client: 1000\n> number of transactions actually processed: 10000/10000\n> number of failed transactions: 0 (0.000%)\n> latency average = 1.731 ms\n> initial connection time = 15.177 ms\n> tps = 5776.173285 (without initial connection time)\n> \n> tps with my patches increase. What do you think?\n\n\nHi Stepan!\n\nThank you for implementing specialized sorting and doing this benchmarks.\nI believe it's a possible direction for good improvement.\nHowever, I doubt in correctness of your benchmarks.\nIncreasing TPC-B performance from 5776 TPS to 6214 TPS seems too good to be true. \n\n\nBest regards, Andrey Borodin.Yes... I agree.. Very strange.. I restarted the tps measurement and see this:tps = 14291.893460 (without initial connection time)  not patchedtps = 14669.624075 (without initial connection time)  patchedWhat do you think about these measurements?Best regards, Stepan Neretin.", "msg_date": "Tue, 16 Jul 2024 03:31:19 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "Hi! I rebase, clean and some refactor my patches.\n\nBest regards, Stepan Neretin.", "msg_date": "Sun, 8 Sep 2024 15:50:55 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "On Sun, 8 Sept 2024 at 20:51, Stepan Neretin <[email protected]> wrote:\n> Hi! I rebase, clean and some refactor my patches.\n\nI'm unsure what exactly is going on with this thread. 
It started with\nAndrey proposing a patch to speed up intarray sorting and now it's\nturned into you proposing 10 patches which implement a series of sort\nspecialisation functions without any justification as to why the\nchange is useful.\n\nIf you want to have a performance patch accepted, then you'll need to\nshow your test case and the performance results before and after.\n\nWhat this patch series looks like to me is that you've just searched\nthe code base for qsort and just implemented a specialised qsort\nversion without any regard as to whether the change is useful or not.\nFor example, looking at v2-0006, you've added a specialisation to sort\nthe columns which are specified in the CREATE STATISTICS command. This\nseems highly unlikely to be useful. The number of elements in this\narray is limited by STATS_MAX_DIMENSIONS, which is 8. Are you claiming\nthe sort specialisation you've added makes a meaningful performance\nimprovement to sorting an 8 element array?\n\nIt looks to me like you've just derailed Andrey's proposal. I suggest\nyou validate which ones of these patches you can demonstrate produce a\nmeaningful performance improvement, ditch the remainder, and then\nstart your own thread showing your test case and results.\n\nDavid\n\n\n", "msg_date": "Sun, 8 Sep 2024 22:33:45 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "Hi, why do you think that I rejected Andrey's offer? I included his patch\nfirst in my own. Yes, patch 2-0006 is the only patch to which I have not\nattached any statistics and it looks really dubious. But the rest seem\nuseful. Above, I attached a speed graph for one of the patches and tps(\npgbench)\nWhat do you think is the format in which to make benchmarks for my patches?\nBest regards, Stepan Neretin.\n\nHi, why do you think that I rejected Andrey's offer? I included his patch first in my own. Yes, patch 2-0006 is the only patch to which I have not attached any statistics and it looks really dubious. But the rest seem useful. Above, I attached a speed graph for one of the patches and tps(pgbench)\nWhat do you think is the format in which to make benchmarks for my patches?Best regards, Stepan Neretin.", "msg_date": "Sun, 8 Sep 2024 19:59:52 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "On Mon, 9 Sept 2024 at 01:00, Stepan Neretin <[email protected]> wrote:\n> Hi, why do you think that I rejected Andrey's offer? I included his patch first in my own. Yes, patch 2-0006 is the only patch to which I have not attached any statistics and it looks really dubious. But the rest seem useful. Above, I attached a speed graph for one of the patches and tps(pgbench)\n\nThe difference with your patches and Andrey's patch is that he\nincluded a benchmark which is targeted to the code he changed and his\nresults show a speed-up.\n\nWhat it appears that you've done is made an assortment of changes and\npicked the least effort thing that tests performance of something. You\nby chance saw a performance increase so assumed it was due to your\nchanges.\n\n> What do you think is the format in which to make benchmarks for my patches?\n\nYou'll need a benchmark that exercises the code you've changed to some\ndegree where it has a positive impact on performance. 
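A cheap way to sanity check that before benchmarking anything is to instrument the call sites you've changed and log how many elements they ever sort. Something roughly like the following (illustrative only; the variable name will depend on the call site):

	/* e.g. at the end of RelationGetIndexList(), before returning */
	elog(NOTICE, "RelationGetIndexList %d", list_length(result));

If a call site only ever sees a handful of elements, no comparator specialisation will show up in a macro benchmark like pgbench, so a targeted benchmark is what you need to produce. 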
As far as I can\nsee, you've not done that yet.\n\nJust to give you the benefit of the doubt, I applied all 10 v2 patches\nand adjusted the call sites to add a NOTICE to include the size of the\narray being sorted. Here is the result of running your benchmark:\n\n$ pgbench -t1000 -d postgres\npgbench (18devel)\nNOTICE: RelationGetIndexList 1\nNOTICE: RelationGetStatExtList 0\nNOTICE: RelationGetIndexList 3\nNOTICE: RelationGetStatExtList 0\nNOTICE: RelationGetIndexList 2\nNOTICE: RelationGetStatExtList 0\nNOTICE: RelationGetIndexList 1\nNOTICE: RelationGetStatExtList 0\nNOTICE: RelationGetIndexList 2\nNOTICE: RelationGetStatExtList 0\nstarting vacuum...NOTICE: RelationGetIndexList 1\nNOTICE: RelationGetIndexList 0\nend.\nNOTICE: RelationGetIndexList 1\nNOTICE: RelationGetStatExtList 0\nNOTICE: RelationGetIndexList 1\nNOTICE: RelationGetStatExtList 0\nNOTICE: RelationGetIndexList 1\nNOTICE: RelationGetStatExtList 0\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nmaximum number of tries: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 1000/1000\nnumber of failed transactions: 0 (0.000%)\nlatency average = 0.915 ms\ninitial connection time = 23.443 ms\ntps = 1092.326732 (without initial connection time)\n\nNote that -t1000 shows the same number of notices as -t1.\n\nSo, it seems everything you've changed that runs in your benchmark is\nRelationGetIndexList() and RelationGetStatExtList(). In one of the\ncalls to RelationGetIndexList() we're sorting up to a maximum of 3\nelements.\n\nJust to be clear, I'm not stating that I think all of your changes are\nuseless. If you want these patches accepted, then you're going to need\nto prove they're useful and you've not done that.\n\nAlso, unless Andrey is happy for you to tag onto the work he's doing,\nI'd suggest another thread for that work. I don't think there's any\ngood reason for that work to delay Andrey's work.\n\nDavid\n\n\n", "msg_date": "Mon, 9 Sep 2024 09:31:34 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort functions with specialized comparators" }, { "msg_contents": "\n\n> On 9 Sep 2024, at 02:31, David Rowley <[email protected]> wrote:\n> \n> Also, unless Andrey is happy for you to tag onto the work he's doing,\n> I'd suggest another thread for that work. I don't think there's any\n> good reason for that work to delay Andrey's work.\n\nStepan asked for mentoring project, so I handed him this patch set. We are working together, but the main goal is integrating Stepan into dev process. Well, the summer was really hot and we somehow were not advancing the project… So your thread bump is very timely!\nMany thanks for your input about benchmarks! We will focus on measuring impact of changes. I totally share your concerns about optimization of sorts that are not used frequently.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 9 Sep 2024 09:42:57 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort functions with specialized comparators" } ]
[ { "msg_contents": "When deploying RLS, I was surprised to find that certain queries which used \nonly builtin indexes and operators had dramatically different query plans when \na policy is applied. In my case, the query `tsvector @@ tsquery` over a GIN\nindex was no longer able to use that index. I was able to find one other\ninstance [1] of someone being surprised by this behavior on the mailing lists.\n\nThe docs already discuss the LEAKPROOF semantics in the abstract, but I think \nthey place not enough focus on the idea that builtin operators can be (and\nfrequently are) not leakproof. Based on the query given in the attached patch,\nI found that 387 operators are not leakproof versus 588 that are.\n\nThe attached patch updates the documentation to provide an easy query over\nsystem catalogs as a way of determining which operators will no longer perform\nwell under RLS or a security-barrier view.\n\nThanks,\nJosh\n\n[1] https://www.postgresql.org/message-id/CAGrP7a2t%2BJbeuxpQY%2BRSvNe4fr3%2B%3D%3DUmyimwV0GCD%2BwcrSSb%3Dw%40mail.gmail.com", "msg_date": "Sat, 18 May 2024 16:54:52 -0700", "msg_from": "Josh Snyder <[email protected]>", "msg_from_op": true, "msg_subject": "PATCH: Add query for operators unusable with RLS to documentation" }, { "msg_contents": "On Sat, 18 May 2024 16:54:52 -0700\nJosh Snyder <[email protected]> wrote:\n\n> When deploying RLS, I was surprised to find that certain queries which used \n> only builtin indexes and operators had dramatically different query plans when \n> a policy is applied. In my case, the query `tsvector @@ tsquery` over a GIN\n> index was no longer able to use that index. I was able to find one other\n> instance [1] of someone being surprised by this behavior on the mailing lists.\n> \n> The docs already discuss the LEAKPROOF semantics in the abstract, but I think \n> they place not enough focus on the idea that builtin operators can be (and\n> frequently are) not leakproof. Based on the query given in the attached patch,\n> I found that 387 operators are not leakproof versus 588 that are.\n> \n> The attached patch updates the documentation to provide an easy query over\n> system catalogs as a way of determining which operators will no longer perform\n> well under RLS or a security-barrier view.\n\nI think it would be worth mentioning an index involving non-LEAKPROOF operator\ncould not work with RLS or a security-barrier view in the documentation. \n(e.g. like https://www.postgresql.org/message-id/2273225.DEBA8KRT0r%40peanuts2)\nIt may help to avoid other users from facing the surprise you got.\n\nHowever, I am not sure if it is appropriate to write the query consulting\npg_amop in this part of the documentation.It is enough to add a reference to\nthe other part describing operation familiar, for example, \"11.10. Operator Classes\nand Operator Families\"? 
Additionally, is it useful to add LEAKPROOF information\nto the result of psql \\dAo(+) meta-comand, or a function that can check given index\nor operator is leakproof or not?\n\nRegards,\nYugo Nagata\n\n> Thanks,\n> Josh\n> \n> [1] https://www.postgresql.org/message-id/CAGrP7a2t%2BJbeuxpQY%2BRSvNe4fr3%2B%3D%3DUmyimwV0GCD%2BwcrSSb%3Dw%40mail.gmail.com\n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Sun, 23 Jun 2024 19:14:09 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add query for operators unusable with RLS to\n documentation" }, { "msg_contents": "On Sun, 23 Jun 2024 19:14:09 +0900\nYugo NAGATA <[email protected]> wrote:\n\n> and Operator Families\"? Additionally, is it useful to add LEAKPROOF information\n> to the result of psql \\dAo(+) meta-comand, or a function that can check given index\n> or operator is leakproof or not?\n\nI worte a pach to implement the proposal above and submitted in the new thread[1].\n\n[1] https://www.postgresql.org/message-id/20240701220817.483f9b645b95611f8b1f65da%40sranhm.sraoss.co.jp\n\nRegards,\nYugo Nagata\n\n\n> Regards,\n> Yugo Nagata\n> \n> > Thanks,\n> > Josh\n> > \n> > [1] https://www.postgresql.org/message-id/CAGrP7a2t%2BJbeuxpQY%2BRSvNe4fr3%2B%3D%3DUmyimwV0GCD%2BwcrSSb%3Dw%40mail.gmail.com\n> \n> \n> -- \n> Yugo NAGATA <[email protected]>\n> \n> \n\n\n-- \nYugo NAGATA <[email protected]>\n\n\n", "msg_date": "Mon, 1 Jul 2024 22:13:23 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PATCH: Add query for operators unusable with RLS to\n documentation" } ]
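The patch's query itself is not reproduced in this thread. As a reconstruction of the kind of catalog query being discussed (not the patch text), the following lists index-usable operators whose underlying functions are not marked LEAKPROOF, consulting pg_amop as Yugo mentions:

    SELECT DISTINCT am.amname,
           o.oprname,
           o.oprleft::regtype  AS lefttype,
           o.oprright::regtype AS righttype
    FROM pg_amop ao
    JOIN pg_operator o ON o.oid = ao.amopopr
    JOIN pg_proc p ON p.oid = o.oprcode
    JOIN pg_am am ON am.oid = ao.amopmethod
    WHERE NOT p.proleakproof
    ORDER BY am.amname, o.oprname;

A qual built from any operator listed here cannot be pushed below a security barrier, so it typically stops being usable as an index condition once RLS or a security-barrier view is involved.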
[ { "msg_contents": "I noticed that PlannedStmt.hasReturning and hasModifyingCTE have an\noutdated comment now that MERGE supports RETURNING (per commit\nc649fa24a)\n\ni.e. these two:\n\n> bool hasReturning; /* is it insert|update|delete RETURNING? */\n\n> bool hasModifyingCTE; /* has insert|update|delete in WITH? */\n\ntransformWithClause() has:\n\n/* must be a data-modifying statement */\nAssert(IsA(cte->ctequery, InsertStmt) ||\n IsA(cte->ctequery, UpdateStmt) ||\n IsA(cte->ctequery, DeleteStmt) ||\n IsA(cte->ctequery, MergeStmt));\n\npstate->p_hasModifyingCTE = true;\n\nwhich eventually makes it into PlannedStmt.hasModifyingCTE.\n\nThe attached trivial patch fixes these.\n\nDavid", "msg_date": "Sun, 19 May 2024 15:20:53 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Minor fixes for couple some comments around MERGE RETURNING" }, { "msg_contents": "On Sun, 19 May 2024 at 15:20, David Rowley <[email protected]> wrote:\n>\n> I noticed that PlannedStmt.hasReturning and hasModifyingCTE have an\n> outdated comment now that MERGE supports RETURNING (per commit\n> c649fa24a)\n>\n> i.e. these two:\n>\n> > bool hasReturning; /* is it insert|update|delete RETURNING? */\n>\n> > bool hasModifyingCTE; /* has insert|update|delete in WITH? */\n\nI've pushed the fix for that.\n\nDavid\n\n\n", "msg_date": "Thu, 23 May 2024 15:25:47 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Minor fixes for couple some comments around MERGE RETURNING" }, { "msg_contents": "On Thu, 23 May 2024 at 04:26, David Rowley <[email protected]> wrote:\n>\n> On Sun, 19 May 2024 at 15:20, David Rowley <[email protected]> wrote:\n> >\n> > I noticed that PlannedStmt.hasReturning and hasModifyingCTE have an\n> > outdated comment now that MERGE supports RETURNING (per commit\n> > c649fa24a)\n> >\n> > i.e. these two:\n> >\n> > > bool hasReturning; /* is it insert|update|delete RETURNING? */\n> >\n> > > bool hasModifyingCTE; /* has insert|update|delete in WITH? */\n>\n> I've pushed the fix for that.\n>\n\nThanks for taking care of that.\n\nI found another couple of similar comments that also needed updating,\nso I've pushed a fix for them too.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 4 Jun 2024 09:39:05 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor fixes for couple some comments around MERGE RETURNING" } ]
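For reference, these are the statement shapes the corrected comments now have to cover: MERGE with a RETURNING list (added by c649fa24a) and, per the transformWithClause() assertion quoted above, MERGE used as a data-modifying statement inside WITH. Table and column names below are placeholders:

    MERGE INTO target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
    RETURNING t.id, t.val;

    WITH changed AS (
        MERGE INTO target t
        USING source s ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET val = s.val
        RETURNING t.id
    )
    SELECT count(*) FROM changed;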
[ { "msg_contents": "\npostgresql 17 (testing)  SLES 15.5 is missing \"repodata\", so I can't add the\nrepo to OpenSUSE...\nPlease add it...  thank you.\n\nurl:\nhttps://download.postgresql.org/pub/repos/zypp/testing/17/suse/sles-15.5-x86_64/\n\n\n\n", "msg_date": "Sun, 19 May 2024 14:03:42 +0200", "msg_from": "André Verwijs <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql 17 (testing) SLES 15.5 missing \"repodata\" .." } ]
[ { "msg_contents": "I'm fairly disturbed about the readiness of pg_createsubscriber.\nThe 040_pg_createsubscriber.pl TAP test is moderately unstable\nin the buildfarm [1], and there are various unaddressed complaints\nat the end of the patch thread (pretty much everything below [2]).\nI guess this is good enough to start beta with, but it's far from\nbeing good enough to ship, IMO. If there were active work going\non to fix these things, I'd feel better, but neither the C code\nnor the test script have been touched since 1 April.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=30&branch=HEAD&member=&stage=pg_basebackupCheck&filter=Submit\n[2] https://www.postgresql.org/message-id/flat/3fa9ef0f-b277-4c13-850a-8ccc04de1406%40eisentraut.org#152dacecfc8f0cf08cbd8ecb79d6d38f\n\n\n", "msg_date": "Sun, 19 May 2024 13:30:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "State of pg_createsubscriber" }, { "msg_contents": "On Sun, May 19, 2024, at 2:30 PM, Tom Lane wrote:\n> I'm fairly disturbed about the readiness of pg_createsubscriber.\n> The 040_pg_createsubscriber.pl TAP test is moderately unstable\n> in the buildfarm [1], and there are various unaddressed complaints\n> at the end of the patch thread (pretty much everything below [2]).\n> I guess this is good enough to start beta with, but it's far from\n> being good enough to ship, IMO. If there were active work going\n> on to fix these things, I'd feel better, but neither the C code\n> nor the test script have been touched since 1 April.\n\nMy bad. :( I'll post patches soon to address all of the points.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sun, May 19, 2024, at 2:30 PM, Tom Lane wrote:I'm fairly disturbed about the readiness of pg_createsubscriber.The 040_pg_createsubscriber.pl TAP test is moderately unstablein the buildfarm [1], and there are various unaddressed complaintsat the end of the patch thread (pretty much everything below [2]).I guess this is good enough to start beta with, but it's far frombeing good enough to ship, IMO.  If there were active work goingon to fix these things, I'd feel better, but neither the C codenor the test script have been touched since 1 April.My bad. :( I'll post patches soon to address all of the points.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Sun, 19 May 2024 14:49:22 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Sun, May 19, 2024 at 11:20 PM Euler Taveira <[email protected]> wrote:\n>\n> On Sun, May 19, 2024, at 2:30 PM, Tom Lane wrote:\n>\n> I'm fairly disturbed about the readiness of pg_createsubscriber.\n> The 040_pg_createsubscriber.pl TAP test is moderately unstable\n> in the buildfarm [1], and there are various unaddressed complaints\n> at the end of the patch thread (pretty much everything below [2]).\n> I guess this is good enough to start beta with, but it's far from\n> being good enough to ship, IMO. If there were active work going\n> on to fix these things, I'd feel better, but neither the C code\n> nor the test script have been touched since 1 April.\n>\n>\n> My bad. :( I'll post patches soon to address all of the points.\n>\n\nJust to summarize, apart from BF failures for which we had some\ndiscussion, I could recall the following open points:\n\n1. 
After promotion, the pre-existing replication objects should be\nremoved (either optionally or always), otherwise, it can lead to a new\nsubscriber not being able to restart or getting some unwarranted data.\n[1][2].\n\n2. Retaining synced slots on new subscribers can lead to unnecessary\nWAL retention and dead rows [3].\n\n3. We need to consider whether some of the messages displayed in\n--dry-run mode are useful or not [4].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1L6HOhT1qifTyuestXkPpkRwY9bOqFd4wydKsN6C3hePA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/flat/3fa9ef0f-b277-4c13-850a-8ccc04de1406%40eisentraut.org#152dacecfc8f0cf08cbd8ecb79d6d38f\n[3] - https://www.postgresql.org/message-id/CAA4eK1KdCb%2B5sjYu6qCMXXdCX1y_ihr8kFzMozq0%3DP%3DauYxgog%40mail.gmail.com\n[4] - https://www.postgresql.org/message-id/flat/3fa9ef0f-b277-4c13-850a-8ccc04de1406%40eisentraut.org#152dacecfc8f0cf08cbd8ecb79d6d38f\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 20 May 2024 12:12:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Sun, May 19, 2024 at 02:49:22PM -0300, Euler Taveira wrote:\n> My bad. :( I'll post patches soon to address all of the points.\n\nPlease note that I have added an open item pointing at this thread.\n--\nMichael", "msg_date": "Mon, 20 May 2024 16:29:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "> Just to summarize, apart from BF failures for which we had some\n> discussion, I could recall the following open points:\n>\n> 1. After promotion, the pre-existing replication objects should be\n> removed (either optionally or always), otherwise, it can lead to a new\n> subscriber not being able to restart or getting some unwarranted data.\n> [1][2].\n>\nI tried to reproduce the case and found a case where pre-existing\nreplication objects can cause unwanted scenario:\n\nSuppose we have a setup of nodes N1, N2 and N3.\nN1 and N2 are in streaming replication where N1 is primary and N2 is standby.\nN3 and N1 are in logical replication where N3 is publisher and N1 is subscriber.\nThe subscription created on N1 is replicated to N2 due to streaming replication.\n\nNow, after we run pg_createsubscriber on N2 and start the N2 server,\nwe get the following logs repetitively:\n2024-05-22 11:37:18.619 IST [27344] ERROR: could not start WAL\nstreaming: ERROR: replication slot \"test1\" is active for PID 27202\n2024-05-22 11:37:18.622 IST [27317] LOG: background worker \"logical\nreplication apply worker\" (PID 27344) exited with exit code 1\n2024-05-22 11:37:23.610 IST [27349] LOG: logical replication apply\nworker for subscription \"test1\" has started\n2024-05-22 11:37:23.624 IST [27349] ERROR: could not start WAL\nstreaming: ERROR: replication slot \"test1\" is active for PID 27202\n2024-05-22 11:37:23.627 IST [27317] LOG: background worker \"logical\nreplication apply worker\" (PID 27349) exited with exit code 1\n2024-05-22 11:37:28.616 IST [27382] LOG: logical replication apply\nworker for subscription \"test1\" has started\n\nNote: 'test1' is the name of the subscription created on N1 initially\nand by default, slot name is the same as the subscription name.\n\nOnce the N2 server is started after running pg_createsubscriber, the\nsubscription that was earlier replicated by streaming replication will\nnow try to connect to the publisher. 
Since the subscription name in N2\nis the same as the subscription created in N1, it will not be able to\nstart a replication slot as the slot with the same name is active for\nlogical replication between N3 and N1.\n\nAlso, there would be a case where N1 becomes down for some time. Then\nin that case subscription on N2 will connect to the publication on N3\nand now data from N3 will be replicated to N2 instead of N1. And once\nN1 is up again, subscription on N1 will not be able to connect to\npublication on N3 as it is already connected to N2. This can lead to\ndata inconsistency.\n\nThis error did not happen before running pg_createsubscriber on\nstandby node N2, because there is no 'logical replication launcher'\nprocess on standby node.\n\nThanks and Regards,\nShlok Kyal\n\n\n", "msg_date": "Wed, 22 May 2024 14:45:19 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Wed, May 22, 2024 at 2:45 PM Shlok Kyal <[email protected]> wrote:\n>\n> > Just to summarize, apart from BF failures for which we had some\n> > discussion, I could recall the following open points:\n> >\n> > 1. After promotion, the pre-existing replication objects should be\n> > removed (either optionally or always), otherwise, it can lead to a new\n> > subscriber not being able to restart or getting some unwarranted data.\n> > [1][2].\n> >\n> I tried to reproduce the case and found a case where pre-existing\n> replication objects can cause unwanted scenario:\n>\n> Suppose we have a setup of nodes N1, N2 and N3.\n> N1 and N2 are in streaming replication where N1 is primary and N2 is standby.\n> N3 and N1 are in logical replication where N3 is publisher and N1 is subscriber.\n> The subscription created on N1 is replicated to N2 due to streaming replication.\n>\n> Now, after we run pg_createsubscriber on N2 and start the N2 server,\n> we get the following logs repetitively:\n> 2024-05-22 11:37:18.619 IST [27344] ERROR: could not start WAL\n> streaming: ERROR: replication slot \"test1\" is active for PID 27202\n> 2024-05-22 11:37:18.622 IST [27317] LOG: background worker \"logical\n> replication apply worker\" (PID 27344) exited with exit code 1\n> 2024-05-22 11:37:23.610 IST [27349] LOG: logical replication apply\n> worker for subscription \"test1\" has started\n> 2024-05-22 11:37:23.624 IST [27349] ERROR: could not start WAL\n> streaming: ERROR: replication slot \"test1\" is active for PID 27202\n> 2024-05-22 11:37:23.627 IST [27317] LOG: background worker \"logical\n> replication apply worker\" (PID 27349) exited with exit code 1\n> 2024-05-22 11:37:28.616 IST [27382] LOG: logical replication apply\n> worker for subscription \"test1\" has started\n>\n> Note: 'test1' is the name of the subscription created on N1 initially\n> and by default, slot name is the same as the subscription name.\n>\n> Once the N2 server is started after running pg_createsubscriber, the\n> subscription that was earlier replicated by streaming replication will\n> now try to connect to the publisher. Since the subscription name in N2\n> is the same as the subscription created in N1, it will not be able to\n> start a replication slot as the slot with the same name is active for\n> logical replication between N3 and N1.\n>\n> Also, there would be a case where N1 becomes down for some time. Then\n> in that case subscription on N2 will connect to the publication on N3\n> and now data from N3 will be replicated to N2 instead of N1. 
And once\n> N1 is up again, subscription on N1 will not be able to connect to\n> publication on N3 as it is already connected to N2. This can lead to\n> data inconsistency.\n>\n\nSo, what shall we do about such cases? I think by default we can\nremove all pre-existing subscriptions and publications on the promoted\nstandby or instead we can remove them based on some switch. If we want\nto go with this idea then we might need to distinguish the between\npre-existing subscriptions and the ones created by this tool.\n\nThe other case I remember adding an option in this tool was to avoid\nspecifying slots, pubs, etc. for each database. See [1]. We can\nprobably leave if the same is not important but we never reached the\nconclusion of same.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2Br96SyHYHx7BaTtGX0eviqpbbkSu01MEzwV5b2VFXP6g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 22 May 2024 17:12:40 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Wed, May 22, 2024 at 7:42 AM Amit Kapila <[email protected]> wrote:\n> So, what shall we do about such cases? I think by default we can\n> remove all pre-existing subscriptions and publications on the promoted\n> standby or instead we can remove them based on some switch. If we want\n> to go with this idea then we might need to distinguish the between\n> pre-existing subscriptions and the ones created by this tool.\n>\n> The other case I remember adding an option in this tool was to avoid\n> specifying slots, pubs, etc. for each database. See [1]. We can\n> probably leave if the same is not important but we never reached the\n> conclusion of same.\n\nAnother option that we should at least consider is \"do nothing\". In a\ncase like the one Shlok describes, how are we supposed to know what\nthe right thing to do is? Is it unreasonable to say that if the user\ndoesn't want those publications or subscriptions to exist, the user\nshould drop them?\n\nMaybe it is unreasonable to say that, but it seems to me we should at\nleast talk about that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 May 2024 09:52:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Wed, May 22, 2024 at 9:52 AM Robert Haas <[email protected]> wrote:\n> Another option that we should at least consider is \"do nothing\". In a\n> case like the one Shlok describes, how are we supposed to know what\n> the right thing to do is? Is it unreasonable to say that if the user\n> doesn't want those publications or subscriptions to exist, the user\n> should drop them?\n>\n> Maybe it is unreasonable to say that, but it seems to me we should at\n> least talk about that.\n\nAs another option, maybe we could disable subscriptions, so that\nnothing happens when the server is first started, and then the user\ncould decide after that what they want to do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 May 2024 10:29:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Mon, May 20, 2024 at 2:42 AM Amit Kapila <[email protected]> wrote:\n> Just to summarize, apart from BF failures for which we had some\n> discussion, I could recall the following open points:\n>\n> 1. 
After promotion, the pre-existing replication objects should be\n> removed (either optionally or always), otherwise, it can lead to a new\n> subscriber not being able to restart or getting some unwarranted data.\n> [1][2].\n>\n> 2. Retaining synced slots on new subscribers can lead to unnecessary\n> WAL retention and dead rows [3].\n>\n> 3. We need to consider whether some of the messages displayed in\n> --dry-run mode are useful or not [4].\n\nAmit, thanks for summarzing your understanding of the situation. Tom,\nis this list complete, to your knowledge? The original thread is quite\ncomplex and it's hard to pick out what the open items actually are.\n:-(\n\nI would like to see this open item broken up into multiple open items,\none per issue.\n\nLink [4] goes to a message that doesn't seem to relate to --dry-run.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 22 May 2024 10:32:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Wed, May 22, 2024 at 8:00 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, May 22, 2024 at 9:52 AM Robert Haas <[email protected]> wrote:\n> > Another option that we should at least consider is \"do nothing\". In a\n> > case like the one Shlok describes, how are we supposed to know what\n> > the right thing to do is? Is it unreasonable to say that if the user\n> > doesn't want those publications or subscriptions to exist, the user\n> > should drop them?\n> >\n\nThis tool's intended purpose is to speed up the creation of a\nsubscriber node and for that one won't need the subscriptions already\npresent on the existing primary/publisher node from which the new\nsubscriber node is going to get the data. Additionally, the ERRORs\nshown by Shlok will occur even during the command (performed by\npg_createsubscriber) which will probably be confusing.\n\n> > Maybe it is unreasonable to say that, but it seems to me we should at\n> > least talk about that.\n>\n> As another option, maybe we could disable subscriptions, so that\n> nothing happens when the server is first started, and then the user\n> could decide after that what they want to do.\n>\n\nYeah, this would be worth considering. Note that even if the user\nwants to retain such pre-existing subscriptions and enable them, they\nneed more steps than just to enable these to avoid duplicate data\nissues or ERRORs as shown in Shlok's test.\n\nSo, we have the following options: (a) by default drop the\npre-existing subscriptions, (b) by default disable the pre-existing\nsubscriptions, and add a Note in the docs that users can take\nnecessary actions to enable or drop them. Now, we can even think of\nproviding a switch to retain the pre-existing subscriptions or\npublications as the user may have some use case where it can be\nhelpful for her. For example, retaining publications can help in\ncreating a bi-directional setup.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 23 May 2024 12:18:14 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "Dear Amit, Robert,\r\n\r\n> So, we have the following options: (a) by default drop the\r\n> pre-existing subscriptions, (b) by default disable the pre-existing\r\n> subscriptions, and add a Note in the docs that users can take\r\n> necessary actions to enable or drop them. 
Now, we can even think of\r\n> providing a switch to retain the pre-existing subscriptions or\r\n> publications as the user may have some use case where it can be\r\n> helpful for her. For example, retaining publications can help in\r\n> creating a bi-directional setup.\r\n\r\nAnother point we should consider is the replication slot. If standby server has\r\nhad slots and they were forgotten, WAL files won't be discarded so disk full\r\nfailure will happen. v2-0004 proposed in [1] drops replication slots when their\r\nfailover option is true. This can partially solve the issue, but what should be\r\nfor other slots?\r\n\r\n[1]: https://www.postgresql.org/message-id/CANhcyEV6q1Vhd37i1axUeScLi0UAGVxta1LDa0BV0Eh--TcPMg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Thu, 23 May 2024 06:55:40 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: State of pg_createsubscriber" }, { "msg_contents": "On Wed, May 22, 2024 at 8:02 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, May 20, 2024 at 2:42 AM Amit Kapila <[email protected]> wrote:\n> > Just to summarize, apart from BF failures for which we had some\n> > discussion, I could recall the following open points:\n> >\n> > 1. After promotion, the pre-existing replication objects should be\n> > removed (either optionally or always), otherwise, it can lead to a new\n> > subscriber not being able to restart or getting some unwarranted data.\n> > [1][2].\n> >\n> > 2. Retaining synced slots on new subscribers can lead to unnecessary\n> > WAL retention and dead rows [3].\n> >\n> > 3. We need to consider whether some of the messages displayed in\n> > --dry-run mode are useful or not [4].\n>\n> Amit, thanks for summarzing your understanding of the situation. Tom,\n> is this list complete, to your knowledge? The original thread is quite\n> complex and it's hard to pick out what the open items actually are.\n> :-(\n>\n> I would like to see this open item broken up into multiple open items,\n> one per issue.\n>\n> Link [4] goes to a message that doesn't seem to relate to --dry-run.\n>\n\nSorry, for the wrong link. See [1] for the correct link for --dry-run\nrelated suggestion:\n\n[1] https://www.postgresql.org/message-id/CAA4eK1J2fAvsJ2HihbWJ_GxETd6sdqSMrZdCVJEutRZRpm1MEQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 23 May 2024 14:10:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Thu, May 23, 2024 at 4:40 AM Amit Kapila <[email protected]> wrote:\n> Sorry, for the wrong link. See [1] for the correct link for --dry-run\n> related suggestion:\n>\n> [1] https://www.postgresql.org/message-id/CAA4eK1J2fAvsJ2HihbWJ_GxETd6sdqSMrZdCVJEutRZRpm1MEQ%40mail.gmail.com\n\nYeah, those should definitely be fixed. 
Seems simple enough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 23 May 2024 09:49:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Mon, May 20, 2024 at 12:12 PM Amit Kapila <[email protected]> wrote:\n>\n> On Sun, May 19, 2024 at 11:20 PM Euler Taveira <[email protected]> wrote:\n> >\n> > On Sun, May 19, 2024, at 2:30 PM, Tom Lane wrote:\n> >\n> > I'm fairly disturbed about the readiness of pg_createsubscriber.\n> > The 040_pg_createsubscriber.pl TAP test is moderately unstable\n> > in the buildfarm [1], and there are various unaddressed complaints\n> > at the end of the patch thread (pretty much everything below [2]).\n> > I guess this is good enough to start beta with, but it's far from\n> > being good enough to ship, IMO. If there were active work going\n> > on to fix these things, I'd feel better, but neither the C code\n> > nor the test script have been touched since 1 April.\n> >\n> >\n> > My bad. :( I'll post patches soon to address all of the points.\n> >\n>\n> Just to summarize, apart from BF failures for which we had some\n> discussion, I could recall the following open points:\n>\n> 1. After promotion, the pre-existing replication objects should be\n> removed (either optionally or always), otherwise, it can lead to a new\n> subscriber not being able to restart or getting some unwarranted data.\n> [1][2].\n>\n> 2. Retaining synced slots on new subscribers can lead to unnecessary\n> WAL retention and dead rows [3].\n>\n> 3. We need to consider whether some of the messages displayed in\n> --dry-run mode are useful or not [4].\n>\n\nThe recent commits should silence BF failures and resolve point number\n2. But we haven't done anything yet for 1 and 3. For 3, we have a\npatch in email [1] (v3-0005-Avoid*) which can be reviewed and\ncommitted but point number 1 needs discussion. Apart from that\nsomewhat related to point 1, Kuroda-San has raised a point in an email\n[2] for replication slots. Shlok has presented a case in this thread\n[3] where the problem due to point 1 can cause ERRORs or can cause\ndata inconsistency.\n\nNow, the easiest way out here is that we accept the issues with the\npre-existing subscriptions and replication slots cases and just\ndocument them for now with the intention to work on those in the next\nversion. OTOH, if there are no major challenges, we can try to\nimplement a patch for them as well as see how it goes.\n\n[1] https://www.postgresql.org/message-id/CANhcyEWGfp7_AGg2zZUgJF_VYTCix01yeY8ZX9woz%2B03WCMPRg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/OSBPR01MB25525C17E2EF5FC81152F6C8F5F42%40OSBPR01MB2552.jpnprd01.prod.outlook.com\n[3] https://www.postgresql.org/message-id/CANhcyEWvimA1-f6hSrA%3D9qkfR5SonFb56b36M%2B%2BvT%3DLiFj%3D76g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 18 Jun 2024 09:29:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On 18.06.24 05:59, Amit Kapila wrote:\n>> 1. After promotion, the pre-existing replication objects should be\n>> removed (either optionally or always), otherwise, it can lead to a new\n>> subscriber not being able to restart or getting some unwarranted data.\n>> [1][2].\n>>\n>> 2. Retaining synced slots on new subscribers can lead to unnecessary\n>> WAL retention and dead rows [3].\n>>\n>> 3. 
We need to consider whether some of the messages displayed in\n>> --dry-run mode are useful or not [4].\n>>\n> \n> The recent commits should silence BF failures and resolve point number\n> 2. But we haven't done anything yet for 1 and 3. For 3, we have a\n> patch in email [1] (v3-0005-Avoid*) which can be reviewed and\n> committed but point number 1 needs discussion. Apart from that\n> somewhat related to point 1, Kuroda-San has raised a point in an email\n> [2] for replication slots. Shlok has presented a case in this thread\n> [3] where the problem due to point 1 can cause ERRORs or can cause\n> data inconsistency.\n> \n> Now, the easiest way out here is that we accept the issues with the\n> pre-existing subscriptions and replication slots cases and just\n> document them for now with the intention to work on those in the next\n> version. OTOH, if there are no major challenges, we can try to\n> implement a patch for them as well as see how it goes.\n\nThis has gotten much too confusing to keep track of.\n\nI suggest, if anyone has anything they want considered for \npg_createsubscriber for PG17 at this point, they start a new thread, one \nfor each topic, ideally with a subject like \"pg_createsubscriber: \nImprove this thing\", provide a self-contained description of the issue, \nand include a patch if one is available.\n\n\n\n", "msg_date": "Tue, 18 Jun 2024 08:43:50 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Tue, Jun 18, 2024, at 12:59 AM, Amit Kapila wrote:\n> On Mon, May 20, 2024 at 12:12 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Sun, May 19, 2024 at 11:20 PM Euler Taveira <[email protected]> wrote:\n> > >\n> > > On Sun, May 19, 2024, at 2:30 PM, Tom Lane wrote:\n> > >\n> > > I'm fairly disturbed about the readiness of pg_createsubscriber.\n> > > The 040_pg_createsubscriber.pl TAP test is moderately unstable\n> > > in the buildfarm [1], and there are various unaddressed complaints\n> > > at the end of the patch thread (pretty much everything below [2]).\n> > > I guess this is good enough to start beta with, but it's far from\n> > > being good enough to ship, IMO. If there were active work going\n> > > on to fix these things, I'd feel better, but neither the C code\n> > > nor the test script have been touched since 1 April.\n> > >\n> > >\n> > > My bad. :( I'll post patches soon to address all of the points.\n> > >\n> >\n> > Just to summarize, apart from BF failures for which we had some\n> > discussion, I could recall the following open points:\n> >\n> > 1. After promotion, the pre-existing replication objects should be\n> > removed (either optionally or always), otherwise, it can lead to a new\n> > subscriber not being able to restart or getting some unwarranted data.\n> > [1][2].\n> >\n> > 2. Retaining synced slots on new subscribers can lead to unnecessary\n> > WAL retention and dead rows [3].\n> >\n> > 3. We need to consider whether some of the messages displayed in\n> > --dry-run mode are useful or not [4].\n> >\n> \n> The recent commits should silence BF failures and resolve point number\n> 2. But we haven't done anything yet for 1 and 3. For 3, we have a\n> patch in email [1] (v3-0005-Avoid*) which can be reviewed and\n> committed but point number 1 needs discussion. Apart from that\n> somewhat related to point 1, Kuroda-San has raised a point in an email\n> [2] for replication slots. 
Shlok has presented a case in this thread\n> [3] where the problem due to point 1 can cause ERRORs or can cause\n> data inconsistency.\n\nI read v3-0005 and it seems to silence (almost) all \"write\" messages. Does it\nintend to avoid the misinterpretation that the dry run mode is writing\nsomething? It is dry run mode! If I run a tool in dry run mode, I expect it to\nexecute some verifications and print useful messages so I can evaluate if it is\nok to run it. Maybe it is not your expectation for dry run mode.\n\nI think if it not clear, let's inform that it changes nothing in dry run mode.\n\npg_createsubscriber: no modifications are done\n\nas a first message in dry run mode. I agree with you when you pointed out that\nsome messages are misleading.\n\npg_createsubscriber: hint: If pg_createsubscriber fails after this\npoint, you must recreate the physical replica before continuing.\n\nMaybe for this one, we omit the fake information, like:\n\npg_createsubscriber: setting the replication progress on database \"postgres\"\n\nI will post a patch to address the messages once we agree what needs to be\nchanged.\n\nRegarding 3, publications and subscriptions are ok to remove. You are not\nallowed to create them on standby, hence, all publications and subscriptions\nare streamed from primary. However, I'm wondering if you want to remove the\npublications. Replication slots on a standby server are \"invalidated\" despite\nof the wal_status is saying \"reserved\" (I think it is an oversight in the\ndesign that implements slot invalidation), however, required WAL files have\nalready been removed because of pg_resetwal (see modify_subscriber_sysid()).\nThe scenario is to promote a standby server, run pg_resetwal on it and check\npg_replication_slots.\n\nDo you have any other scenarios in mind?\n\n> Now, the easiest way out here is that we accept the issues with the\n> pre-existing subscriptions and replication slots cases and just\n> document them for now with the intention to work on those in the next\n> version. OTOH, if there are no major challenges, we can try to\n> implement a patch for them as well as see how it goes.\n\nAgree.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Tue, Jun 18, 2024, at 12:59 AM, Amit Kapila wrote:On Mon, May 20, 2024 at 12:12 PM Amit Kapila <[email protected]> wrote:>> On Sun, May 19, 2024 at 11:20 PM Euler Taveira <[email protected]> wrote:> >> > On Sun, May 19, 2024, at 2:30 PM, Tom Lane wrote:> >> > I'm fairly disturbed about the readiness of pg_createsubscriber.> > The 040_pg_createsubscriber.pl TAP test is moderately unstable> > in the buildfarm [1], and there are various unaddressed complaints> > at the end of the patch thread (pretty much everything below [2]).> > I guess this is good enough to start beta with, but it's far from> > being good enough to ship, IMO.  If there were active work going> > on to fix these things, I'd feel better, but neither the C code> > nor the test script have been touched since 1 April.> >> >> > My bad. :( I'll post patches soon to address all of the points.> >>> Just to summarize, apart from BF failures for which we had some> discussion, I could recall the following open points:>> 1. After promotion, the pre-existing replication objects should be> removed (either optionally or always), otherwise, it can lead to a new> subscriber not being able to restart or getting some unwarranted data.> [1][2].>> 2. Retaining synced slots on new subscribers can lead to unnecessary> WAL retention and dead rows [3].>> 3. 
We need to consider whether some of the messages displayed in> --dry-run mode are useful or not [4].>The recent commits should silence BF failures and resolve point number2. But we haven't done anything yet for 1 and 3. For 3, we have apatch in email [1] (v3-0005-Avoid*) which can be reviewed andcommitted but point number 1 needs discussion. Apart from thatsomewhat related to point 1, Kuroda-San has raised a point in an email[2] for replication slots. Shlok has presented a case in this thread[3] where the problem due to point 1 can cause ERRORs or can causedata inconsistency.I read v3-0005 and it seems to silence (almost) all \"write\" messages. Does itintend to avoid the misinterpretation that the dry run mode is writingsomething? It is dry run mode! If I run a tool in dry run mode, I expect it toexecute some verifications and print useful messages so I can evaluate if it isok to run it. Maybe it is not your expectation for dry run mode.I think if it not clear, let's inform that it changes nothing in dry run mode.pg_createsubscriber: no modifications are doneas a first message in dry run mode. I agree with you when you pointed out thatsome messages are misleading.pg_createsubscriber: hint: If pg_createsubscriber fails after thispoint, you must recreate the physical replica before continuing.Maybe for this one, we omit the fake information, like:pg_createsubscriber: setting the replication progress on database \"postgres\"I will post a patch to address the messages once we agree what needs to bechanged.Regarding 3, publications and subscriptions are ok to remove. You are notallowed to create them on standby, hence, all publications and subscriptionsare streamed from primary. However, I'm wondering if you want to remove thepublications. Replication slots on a standby server are \"invalidated\" despiteof the wal_status is saying \"reserved\" (I think it is an oversight in thedesign that implements slot invalidation), however, required WAL files havealready been removed because of pg_resetwal (see modify_subscriber_sysid()).The scenario is to promote a standby server, run pg_resetwal on it and checkpg_replication_slots.Do you have any other scenarios in mind?Now, the easiest way out here is that we accept the issues with thepre-existing subscriptions and replication slots cases and justdocument them for now with the intention to work on those in the nextversion. OTOH, if there are no major challenges, we can try toimplement a patch for them as well as see how it goes.Agree.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Tue, 18 Jun 2024 04:10:51 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Tue, Jun 18, 2024 at 12:41 PM Euler Taveira <[email protected]> wrote:\n>\n> On Tue, Jun 18, 2024, at 12:59 AM, Amit Kapila wrote:\n>\n> On Mon, May 20, 2024 at 12:12 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Sun, May 19, 2024 at 11:20 PM Euler Taveira <[email protected]> wrote:\n> > >\n> > > On Sun, May 19, 2024, at 2:30 PM, Tom Lane wrote:\n> > >\n> > > I'm fairly disturbed about the readiness of pg_createsubscriber.\n> > > The 040_pg_createsubscriber.pl TAP test is moderately unstable\n> > > in the buildfarm [1], and there are various unaddressed complaints\n> > > at the end of the patch thread (pretty much everything below [2]).\n> > > I guess this is good enough to start beta with, but it's far from\n> > > being good enough to ship, IMO. 
If there were active work going\n> > > on to fix these things, I'd feel better, but neither the C code\n> > > nor the test script have been touched since 1 April.\n> > >\n> > >\n> > > My bad. :( I'll post patches soon to address all of the points.\n> > >\n> >\n> > Just to summarize, apart from BF failures for which we had some\n> > discussion, I could recall the following open points:\n> >\n> > 1. After promotion, the pre-existing replication objects should be\n> > removed (either optionally or always), otherwise, it can lead to a new\n> > subscriber not being able to restart or getting some unwarranted data.\n> > [1][2].\n> >\n> > 2. Retaining synced slots on new subscribers can lead to unnecessary\n> > WAL retention and dead rows [3].\n> >\n> > 3. We need to consider whether some of the messages displayed in\n> > --dry-run mode are useful or not [4].\n> >\n>\n> The recent commits should silence BF failures and resolve point number\n> 2. But we haven't done anything yet for 1 and 3. For 3, we have a\n> patch in email [1] (v3-0005-Avoid*) which can be reviewed and\n> committed but point number 1 needs discussion. Apart from that\n> somewhat related to point 1, Kuroda-San has raised a point in an email\n> [2] for replication slots. Shlok has presented a case in this thread\n> [3] where the problem due to point 1 can cause ERRORs or can cause\n> data inconsistency.\n>\n>\n> I read v3-0005 and it seems to silence (almost) all \"write\" messages. Does it\n> intend to avoid the misinterpretation that the dry run mode is writing\n> something? It is dry run mode! If I run a tool in dry run mode, I expect it to\n> execute some verifications and print useful messages so I can evaluate if it is\n> ok to run it. Maybe it is not your expectation for dry run mode.\n>\n\nI haven't studied the patch so can't comment but the intention was to\nnot print some unrelated write messages. I have shared my observation\nin the email [1].\n\n> I think if it not clear, let's inform that it changes nothing in dry run mode.\n>\n> pg_createsubscriber: no modifications are done\n>\n> as a first message in dry run mode. I agree with you when you pointed out that\n> some messages are misleading.\n>\n> pg_createsubscriber: hint: If pg_createsubscriber fails after this\n> point, you must recreate the physical replica before continuing.\n>\n> Maybe for this one, we omit the fake information, like:\n>\n> pg_createsubscriber: setting the replication progress on database \"postgres\"\n>\n\nI think we don't need to display this message as we are not going to\ndo anything for this in the --dry-run mode. We can even move the\nrelated code in (!dry_run) check.\n\n> I will post a patch to address the messages once we agree what needs to be\n> changed.\n>\n\nI suggest we can start a new thread with the messages shared in the\nemail [1] and your response for each one of those.\n\n> Regarding 3, publications and subscriptions are ok to remove. You are not\n> allowed to create them on standby, hence, all publications and subscriptions\n> are streamed from primary. 
However, I'm wondering if you want to remove the\n> publications.\n>\n\nI am not so sure of publications but we should remove subscriptions as\nthere are clear problems with those as shown by Shlok in this thread.\n\n> Replication slots on a standby server are \"invalidated\" despite\n> of the wal_status is saying \"reserved\" (I think it is an oversight in the\n> design that implements slot invalidation), however, required WAL files have\n> already been removed because of pg_resetwal (see modify_subscriber_sysid()).\n> The scenario is to promote a standby server, run pg_resetwal on it and check\n> pg_replication_slots.\n>\n\nIdeally, invalidated slots shouldn't create any problems but it is\nbetter that we discuss this also as a separate problem in new thread.\n\n> Do you have any other scenarios in mind?\n>\n\nNo, so we have three issues to discuss (a) some unwarranted messages\nin --dry-run mode; (b) whether to remove pre-existing subscriptions\nduring conversion; (c) whether to remove pre-existing replication\nslots.\n\nWould you like to start three new threads for each of these or would\nyou like Kuroda-San or me to start some or all?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1J2fAvsJ2HihbWJ_GxETd6sdqSMrZdCVJEutRZRpm1MEQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 Jun 2024 09:21:50 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Tue, Jun 18, 2024 at 12:13 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 18.06.24 05:59, Amit Kapila wrote:\n> >> 1. After promotion, the pre-existing replication objects should be\n> >> removed (either optionally or always), otherwise, it can lead to a new\n> >> subscriber not being able to restart or getting some unwarranted data.\n> >> [1][2].\n> >>\n> >> 2. Retaining synced slots on new subscribers can lead to unnecessary\n> >> WAL retention and dead rows [3].\n> >>\n> >> 3. We need to consider whether some of the messages displayed in\n> >> --dry-run mode are useful or not [4].\n> >>\n> >\n> > The recent commits should silence BF failures and resolve point number\n> > 2. But we haven't done anything yet for 1 and 3. For 3, we have a\n> > patch in email [1] (v3-0005-Avoid*) which can be reviewed and\n> > committed but point number 1 needs discussion. Apart from that\n> > somewhat related to point 1, Kuroda-San has raised a point in an email\n> > [2] for replication slots. Shlok has presented a case in this thread\n> > [3] where the problem due to point 1 can cause ERRORs or can cause\n> > data inconsistency.\n> >\n> > Now, the easiest way out here is that we accept the issues with the\n> > pre-existing subscriptions and replication slots cases and just\n> > document them for now with the intention to work on those in the next\n> > version. OTOH, if there are no major challenges, we can try to\n> > implement a patch for them as well as see how it goes.\n>\n> This has gotten much too confusing to keep track of.\n>\n> I suggest, if anyone has anything they want considered for\n> pg_createsubscriber for PG17 at this point, they start a new thread, one\n> for each topic, ideally with a subject like \"pg_createsubscriber:\n> Improve this thing\", provide a self-contained description of the issue,\n> and include a patch if one is available.\n>\n\nFair enough. 
In my mind, we have three pending issues to discuss and I\nhave responded to an email to see if Euler can start individual\nthreads for those, otherwise, I'll do it.\n\nWe can close the open item pointing to this thread.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 19 Jun 2024 09:25:24 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Tue, Jun 18, 2024 at 11:55 PM Amit Kapila <[email protected]> wrote:\n> We can close the open item pointing to this thread.\n\nDone, and for the record I also asked for the thread to be split, back\non May 22.\n\nIMHO, we shouldn't add open items pointing to general complaints like\nthe one that started this thread. Open items need to be specific and\nactionable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jun 2024 08:04:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Wed, Jun 19, 2024, at 12:51 AM, Amit Kapila wrote:\n> Ideally, invalidated slots shouldn't create any problems but it is\n> better that we discuss this also as a separate problem in new thread.\n\nOk.\n\n> > Do you have any other scenarios in mind?\n> >\n> \n> No, so we have three issues to discuss (a) some unwarranted messages\n> in --dry-run mode; (b) whether to remove pre-existing subscriptions\n> during conversion; (c) whether to remove pre-existing replication\n> slots.\n> \n> Would you like to start three new threads for each of these or would\n> you like Kuroda-San or me to start some or all?\n\nI will open new threads soon if you don't.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Jun 19, 2024, at 12:51 AM, Amit Kapila wrote:Ideally, invalidated slots shouldn't create any problems but it isbetter that we discuss this also as a separate problem in new thread.Ok.> Do you have any other scenarios in mind?>No, so we have three issues to discuss (a) some unwarranted messagesin --dry-run mode; (b) whether to remove pre-existing subscriptionsduring conversion; (c) whether to remove pre-existing replicationslots.Would you like to start three new threads for each of these or wouldyou like Kuroda-San or me to start some or all?I will open new threads soon if you don't.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 19 Jun 2024 18:05:12 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" }, { "msg_contents": "On Thu, Jun 20, 2024 at 2:35 AM Euler Taveira <[email protected]> wrote:\n>\n> I will open new threads soon if you don't.\n>\n\nOkay, thanks. I'll wait for you to start new threads and then we can\ndiscuss the respective problems in those threads.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 Jun 2024 09:22:34 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: State of pg_createsubscriber" } ]
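Until pg_createsubscriber handles them itself, the manual cleanup discussed above for a subscription carried over from the old primary (for example "test1" in Shlok's scenario) looks roughly like this on the converted standby; setting slot_name to NONE first keeps DROP SUBSCRIPTION from trying to remove a remote slot that this copy of the subscription never owned:

    ALTER SUBSCRIPTION test1 DISABLE;
    ALTER SUBSCRIPTION test1 SET (slot_name = NONE);
    DROP SUBSCRIPTION test1;

Stopping after the first statement matches Robert's "disable by default" idea and leaves the choice to drop or re-enable with the user.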
[ { "msg_contents": "Hi all,\n\nI have a couple of extra toys for injection points in my bucket that\nI'd like to propose for integration in v18, based on some feedback I\nhave received:\n1) Preload an injection point into the backend-level cache without\nrunning it. This has come up because an injection point run for the\nfirst time needs to be loaded with load_external_function that\ninternally does some allocations, and this would not work if the\ninjection point is in a critical section. Being able to preload an \ninjection point requires an extra macro, called\nINJECTION_POINT_PRELOAD. Perhaps \"load\" makes more sense as keyword,\nhere.\n2) Grab values at runtime from the code path where an injection point\nis run and give them to the callback. The case here is to be able to\ndo some dynamic manipulation or a stack, reads of some runtime data or\neven decide of a different set of actions in a callback based on what\nthe input has provided. One case that I've been playing with here is\nthe dynamic manipulation of pages in specific code paths to stress\ninternal checks, as one example. This introduces a 1-argument\nversion, as multiple args could always be passed down to the callback\nwithin a structure.\n\nThe in-core module injection_points is extended to provide a SQL\ninterface to be able to do the preloading or define a callback with\narguments. The two changes are split into patches of their own.\n\nThese new facilities could be backpatched if there is a need for them\nin the future in stable branches, as these are aimed for tests and the\nchanges do not introduce any ABI breakages with the existing APIs or\nthe in-core module.\n\nThoughts and comments are welcome.\n--\nMichael", "msg_date": "Mon, 20 May 2024 12:18:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Injection points: preloading and runtime arguments" }, { "msg_contents": "Hi!\n\n> On 20 May 2024, at 08:18, Michael Paquier <[email protected]> wrote:\n\nBoth features look useful to me.\nI've tried to rebase my test of CV sleep during multixact generation[0]. I used it like this:\n\n INJECTION_POINT_PRELOAD(\"GetNewMultiXactId-done\");\n multi = GetNewMultiXactId(nmembers, &offset); // starts critsection\n INJECTION_POINT(\"GetNewMultiXactId-done\");\n\nAnd it fails like this:\n\n2024-05-20 16:50:40.430 +05 [21830] 001_multixact.pl LOG: statement: select test_create_multixact();\nTRAP: failed Assert(\"CritSectionCount == 0 || (context)->allowInCritSection\"), File: \"mcxt.c\", Line: 1185, PID: 21830\n0 postgres 0x0000000101452ed0 ExceptionalCondition + 220\n1 postgres 0x00000001014a6050 MemoryContextAlloc + 208\n2 postgres 0x00000001011c3bf0 dsm_create_descriptor + 72\n3 postgres 0x00000001011c3ef4 dsm_attach + 400\n4 postgres 0x00000001014990d8 dsa_attach + 24\n5 postgres 0x00000001011c716c init_dsm_registry + 240\n6 postgres 0x00000001011c6e60 GetNamedDSMSegment + 456\n7 injection_points.dylib 0x0000000101c871f8 injection_init_shmem + 60\n8 injection_points.dylib 0x0000000101c86f1c injection_wait + 64\n9 postgres 0x000000010148e228 InjectionPointRunInternal + 376\n10 postgres 0x000000010148e0a4 InjectionPointRun + 32\n11 postgres 0x0000000100cab798 MultiXactIdCreateFromMembers + 344\n12 postgres 0x0000000100cab604 MultiXactIdCreate + 312\n\nAm I doing something wrong? 
Seems like extension have to know too that it is preloaded.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/0925F9A9-4D53-4B27-A87E-3D83A757B0E0%40yandex-team.ru\n\n", "msg_date": "Mon, 20 May 2024 17:01:15 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "> On 20 May 2024, at 17:01, Andrey M. Borodin <[email protected]> wrote:\n\nUgh, accidentally send without attaching the patch itself. Sorry for the noise.\n\n\nBest regards, Andrey Borodin.", "msg_date": "Mon, 20 May 2024 17:03:18 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Mon, May 20, 2024 at 05:01:15PM +0500, Andrey M. Borodin wrote:\n> Both features look useful to me.\n> I've tried to rebase my test of CV sleep during multixact generation[0]. I used it like this:\n> \n> INJECTION_POINT_PRELOAD(\"GetNewMultiXactId-done\");\n> multi = GetNewMultiXactId(nmembers, &offset); // starts critsection\n> INJECTION_POINT(\"GetNewMultiXactId-done\");\n\nThanks for the feedback.\n\n> And it fails like this:\n> \n> 2024-05-20 16:50:40.430 +05 [21830] 001_multixact.pl LOG: statement: select test_create_multixact();\n> TRAP: failed Assert(\"CritSectionCount == 0 || (context)->allowInCritSection\"), File: \"mcxt.c\", Line: 1185, PID: 21830\n> 0 postgres 0x0000000101452ed0 ExceptionalCondition + 220\n> 1 postgres 0x00000001014a6050 MemoryContextAlloc + 208\n> 2 postgres 0x00000001011c3bf0 dsm_create_descriptor + 72\n> 3 postgres 0x00000001011c3ef4 dsm_attach + 400\n> 4 postgres 0x00000001014990d8 dsa_attach + 24\n> 5 postgres 0x00000001011c716c init_dsm_registry + 240\n> 6 postgres 0x00000001011c6e60 GetNamedDSMSegment + 456\n> 7 injection_points.dylib 0x0000000101c871f8 injection_init_shmem + 60\n> 8 injection_points.dylib 0x0000000101c86f1c injection_wait + 64\n> 9 postgres 0x000000010148e228 InjectionPointRunInternal + 376\n> 10 postgres 0x000000010148e0a4 InjectionPointRun + 32\n> 11 postgres 0x0000000100cab798 MultiXactIdCreateFromMembers + 344\n> 12 postgres 0x0000000100cab604 MultiXactIdCreate + 312\n> \n> Am I doing something wrong? Seems like extension have to know too that it is preloaded.\n\nYour stack is pointing at the shared memory section initialized in the\nmodule injection_points, which is a bit of a chicken-and-egg problem\nbecause you'd want an extra preload to happen before even that, like a\npre-preload. From what I can see, you have a good point about the\nshmem initialized in the module: injection_points_preload() should\ncall injection_init_shmem() so as this area would not trigger the\nassertion.\n\nHowever, there is a second thing here inherent to your test: shouldn't\nthe script call injection_points_preload() to make sure that the local\ncache behind GetNewMultiXactId-done is fully allocated and prepared\nfor the moment where injection point will be run?\n\nSo I agree that 0002 ought to call injection_init_shmem() when calling\ninjection_points_preload(), but it also seems to me that the test is\nmissing the fact that it should heat the backend cache to avoid the\nallocations in the critical sections.\n\nNote that I disagree with taking a shortcut in the backend-side\ninjection point code where we would bypass CritSectionCount or\nallowInCritSection. 
These states should stay consistent for the sake\nof the callbacks registered so as these can rely on the same stack and\nconditions as the code where they are called.\n--\nMichael", "msg_date": "Tue, 21 May 2024 10:31:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "\n\n> On 21 May 2024, at 06:31, Michael Paquier <[email protected]> wrote:\n> \n> So I agree that 0002 ought to call injection_init_shmem() when calling\n> injection_points_preload(), but it also seems to me that the test is\n> missing the fact that it should heat the backend cache to avoid the\n> allocations in the critical sections.\n> \n> Note that I disagree with taking a shortcut in the backend-side\n> injection point code where we would bypass CritSectionCount or\n> allowInCritSection. These states should stay consistent for the sake\n> of the callbacks registered so as these can rely on the same stack and\n> conditions as the code where they are called.\n\nCurrently I'm working on the test using this\n$creator->query_until(qr/start/, q(\n \\echo start\n select injection_points_wakeup('');\n select test_create_multixact();\n));\n\nI'm fine if instead of injection_points_wakeup('') I'll have to use select injection_points_preload('point name');.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n\n\n", "msg_date": "Tue, 21 May 2024 16:29:54 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Tue, May 21, 2024 at 04:29:54PM +0500, Andrey M. Borodin wrote:\n> Currently I'm working on the test using this\n> $creator->query_until(qr/start/, q(\n> \\echo start\n> select injection_points_wakeup('');\n> select test_create_multixact();\n> ));\n> \n> I'm fine if instead of injection_points_wakeup('') I'll have to use\n> select injection_points_preload('point name');.\n\nBased on our discussion of last week, please find attached the\npromised patch set to allow your SLRU tests to work. I have reversed\nthe order of the patches, moving the loading part in 0001 and the\naddition of the runtime arguments in 0002 as we have a use-case for\nthe loading, nothing yet for the runtime arguments.\n\nI have also come back to the naming, feeling that \"preload\" was\novercomplicated. So I have used the word \"load\" instead across the\nboard for 0001.\n\nNote that the SQL function injection_points_load() does now an \ninitialization of the shmem area when a process plugs into the module\nfor the first time, fixing the issue you have mentioned with your SLRU\ntest. Hence, you should be able to do a load(), then a wait in the\ncritical section as there would be no memory allocation done when the\npoint runs. Another thing you could do is to define a\nINJECTION_POINT_LOAD() in the code path you're stressing outside the \ncritical section where the point is run. This should save from a call\nto the SQL function. 
This choice is up to the one implementing the\ntest, both can be useful depending on what one is trying to achieve.\n--\nMichael", "msg_date": "Wed, 5 Jun 2024 07:52:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "\n\n> On 5 Jun 2024, at 03:52, Michael Paquier <[email protected]> wrote:\n> \n> Another thing you could do is to define a\n> INJECTION_POINT_LOAD() in the code path you're stressing outside the \n> critical section where the point is run. This should save from a call\n> to the SQL function. This choice is up to the one implementing the\n> test, both can be useful depending on what one is trying to achieve.\n\nThanks!\n\nInterestingly, previously having INJECTION_POINT_PRELOAD() was not enough.\nBut now both INJECTION_POINT_LOAD() or injection_points_load() do the trick, so for me any of them is enough.\n\nMy test works with current version, but I have one slight problem, I need to call\n$node->safe_psql('postgres', q(select injection_points_detach('GetMultiXactIdMembers-CV-sleep')));\nBefore\n$node->safe_psql('postgres', q(select injection_points_wakeup('GetMultiXactIdMembers-CV-sleep')));\n\nIs it OK to detach() before wakeup()? Or, perhaps, can a detach() do a wakeup() automatically?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Thu, 6 Jun 2024 15:47:47 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Thu, Jun 06, 2024 at 03:47:47PM +0500, Andrey M. Borodin wrote:\n> Is it OK to detach() before wakeup()? Or, perhaps, can a detach() do a wakeup() automatically?\n\nIt is OK to do a detach before a wakeup. Noah has been relying on\nthis behavior in an isolation test for a patch he's worked on. See\ninplace110-successors-v1.patch here:\nhttps://www.postgresql.org/message-id/[email protected]\n\nThat's also something we've discussed for 33181b48fd0e, where Noah\nwanted to emulate in an automated fashion what one can do with a\ndebugger and one or more breakpoints.\n\nNot sure that wakeup() involving a automated detach() is the behavior\nto hide long-term, actually, as there is also an argument for waking\nup a point and *not* detach it to force multiple waits.\n--\nMichael", "msg_date": "Fri, 7 Jun 2024 08:38:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "> On 7 Jun 2024, at 04:38, Michael Paquier <[email protected]> wrote:\n\nThanks Michael! Tests of injection points with injection points are neat :)\n\n\nAlvaro, here’s the test for multixact CV sleep that I was talking about on PGConf.\nIt is needed to test [0]. It is based on loaded injection points. This technique is not committed yet, but the patch looks good. When all prerequisites are ready I will post it to corresponding thread and create CF item.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a0e0fb1ba", "msg_date": "Sat, 8 Jun 2024 16:52:25 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Sat, Jun 08, 2024 at 04:52:25PM +0500, Andrey M. 
Borodin wrote:\n> Alvaro, here’s the test for multixact CV sleep that I was talking\n> about on PGConf.\n> It is needed to test [0]. It is based on loaded injection\n> points.\n\n> This technique is not committed yet, but the patch looks good.\n\nOK, cool. I'll try to get that into the tree once v18 opens up. I\ncan see that GetNewMultiXactId() opens a critical section. I am\nslightly surprised that you need both the SQL function\ninjection_points_load() and the macro INJECTION_POINT_LOAD().\nWouldn't one or the other be sufficient?\n\nThe test takes 20ms to run here, which is good enough.\n\n+ INJECTION_POINT_LOAD(\"GetNewMultiXactId-done\");\n[...]\n+ INJECTION_POINT(\"GetNewMultiXactId-done\");\n[...]\n+ INJECTION_POINT(\"GetMultiXactIdMembers-CV-sleep\"); \n\nBe careful about the naming here. All the points use lower case\ncharacters currently.\n\n+# and another multixact have no offest yet, we must wait until this offset \n\ns/offest/offset/.\n\n> When all prerequisites are ready I will post it to corresponding\n> thread and create CF item.\n\nOK, let's do that.\n--\nMichael", "msg_date": "Mon, 10 Jun 2024 15:10:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Mon, Jun 10, 2024 at 03:10:33PM +0900, Michael Paquier wrote:\n> OK, cool. I'll try to get that into the tree once v18 opens up.\n\nAnd I've spent more time on this one, and applied it to v18 after some\nslight tweaks. Please feel free to re-post your tests with\nmultixacts, Andrey.\n--\nMichael", "msg_date": "Fri, 5 Jul 2024 18:16:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On 05/07/2024 12:16, Michael Paquier wrote:\n> On Mon, Jun 10, 2024 at 03:10:33PM +0900, Michael Paquier wrote:\n>> OK, cool. I'll try to get that into the tree once v18 opens up.\n> \n> And I've spent more time on this one, and applied it to v18 after some\n> slight tweaks.\n\nIf you do:\n\nINJECTION_POINT_LOAD(foo);\n\nSTART_CRIT_SECTION();\nINJECTION_POINT(foo);\nEND_CRIT_SECTION();\n\nAnd the injection point is attached in between the \nINJECTION_POINT_LOAD() and INJECTION_POINT() calls, you will still get \nan assertion failure. For a testing facility, maybe that's acceptable, \nbut it could be fixed pretty easily.\n\nI propose we introduce an INJECTION_POINT_CACHED(name) macro that *only* \nuses the local cache. We could then also add an assertion in \nInjectionPointRun() to check that it's not used in a critical section, \nto enforce correct usage.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 9 Jul 2024 12:08:26 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Tue, Jul 09, 2024 at 12:08:26PM +0300, Heikki Linnakangas wrote:\n> And the injection point is attached in between the INJECTION_POINT_LOAD()\n> and INJECTION_POINT() calls, you will still get an assertion failure. For a\n> testing facility, maybe that's acceptable, but it could be fixed pretty\n> easily.\n> \n> I propose we introduce an INJECTION_POINT_CACHED(name) macro that *only*\n> uses the local cache.\n\nYou mean with something that does a injection_point_cache_get()\nfollowed by a callback run if anything is found in the local cache?\nWhy not. 
Based on what you have posted at [1], it looks like this had\nbetter check the contents of the cache's generation with what's in\nshmem, as well as destroying InjectionPointCache if there is nothing\nelse, so there's a possible dependency here depending on how much\nmaintenance this should do with the cache to keep it consistent.\n\n> We could then also add an assertion in\n> InjectionPointRun() to check that it's not used in a critical section, to\n> enforce correct usage.\n\nThat would be the same as what we do currently with a palloc() coming\nfrom load_external_function() or hash_create(), whichever comes first.\nOkay, the stack reported is deeper in this case.\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Wed, 10 Jul 2024 13:16:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Wed, Jul 10, 2024 at 01:16:15PM +0900, Michael Paquier wrote:\n> You mean with something that does a injection_point_cache_get()\n> followed by a callback run if anything is found in the local cache?\n> Why not. Based on what you have posted at [1], it looks like this had\n> better check the contents of the cache's generation with what's in\n> shmem, as well as destroying InjectionPointCache if there is nothing\n> else, so there's a possible dependency here depending on how much\n> maintenance this should do with the cache to keep it consistent.\n\nNow that 86db52a5062a is in the tree, this could be done with a\nshortcut in InjectionPointCacheRefresh(). What do you think about\nsomething like the attached, with your suggested naming?\n--\nMichael", "msg_date": "Tue, 16 Jul 2024 13:09:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On 16/07/2024 07:09, Michael Paquier wrote:\n> On Wed, Jul 10, 2024 at 01:16:15PM +0900, Michael Paquier wrote:\n>> You mean with something that does a injection_point_cache_get()\n>> followed by a callback run if anything is found in the local cache?\n>> Why not. Based on what you have posted at [1], it looks like this had\n>> better check the contents of the cache's generation with what's in\n>> shmem, as well as destroying InjectionPointCache if there is nothing\n>> else, so there's a possible dependency here depending on how much\n>> maintenance this should do with the cache to keep it consistent.\n> \n> Now that 86db52a5062a is in the tree, this could be done with a\n> shortcut in InjectionPointCacheRefresh(). What do you think about\n> something like the attached, with your suggested naming?\n\nYes, +1 for something like that.\n\nThe \"direct\" argument to InjectionPointCacheRefresh() feels a bit weird. \nAlso weird that it still checks ActiveInjectionPoints->max_inuse, even \nthough it otherwise operates on the cached version only. 
I think you can \njust call injection_point_cache_get() directly from \nInjectionPointCached(), per attached.\n\nI also rephrased the docs section a bit, focusing more on why and how \nyou use the LOAD/CACHED pair, and less on the mechanics of how it works.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 16 Jul 2024 11:20:57 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Tue, Jul 16, 2024 at 11:20:57AM +0300, Heikki Linnakangas wrote:\n> The \"direct\" argument to InjectionPointCacheRefresh() feels a bit weird.\n> Also weird that it still checks ActiveInjectionPoints->max_inuse, even\n> though it otherwise operates on the cached version only. I think you can\n> just call injection_point_cache_get() directly from InjectionPointCached(),\n> per attached.\n\nMy point was just to be more aggressive with the cache correctness\neven in this context. You've also mentioned upthread the point that \nwe should worry about a concurrent detach, which is something that\ninjection_point_cache_get() alone is not able to do as we would not\ncross-check the generation with what's in the shared area, so I also\nsaw a point about being more aggressive with the check here.\n\nIt works for me to do as you are proposing at the end, that could\nalways be changed if there are more arguments in favor of a different\nbehavior that plays more with the shmem data.\n\n> I also rephrased the docs section a bit, focusing more on why and how you\n> use the LOAD/CACHED pair, and less on the mechanics of how it works.\n\nI'm OK with that, as well. Thanks.\n--\nMichael", "msg_date": "Wed, 17 Jul 2024 11:19:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Wed, Jul 17, 2024 at 11:19:41AM +0900, Michael Paquier wrote:\n> It works for me to do as you are proposing at the end, that could\n> always be changed if there are more arguments in favor of a different\n> behavior that plays more with the shmem data.\n\nI have taken some time this morning and applied that after a second\nlookup. Thanks!\n\nIf there is anything else you would like to see adjusted in this area,\nplease let me know.\n--\nMichael", "msg_date": "Thu, 18 Jul 2024 09:55:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "> On 18 Jul 2024, at 03:55, Michael Paquier <[email protected]> wrote:\n> \n> If there is anything else you would like to see adjusted in this area,\n> please let me know.\n\nI’ve tried to switch my multixact test to new INJECTION_POINT_CACHED… and it does not work for me. 
Could you please take a look?\n\n2024-08-02 18:52:32.244 MSK [53155] 001_multixact.pl LOG: statement: select test_create_multixact();\nTRAP: failed Assert(\"CritSectionCount == 0 || (context)->allowInCritSection\"), File: \"mcxt.c\", Line: 1186, PID: 53155\n0 postgres 0x00000001031212f0 ExceptionalCondition + 236\n1 postgres 0x000000010317a01c MemoryContextAlloc + 240\n2 postgres 0x0000000102e66158 dsm_create_descriptor + 80\n3 postgres 0x0000000102e66474 dsm_attach + 416\n4 postgres 0x000000010316c264 dsa_attach + 24\n5 postgres 0x0000000102e69994 init_dsm_registry + 256\n6 postgres 0x0000000102e6965c GetNamedDSMSegment + 492\n7 injection_points.dylib 0x000000010388f2cc injection_init_shmem + 68\n8 injection_points.dylib 0x000000010388efbc injection_wait + 72\n9 postgres 0x00000001031606bc InjectionPointCached + 72\n10 postgres 0x00000001028ffc70 MultiXactIdCreateFromMembers + 360\n11 postgres 0x00000001028ffac8 MultiXactIdCreate + 344\n12 test_slru.dylib 0x000000010376fa04 test_create_multixact + 52\n\n\nThe test works fine with SQL interface “select injection_points_load('get-new-multixact-id');”.\nThanks!\n\n\nBest regards, Andrey Borodin.", "msg_date": "Fri, 2 Aug 2024 19:03:58 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Fri, Aug 02, 2024 at 07:03:58PM +0300, Andrey M. Borodin wrote:\n> The test works fine with SQL interface “select\n> injection_points_load('get-new-multixact-id');”.\n\nYes, just use a load() here to make sure that the DSM required by the\nwaits are properly initialized before entering in the critical section\nwhere the wait of the point get-new-multixact-id happens.\n--\nMichael", "msg_date": "Sun, 4 Aug 2024 01:02:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "On Sun, Aug 04, 2024 at 01:02:22AM +0900, Michael Paquier wrote:\n> On Fri, Aug 02, 2024 at 07:03:58PM +0300, Andrey M. Borodin wrote:\n> > The test works fine with SQL interface “select\n> > injection_points_load('get-new-multixact-id');”.\n> \n> Yes, just use a load() here to make sure that the DSM required by the\n> waits are properly initialized before entering in the critical section\n> where the wait of the point get-new-multixact-id happens.\n\nHmm. How about loading injection_points with shared_preload_libraries\nnow that it has a _PG_init() thanks to 75534436a477 to take care of\nthe initialization you need here? We could add two hooks to request\nsome shmem based on a size and to do the shmem initialization.\n--\nMichael", "msg_date": "Tue, 6 Aug 2024 16:47:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Injection points: preloading and runtime arguments" }, { "msg_contents": "\n\n> On 6 Aug 2024, at 12:47, Michael Paquier <[email protected]> wrote:\n> \n> Hmm. How about loading injection_points with shared_preload_libraries\n> now that it has a _PG_init() thanks to 75534436a477 to take care of\n> the initialization you need here? We could add two hooks to request\n> some shmem based on a size and to do the shmem initialization.\n\nSQL initialisation is fine for test purposes. 
I just considered that I'd better share that doing the same from C code is non-trivial.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 6 Aug 2024 13:40:59 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Injection points: preloading and runtime arguments" } ]
[ { "msg_contents": "This patch converts the compile-time settings\n\n COPY_PARSE_PLAN_TREES\n WRITE_READ_PARSE_PLAN_TREES\n RAW_EXPRESSION_COVERAGE_TEST\n\ninto run-time parameters\n\n debug_copy_parse_plan_trees\n debug_write_read_parse_plan_trees\n debug_raw_expression_coverage_test\n\nThey can be activated for tests using PG_TEST_INITDB_EXTRA_OPTS.\n\nThe effect is the same, but now you don't need to recompile in order to \nuse these checks.\n\nThe compile-time symbols are kept for build farm compatibility, but they \nnow just determine the default value of the run-time settings.\n\nPossible concerns:\n\n- Performance? Looking for example at pg_parse_query() and its \nsiblings, they also check for other debugging settings like \nlog_parser_stats in the main code path, so it doesn't seem to be a concern.\n\n- Access control? I have these settings as PGC_USERSET for now. Maybe \nthey should be PGC_SUSET?\n\nAnother thought: Do we really need three separate settings?", "msg_date": "Mon, 20 May 2024 09:28:39 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "Em seg., 20 de mai. de 2024 às 04:28, Peter Eisentraut <[email protected]>\nescreveu:\n\n> This patch converts the compile-time settings\n>\n> COPY_PARSE_PLAN_TREES\n> WRITE_READ_PARSE_PLAN_TREES\n> RAW_EXPRESSION_COVERAGE_TEST\n>\n> into run-time parameters\n>\n> debug_copy_parse_plan_trees\n> debug_write_read_parse_plan_trees\n> debug_raw_expression_coverage_test\n>\n> They can be activated for tests using PG_TEST_INITDB_EXTRA_OPTS.\n>\n> The effect is the same, but now you don't need to recompile in order to\n> use these checks.\n>\n> The compile-time symbols are kept for build farm compatibility, but they\n> now just determine the default value of the run-time settings.\n>\n> Possible concerns:\n>\n> - Performance? Looking for example at pg_parse_query() and its\n> siblings, they also check for other debugging settings like\n> log_parser_stats in the main code path, so it doesn't seem to be a concern.\n>\n> - Access control? I have these settings as PGC_USERSET for now. Maybe\n> they should be PGC_SUSET?\n>\n> Another thought: Do we really need three separate settings?\n>\nWhat is the use for production use?\n\nbest regards,\nRanier Vilela\n\nEm seg., 20 de mai. de 2024 às 04:28, Peter Eisentraut <[email protected]> escreveu:This patch converts the compile-time settings\n\n     COPY_PARSE_PLAN_TREES\n     WRITE_READ_PARSE_PLAN_TREES\n     RAW_EXPRESSION_COVERAGE_TEST\n\ninto run-time parameters\n\n     debug_copy_parse_plan_trees\n     debug_write_read_parse_plan_trees\n     debug_raw_expression_coverage_test\n\nThey can be activated for tests using PG_TEST_INITDB_EXTRA_OPTS.\n\nThe effect is the same, but now you don't need to recompile in order to \nuse these checks.\n\nThe compile-time symbols are kept for build farm compatibility, but they \nnow just determine the default value of the run-time settings.\n\nPossible concerns:\n\n- Performance?  Looking for example at pg_parse_query() and its \nsiblings, they also check for other debugging settings like \nlog_parser_stats in the main code path, so it doesn't seem to be a concern.\n\n- Access control?  I have these settings as PGC_USERSET for now. 
Maybe \nthey should be PGC_SUSET?\n\nAnother thought:  Do we really need three separate settings?What is the use for production use?best regards,Ranier Vilela", "msg_date": "Mon, 20 May 2024 08:35:57 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> This patch converts the compile-time settings\n> COPY_PARSE_PLAN_TREES\n> WRITE_READ_PARSE_PLAN_TREES\n> RAW_EXPRESSION_COVERAGE_TEST\n\n> into run-time parameters\n\n> debug_copy_parse_plan_trees\n> debug_write_read_parse_plan_trees\n> debug_raw_expression_coverage_test\n\nI'm kind of down on this. It seems like forcing a bunch of\nuseless-in-production debug support into the standard build.\nWhat of this would be of any use to any non-developer?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 May 2024 09:59:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "On 20.05.24 15:59, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> This patch converts the compile-time settings\n>> COPY_PARSE_PLAN_TREES\n>> WRITE_READ_PARSE_PLAN_TREES\n>> RAW_EXPRESSION_COVERAGE_TEST\n> \n>> into run-time parameters\n> \n>> debug_copy_parse_plan_trees\n>> debug_write_read_parse_plan_trees\n>> debug_raw_expression_coverage_test\n> \n> I'm kind of down on this. It seems like forcing a bunch of\n> useless-in-production debug support into the standard build.\n> What of this would be of any use to any non-developer?\n\nWe have a bunch of other debug_* settings that are available in \nproduction builds, such as\n\ndebug_print_parse\ndebug_print_rewritten\ndebug_print_plan\ndebug_pretty_print\ndebug_discard_caches\ndebug_io_direct\ndebug_parallel_query\ndebug_logical_replication_streaming\n\nMaybe we could hide all of them behind some #ifdef DEBUG_OPTIONS, but in \nany case, I don't think the ones being proposed here are substantially \ndifferent from those existing ones that they would require a separate \ntreatment.\n\nMy goal is to make these facilities easier to use, avoiding hand-editing \npg_config_manual.h and having to recompile.\n\n\n\n", "msg_date": "Tue, 21 May 2024 14:25:21 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "Em ter., 21 de mai. de 2024 às 09:25, Peter Eisentraut <[email protected]>\nescreveu:\n\n> On 20.05.24 15:59, Tom Lane wrote:\n> > Peter Eisentraut <[email protected]> writes:\n> >> This patch converts the compile-time settings\n> >> COPY_PARSE_PLAN_TREES\n> >> WRITE_READ_PARSE_PLAN_TREES\n> >> RAW_EXPRESSION_COVERAGE_TEST\n> >\n> >> into run-time parameters\n> >\n> >> debug_copy_parse_plan_trees\n> >> debug_write_read_parse_plan_trees\n> >> debug_raw_expression_coverage_test\n> >\n> > I'm kind of down on this. 
It seems like forcing a bunch of\n> > useless-in-production debug support into the standard build.\n> > What of this would be of any use to any non-developer?\n>\n> We have a bunch of other debug_* settings that are available in\n> production builds, such as\n>\n> debug_print_parse\n> debug_print_rewritten\n> debug_print_plan\n> debug_pretty_print\n> debug_discard_caches\n> debug_io_direct\n> debug_parallel_query\n> debug_logical_replication_streaming\n>\nIf some of this is useful for non-developer users,\nit shouldn't be called debug, or in this category.\n\n\n> Maybe we could hide all of them behind some #ifdef DEBUG_OPTIONS, but in\n> any case, I don't think the ones being proposed here are substantially\n> different from those existing ones that they would require a separate\n> treatment.\n>\n> My goal is to make these facilities easier to use, avoiding hand-editing\n> pg_config_manual.h and having to recompile.\n>\nAlthough there are some developer users.\nI believe that anything that is not useful for common users and is not used\nfor production\nshould not be compiled at runtime.\n\nbest regards,\nRanier Vilela\n\nEm ter., 21 de mai. de 2024 às 09:25, Peter Eisentraut <[email protected]> escreveu:On 20.05.24 15:59, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> This patch converts the compile-time settings\n>>       COPY_PARSE_PLAN_TREES\n>>       WRITE_READ_PARSE_PLAN_TREES\n>>       RAW_EXPRESSION_COVERAGE_TEST\n> \n>> into run-time parameters\n> \n>>       debug_copy_parse_plan_trees\n>>       debug_write_read_parse_plan_trees\n>>       debug_raw_expression_coverage_test\n> \n> I'm kind of down on this.  It seems like forcing a bunch of\n> useless-in-production debug support into the standard build.\n> What of this would be of any use to any non-developer?\n\nWe have a bunch of other debug_* settings that are available in \nproduction builds, such as\n\ndebug_print_parse\ndebug_print_rewritten\ndebug_print_plan\ndebug_pretty_print\ndebug_discard_caches\ndebug_io_direct\ndebug_parallel_query\ndebug_logical_replication_streamingIf some of this is useful for non-developer users, it shouldn't be called debug, or in this category. \n\nMaybe we could hide all of them behind some #ifdef DEBUG_OPTIONS, but in \nany case, I don't think the ones being proposed here are substantially \ndifferent from those existing ones that they would require a separate \ntreatment.\n\nMy goal is to make these facilities easier to use, avoiding hand-editing \npg_config_manual.h and having to recompile.Although there are some developer users.I believe that anything that is not useful for common users and is not used for productionshould not be compiled at runtime. best regards,Ranier Vilela", "msg_date": "Tue, 21 May 2024 09:32:42 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "Hi,\n\nOn 2024-05-20 09:28:39 +0200, Peter Eisentraut wrote:\n> - Performance? Looking for example at pg_parse_query() and its siblings,\n> they also check for other debugging settings like log_parser_stats in the\n> main code path, so it doesn't seem to be a concern.\n\nI don't think we can conclude that. Just because we've not been that careful\nabout performance in a few spots doesn't mean we shouldn't be careful in other\nareas. 
And I think something like log_parser_stats is a lot more generally\nuseful than debug_copy_parse_plan_trees.\n\nThe branch itself isn't necessarily the issue, the branch predictor can handle\nthat to a good degree. The reduction in code density is a bigger concern - and\nalso very hard to measure, because the cost is very incremental and\ndistributed.\n\nAt the very least I'd add unlikely() to all of the branches, so the debug code\ncan be placed separately from the \"normal\" portions.\n\n\nWhere I'd be more concerned about peformance is the added branch in\nREAD_LOCATION_FIELD. There are a lot of calls to that, addding runtime\nbranches to each, with external function calls inside, is somewhat likely to\nbe measurable.\n\n\n> - Access control? I have these settings as PGC_USERSET for now. Maybe they\n> should be PGC_SUSET?\n\nThat probably would be right.\n\n\n> Another thought: Do we really need three separate settings?\n\nMaybe not three settings, but a single setting, with multiple values, like\ndebug_io_direct?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 May 2024 11:48:17 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "On 21.05.24 20:48, Andres Freund wrote:\n> Where I'd be more concerned about peformance is the added branch in\n> READ_LOCATION_FIELD. There are a lot of calls to that, addding runtime\n> branches to each, with external function calls inside, is somewhat likely to\n> be measurable.\n\nOk, I have an improved plan. I'm wrapping all the code related to this \nin #ifdef DEBUG_NODE_TESTS_ENABLED. This in turn is defined in \nassert-enabled builds, or if you define it explicitly, or if you define \none of the legacy individual symbols. That way you get the run-time \nsettings in a normal development build, but there is no new run-time \noverhead. This is the same scheme that we use for debug_discard_caches.\n\n(An argument could be made to enable this code if and only if assertions \nare enabled, since these tests are themselves kind of assertions. But I \nthink having a separate symbol documents the purpose of the various code \nsections better.)\n\n>> Another thought: Do we really need three separate settings?\n> \n> Maybe not three settings, but a single setting, with multiple values, like\n> debug_io_direct?\n\nYeah, good idea. Let's get some more feedback on this before I code up \na complicated list parser.\n\nAnother approach might be levels. My testing showed that the overhead \nof the copy_parse_plan_trees and raw_expression_coverage_tests flags is \nhardly noticeable, but write_read_parse_plan_trees has some noticeable \nimpact. So you could do 0=off, 1=only the cheap ones, 2=all tests.\n\nIn fact, if we could make \"only the cheap ones\" the default for \nassert-enabled builds, then most people won't even need to worry about \nthis setting: The only way to mess up the write_read_parse_plan_trees is \nif you write custom node support, which is rare. But the raw expression \ncoverage still needs to be maintained by hand, so it's more often \nvaluable to have it checked automatically.", "msg_date": "Fri, 24 May 2024 11:58:40 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Ok, I have an improved plan. 
I'm wrapping all the code related to this \n> in #ifdef DEBUG_NODE_TESTS_ENABLED. This in turn is defined in \n> assert-enabled builds, or if you define it explicitly, or if you define \n> one of the legacy individual symbols. That way you get the run-time \n> settings in a normal development build, but there is no new run-time \n> overhead. This is the same scheme that we use for debug_discard_caches.\n\n+1; this addresses my concern about not adding effectively-dead code\nto production builds. Your point upthread about debug_print_plan and\nother legacy debug switches was not without merit; should we also fold\nthose into this plan? (In that case we'd need a symbol named more\ngenerically than DEBUG_NODE_TESTS_ENABLED.)\n\n> (An argument could be made to enable this code if and only if assertions \n> are enabled, since these tests are themselves kind of assertions. But I \n> think having a separate symbol documents the purpose of the various code \n> sections better.)\n\nAgreed.\n\n>> Maybe not three settings, but a single setting, with multiple values, like\n>> debug_io_direct?\n\n> Yeah, good idea. Let's get some more feedback on this before I code up \n> a complicated list parser.\n\nKinda doubt it's worth the trouble, either to code the GUC support or\nto use it. I don't object to having the booleans in a debug build,\nI was just concerned about whether they should exist in production.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 10:39:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" }, { "msg_contents": "On 24.05.24 16:39, Tom Lane wrote:\n>>> Maybe not three settings, but a single setting, with multiple values, like\n>>> debug_io_direct?\n>> Yeah, good idea. Let's get some more feedback on this before I code up\n>> a complicated list parser.\n> Kinda doubt it's worth the trouble, either to code the GUC support or\n> to use it. I don't object to having the booleans in a debug build,\n> I was just concerned about whether they should exist in production.\n\nRight. My inclination is to go ahead with the patch as proposed at this \ntime. There might be other ideas for tweaks in this area, but they \ncould be applied as new patches on top of this. The main goal here was \nto do $subject, and without overhead for production builds, and this \naccomplishes that.\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 09:51:35 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Convert node test compile-time settings into run-time parameters" } ]
[ { "msg_contents": "I noticed that there are slightly inconsistent messages regarding\nquoting policies.\n\n> This happens if you temporarily set \"wal_level=minimal\" on the server.\n> WAL generated with \"full_page_writes=off\" was replayed during online backup\n \n> pg_log_standby_snapshot() can only be used if \"wal_level\" >= \"replica\"\n\n> WAL streaming (\"max_wal_senders\" > 0) requires \"wal_level\" to be \"replica\" or \"logical\"\n\nI think it's best to quote variable names and values separately, like\n\"wal_level\" = \"minimal\" (but not use quotes for numeric values), as it\nseems to be the most common practice. Anyway, we might want to unify\nthem.\n\n\nLikewise, I saw two different versions of values with units.\n\n> \"max_stack_depth\" must not exceed %ldkB.\n> \"vacuum_buffer_usage_limit\" must be 0 or between %d kB and %d kB\n\nI'm not sure, but it seems like the latter version is more common.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 20 May 2024 16:56:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "inconsistent quoting in error messages" }, { "msg_contents": "On Tue, May 21, 2024 at 2:56 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> I noticed that there are slightly inconsistent messages regarding\n> quoting policies.\n>\n> > This happens if you temporarily set \"wal_level=minimal\" on the server.\n> > WAL generated with \"full_page_writes=off\" was replayed during online backup\n>\n> > pg_log_standby_snapshot() can only be used if \"wal_level\" >= \"replica\"\n>\n> > WAL streaming (\"max_wal_senders\" > 0) requires \"wal_level\" to be \"replica\" or \"logical\"\n>\n> I think it's best to quote variable names and values separately, like\n> \"wal_level\" = \"minimal\" (but not use quotes for numeric values), as it\n> seems to be the most common practice. Anyway, we might want to unify\n> them.\n>\n>\n> Likewise, I saw two different versions of values with units.\n>\n> > \"max_stack_depth\" must not exceed %ldkB.\n> > \"vacuum_buffer_usage_limit\" must be 0 or between %d kB and %d kB\n>\n> I'm not sure, but it seems like the latter version is more common.\n>\n> regards.\n>\n\nHi,\n\nI think it might be better to keep all the discussions about GUC\nquoting and related topics like this confined to the main thread here\n[1]. Otherwise, we might end up with a bunch of competing patches.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 21 May 2024 15:14:09 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inconsistent quoting in error messages" } ]
[ { "msg_contents": "Hi\n\nI'm working on updating the build of PostgreSQL that pgAdmin uses in its\nWindows installers to use Meson ready for the v17 release. I'm using Visual\nStudio 2022, on Windows Server 2022.\n\nI've been unable to persuade Meson to detect zlib, whilst OpenSSL seems to\nbe fine.\n\nThe dependencies have been built and installed as follows:\n\n mkdir c:\\build64\n\n wget https://zlib.net/zlib-1.3.2.tar.gz\n tar -zxvf zlib-1.3.2.tar.gz\n cd zlib-1.3.2\n cmake -DCMAKE_INSTALL_PREFIX=C:/build64/zlib -G \"Visual Studio 17 2022\" .\n msbuild ALL_BUILD.vcxproj /p:Configuration=Release\n msbuild RUN_TESTS.vcxproj /p:Configuration=Release\n msbuild INSTALL.vcxproj /p:Configuration=Release\n cd ..\n\n wget https://www.openssl.org/source/openssl-3.0.13.tar.gz\n tar -zxvf openssl-3.0.13.tar.gz\n cd openssl-3.0.013\n perl Configure VC-WIN64A no-asm --prefix=C:\\build64\\openssl no-ssl3 no-comp\n nmake\n nmake test\n nmake install\n cd ..\n\nThis results in the following headers and libraries being installed for\nzlib:\n\nC:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\include\n Volume in drive C has no label.\n Volume Serial Number is 3AAD-5864\n\n Directory of C:\\build64\\zlib\\include\n\n17/05/2024 15:56 <DIR> .\n17/05/2024 15:56 <DIR> ..\n17/05/2024 15:54 17,096 zconf.h\n22/01/2024 19:32 96,829 zlib.h\n 2 File(s) 113,925 bytes\n 2 Dir(s) 98,842,726,400 bytes free\n\nC:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\lib\n Volume in drive C has no label.\n Volume Serial Number is 3AAD-5864\n\n Directory of C:\\build64\\zlib\\lib\n\n17/05/2024 17:01 <DIR> .\n17/05/2024 15:56 <DIR> ..\n17/05/2024 15:55 16,638 zlib.lib\n17/05/2024 15:55 184,458 zlibstatic.lib\n 2 File(s) 201,096 bytes\n 2 Dir(s) 98,842,726,400 bytes free\n\nI then attempt to build PostgreSQL:\n\n meson setup build\n-Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include\n-Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib -Dssl=openssl\n-Dzlib=enabled --prefix=c:/build64/pgsql\n\nWhich results in the output in output.txt, indicating that OpenSSL was\ncorrectly found, but zlib was not. I've also attached the meson log.\n\nI have very little experience with Meson, and even less interpreting it's\nlogs, but it seems to me that it's not including the extra lib and include\ndirectories when it runs the test compile, given the command line it's\nreporting:\n\ncl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n/nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n\nBug, or am I doing something silly?\n\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 20 May 2024 11:58:05 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "zlib detection in Meson on Windows broken?" }, { "msg_contents": "On 2024-05-20 Mo 06:58, Dave Page wrote:\n> Hi\n>\n> I'm working on updating the build of PostgreSQL that pgAdmin uses in \n> its Windows installers to use Meson ready for the v17 release. 
I'm \n> using Visual Studio 2022, on Windows Server 2022.\n>\n> I've been unable to persuade Meson to detect zlib, whilst OpenSSL \n> seems to be fine.\n>\n> The dependencies have been built and installed as follows:\n>\n>  mkdir c:\\build64\n>\n>  wget https://zlib.net/zlib-1.3.2.tar.gz\n>  tar -zxvf zlib-1.3.2.tar.gz\n>  cd zlib-1.3.2\n>  cmake -DCMAKE_INSTALL_PREFIX=C:/build64/zlib -G \"Visual Studio 17 2022\" .\n>  msbuild ALL_BUILD.vcxproj /p:Configuration=Release\n>  msbuild RUN_TESTS.vcxproj /p:Configuration=Release\n>  msbuild INSTALL.vcxproj /p:Configuration=Release\n>  cd ..\n>\n>  wget https://www.openssl.org/source/openssl-3.0.13.tar.gz\n>  tar -zxvf openssl-3.0.13.tar.gz\n>  cd openssl-3.0.013\n>  perl Configure VC-WIN64A no-asm --prefix=C:\\build64\\openssl no-ssl3 \n> no-comp\n>  nmake\n>  nmake test\n>  nmake install\n>  cd ..\n>\n> This results in the following headers and libraries being installed \n> for zlib:\n>\n> C:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\include\n>  Volume in drive C has no label.\n>  Volume Serial Number is 3AAD-5864\n>\n>  Directory of C:\\build64\\zlib\\include\n>\n> 17/05/2024  15:56    <DIR>          .\n> 17/05/2024  15:56    <DIR>          ..\n> 17/05/2024  15:54            17,096 zconf.h\n> 22/01/2024  19:32            96,829 zlib.h\n>                2 File(s)        113,925 bytes\n>                2 Dir(s)  98,842,726,400 bytes free\n>\n> C:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\lib\n>  Volume in drive C has no label.\n>  Volume Serial Number is 3AAD-5864\n>\n>  Directory of C:\\build64\\zlib\\lib\n>\n> 17/05/2024  17:01    <DIR>          .\n> 17/05/2024  15:56    <DIR>          ..\n> 17/05/2024  15:55            16,638 zlib.lib\n> 17/05/2024  15:55           184,458 zlibstatic.lib\n>                2 File(s)        201,096 bytes\n>                2 Dir(s)  98,842,726,400 bytes free\n>\n> I then attempt to build PostgreSQL:\n>\n>  meson setup build \n> -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include \n> -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib \n> -Dssl=openssl -Dzlib=enabled --prefix=c:/build64/pgsql\n>\n> Which results in the output in output.txt, indicating that OpenSSL was \n> correctly found, but zlib was not. I've also attached the meson log.\n>\n> I have very little experience with Meson, and even less interpreting \n> it's logs, but it seems to me that it's not including the extra lib \n> and include directories when it runs the test compile, given the \n> command line it's reporting:\n>\n> cl \n> C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c \n> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n>\n> Bug, or am I doing something silly?\n>\n>\n>\n\nHi Dave!\n\n\nNot sure ;-) But this works for the buildfarm animal drongo, so we \nshould be able to make it work for you. I'll contact you offlist and see \nif I can help.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-05-20 Mo 06:58, Dave Page\n wrote:\n\n\n\nHi\n\n I'm working on updating the build of PostgreSQL that pgAdmin\n uses in its Windows installers to use Meson ready for the v17\n release. 
I'm using Visual Studio 2022, on Windows Server 2022.\n\n I've been unable to persuade Meson to detect zlib, whilst\n OpenSSL seems to be fine.\n\n The dependencies have been built and installed as follows:\n\n  mkdir c:\\build64\n\n  wget https://zlib.net/zlib-1.3.2.tar.gz\n  tar -zxvf zlib-1.3.2.tar.gz\n  cd zlib-1.3.2\n  cmake -DCMAKE_INSTALL_PREFIX=C:/build64/zlib -G \"Visual Studio\n 17 2022\" .\n  msbuild ALL_BUILD.vcxproj /p:Configuration=Release\n  msbuild RUN_TESTS.vcxproj /p:Configuration=Release\n  msbuild INSTALL.vcxproj /p:Configuration=Release\n  cd ..\n\n  wget https://www.openssl.org/source/openssl-3.0.13.tar.gz\n  tar -zxvf openssl-3.0.13.tar.gz\n  cd openssl-3.0.013\n  perl Configure VC-WIN64A no-asm --prefix=C:\\build64\\openssl\n no-ssl3 no-comp\n  nmake\n  nmake test\n  nmake install\n  cd ..\n\n This results in the following headers and libraries being\n installed for zlib:\n\n C:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\include\n  Volume in drive C has no label.\n  Volume Serial Number is 3AAD-5864\n\n  Directory of C:\\build64\\zlib\\include\n\n 17/05/2024  15:56    <DIR>          .\n 17/05/2024  15:56    <DIR>          ..\n 17/05/2024  15:54            17,096 zconf.h\n 22/01/2024  19:32            96,829 zlib.h\n                2 File(s)        113,925 bytes\n                2 Dir(s)  98,842,726,400 bytes free\n\n C:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\lib\n  Volume in drive C has no label.\n  Volume Serial Number is 3AAD-5864\n\n  Directory of C:\\build64\\zlib\\lib\n\n 17/05/2024  17:01    <DIR>          .\n 17/05/2024  15:56    <DIR>          ..\n 17/05/2024  15:55            16,638 zlib.lib\n 17/05/2024  15:55           184,458 zlibstatic.lib\n                2 File(s)        201,096 bytes\n                2 Dir(s)  98,842,726,400 bytes free\n\n I then attempt to build PostgreSQL:\n\n  meson setup build\n -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include\n -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib\n -Dssl=openssl -Dzlib=enabled --prefix=c:/build64/pgsql\n\n Which results in the output in output.txt, indicating that\n OpenSSL was correctly found, but zlib was not. I've also\n attached the meson log.\n\n I have very little experience with Meson, and even less\n interpreting it's logs, but it seems to me that it's not\n including the extra lib and include directories when it runs the\n test compile, given the command line it's reporting:\n\n cl\n C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8\n /EP /Od /Oi-\n\n Bug, or am I doing something silly?\n\n\n\n\n\n\n\n\n\n\n\n\nHi Dave!\n\n\n\nNot sure ;-) But this works for the buildfarm animal drongo, so\n we should be able to make it work for you. I'll contact you\n offlist and see if I can help.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 20 May 2024 11:52:24 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi Dave,\n\nIs the .pc file generated after the successful build of zlib? If yes, then\nmeson should be able to detect the installation ideally\n\nOn Mon, May 20, 2024 at 4:28 PM Dave Page <[email protected]> wrote:\n\n> Hi\n>\n> I'm working on updating the build of PostgreSQL that pgAdmin uses in its\n> Windows installers to use Meson ready for the v17 release. 
I'm using Visual\n> Studio 2022, on Windows Server 2022.\n>\n> I've been unable to persuade Meson to detect zlib, whilst OpenSSL seems to\n> be fine.\n>\n> The dependencies have been built and installed as follows:\n>\n> mkdir c:\\build64\n>\n> wget https://zlib.net/zlib-1.3.2.tar.gz\n> tar -zxvf zlib-1.3.2.tar.gz\n> cd zlib-1.3.2\n> cmake -DCMAKE_INSTALL_PREFIX=C:/build64/zlib -G \"Visual Studio 17 2022\" .\n> msbuild ALL_BUILD.vcxproj /p:Configuration=Release\n> msbuild RUN_TESTS.vcxproj /p:Configuration=Release\n> msbuild INSTALL.vcxproj /p:Configuration=Release\n> cd ..\n>\n> wget https://www.openssl.org/source/openssl-3.0.13.tar.gz\n> tar -zxvf openssl-3.0.13.tar.gz\n> cd openssl-3.0.013\n> perl Configure VC-WIN64A no-asm --prefix=C:\\build64\\openssl no-ssl3\n> no-comp\n> nmake\n> nmake test\n> nmake install\n> cd ..\n>\n> This results in the following headers and libraries being installed for\n> zlib:\n>\n> C:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\include\n> Volume in drive C has no label.\n> Volume Serial Number is 3AAD-5864\n>\n> Directory of C:\\build64\\zlib\\include\n>\n> 17/05/2024 15:56 <DIR> .\n> 17/05/2024 15:56 <DIR> ..\n> 17/05/2024 15:54 17,096 zconf.h\n> 22/01/2024 19:32 96,829 zlib.h\n> 2 File(s) 113,925 bytes\n> 2 Dir(s) 98,842,726,400 bytes free\n>\n> C:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\lib\n> Volume in drive C has no label.\n> Volume Serial Number is 3AAD-5864\n>\n> Directory of C:\\build64\\zlib\\lib\n>\n> 17/05/2024 17:01 <DIR> .\n> 17/05/2024 15:56 <DIR> ..\n> 17/05/2024 15:55 16,638 zlib.lib\n> 17/05/2024 15:55 184,458 zlibstatic.lib\n> 2 File(s) 201,096 bytes\n> 2 Dir(s) 98,842,726,400 bytes free\n>\n> I then attempt to build PostgreSQL:\n>\n> meson setup build\n> -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include\n> -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib -Dssl=openssl\n> -Dzlib=enabled --prefix=c:/build64/pgsql\n>\n> Which results in the output in output.txt, indicating that OpenSSL was\n> correctly found, but zlib was not. I've also attached the meson log.\n>\n> I have very little experience with Meson, and even less interpreting it's\n> logs, but it seems to me that it's not including the extra lib and include\n> directories when it runs the test compile, given the command line it's\n> reporting:\n>\n> cl\n> C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n>\n> Bug, or am I doing something silly?\n>\n>\n> --\n> Dave Page\n> pgAdmin: https://www.pgadmin.org\n> PostgreSQL: https://www.postgresql.org\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nSandeep Thakkar\n\nHi Dave,Is the .pc file generated after the successful build of zlib? If yes, then meson should be able to detect the installation ideallyOn Mon, May 20, 2024 at 4:28 PM Dave Page <[email protected]> wrote:HiI'm working on updating the build of PostgreSQL that pgAdmin uses in its Windows installers to use Meson ready for the v17 release. I'm using Visual Studio 2022, on Windows Server 2022.I've been unable to persuade Meson to detect zlib, whilst OpenSSL seems to be fine.The dependencies have been built and installed as follows: mkdir c:\\build64 wget https://zlib.net/zlib-1.3.2.tar.gz tar -zxvf zlib-1.3.2.tar.gz cd zlib-1.3.2 cmake -DCMAKE_INSTALL_PREFIX=C:/build64/zlib -G \"Visual Studio 17 2022\" . 
msbuild ALL_BUILD.vcxproj /p:Configuration=Release msbuild RUN_TESTS.vcxproj /p:Configuration=Release msbuild INSTALL.vcxproj /p:Configuration=Release cd .. wget https://www.openssl.org/source/openssl-3.0.13.tar.gz tar -zxvf openssl-3.0.13.tar.gz cd openssl-3.0.013 perl Configure VC-WIN64A no-asm --prefix=C:\\build64\\openssl no-ssl3 no-comp nmake nmake test nmake install cd ..This results in the following headers and libraries being installed for zlib:C:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\include Volume in drive C has no label. Volume Serial Number is 3AAD-5864 Directory of C:\\build64\\zlib\\include17/05/2024  15:56    <DIR>          .17/05/2024  15:56    <DIR>          ..17/05/2024  15:54            17,096 zconf.h22/01/2024  19:32            96,829 zlib.h               2 File(s)        113,925 bytes               2 Dir(s)  98,842,726,400 bytes freeC:\\Users\\dpage\\git\\postgresql>dir C:\\build64\\zlib\\lib Volume in drive C has no label. Volume Serial Number is 3AAD-5864 Directory of C:\\build64\\zlib\\lib17/05/2024  17:01    <DIR>          .17/05/2024  15:56    <DIR>          ..17/05/2024  15:55            16,638 zlib.lib17/05/2024  15:55           184,458 zlibstatic.lib               2 File(s)        201,096 bytes               2 Dir(s)  98,842,726,400 bytes freeI then attempt to build PostgreSQL: meson setup build -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib -Dssl=openssl -Dzlib=enabled --prefix=c:/build64/pgsqlWhich results in the output in output.txt, indicating that OpenSSL was correctly found, but zlib was not. I've also attached the meson log.I have very little experience with Meson, and even less interpreting it's logs, but it seems to me that it's not including the extra lib and include directories when it runs the test compile, given the command line it's reporting:cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-Bug, or am I doing something silly?-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com\n-- Sandeep Thakkar", "msg_date": "Tue, 21 May 2024 12:49:33 +0530", "msg_from": "Sandeep Thakkar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi,\n\nOn Tue, 21 May 2024 at 10:20, Sandeep Thakkar\n<[email protected]> wrote:\n>\n> Hi Dave,\n>\n> Is the .pc file generated after the successful build of zlib? If yes, then meson should be able to detect the installation ideally\n\nIf meson is not able to find the .pc file automatically, using 'meson\nsetup ... --pkg-config-path $ZLIB_PC_PATH' might help.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Tue, 21 May 2024 12:14:43 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi Sandeep, Nazir,\n\nOn Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> Hi,\n>\n> On Tue, 21 May 2024 at 10:20, Sandeep Thakkar\n> <[email protected]> wrote:\n> >\n> > Hi Dave,\n> >\n> > Is the .pc file generated after the successful build of zlib? If yes,\n> then meson should be able to detect the installation ideally\n>\n> If meson is not able to find the .pc file automatically, using 'meson\n> setup ... 
--pkg-config-path $ZLIB_PC_PATH' might help.\n>\n\nThe problem is that on Windows there are no standard locations for a\nUnix-style development library installation such as this, so the chances\nare that the .pc file will point to entirely the wrong location.\n\nFor example, please see\nhttps://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a\nGithub action that builds a completely vanilla zlib using VC++. If you look\nat the uploaded artefact containing the build output and example the .pc\nfile, you'll see it references /zlib as the location, which is simply where\nI built it in that action. On a developer's machine that's almost certainly\nnot going to be where it actually ends up. For example, on the pgAdmin\nbuild farm, the dependencies all end up in C:\\build64\\[whatever]. On the\nsimilar Github action I'm building for PostgreSQL, that artefact will be\nunpacked into /build/zlib.\n\nOf course, for my own builds I can easily make everything use consistent\ndirectories, however most people who are likely to want to build PostgreSQL\nmay not want to also build all the dependencies themselves as well, as some\nare a lot more difficult than zlib. So what tends to happen is people find\nthird party builds or upstream official builds.\n\nI would therefore argue that if the .pc file that's found doesn't provide\ncorrect paths for us, then Meson should fall back to searching in the paths\nspecified on its command line for the appropriate libraries/headers (which\nis what it does for OpenSSL for example, as that doesn't include a .pc\nfile). This is also what happens with PG16 and earlier.\n\nOne other thing I will note is that PG16 and earlier try to use the wrong\nfilename for the import library. For years, it's been a requirement to do\nsomething like this: \"copy \\zlib\\lib\\zlib.lib \\zlib\\lib\\zdll.lib\" to make a\nbuild succeed against a \"vanilla\" zlib build. I haven't got as far as\nfiguring out if the same is true with Meson yet.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHi Sandeep, Nazir,On Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz <[email protected]> wrote:Hi,\n\nOn Tue, 21 May 2024 at 10:20, Sandeep Thakkar\n<[email protected]> wrote:\n>\n> Hi Dave,\n>\n> Is the .pc file generated after the successful build of zlib? If yes, then meson should be able to detect the installation ideally\n\nIf meson is not able to find the .pc file automatically, using 'meson\nsetup ... --pkg-config-path $ZLIB_PC_PATH' might help.The problem is that on Windows there are no standard locations for a Unix-style development library installation such as this, so the chances are that the .pc file will point to entirely the wrong location.For example, please see https://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a Github action that builds a completely vanilla zlib using VC++. If you look at the uploaded artefact containing the build output and example the .pc file, you'll see it references /zlib as the location, which is simply where I built it in that action. On a developer's machine that's almost certainly not going to be where it actually ends up. For example, on the pgAdmin build farm, the dependencies all end up in C:\\build64\\[whatever]. 
On the similar Github action I'm building for PostgreSQL, that artefact will be unpacked into /build/zlib.Of course, for my own builds I can easily make everything use consistent directories, however most people who are likely to want to build PostgreSQL may not want to also build all the dependencies themselves as well, as some are a lot more difficult than zlib. So what tends to happen is people find third party builds or upstream official builds. I would therefore argue that if the .pc file that's found doesn't provide correct paths for us, then Meson should fall back to searching in the paths specified on its command line for the appropriate libraries/headers (which is what it does for OpenSSL for example, as that doesn't include a .pc file). This is also what happens with PG16 and earlier.One other thing I will note is that PG16 and earlier try to use the wrong filename for the import library. For years, it's been a requirement to do something like this: \"copy \\zlib\\lib\\zlib.lib \\zlib\\lib\\zdll.lib\" to make a build succeed against a \"vanilla\" zlib build. I haven't got as far as figuring out if the same is true with Meson yet.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 21 May 2024 10:41:45 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi Dave,\n\n\nOn Tue, May 21, 2024 at 3:12 PM Dave Page <[email protected]> wrote:\n\n> Hi Sandeep, Nazir,\n>\n> On Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> On Tue, 21 May 2024 at 10:20, Sandeep Thakkar\n>> <[email protected]> wrote:\n>> >\n>> > Hi Dave,\n>> >\n>> > Is the .pc file generated after the successful build of zlib? If yes,\n>> then meson should be able to detect the installation ideally\n>>\n>> If meson is not able to find the .pc file automatically, using 'meson\n>> setup ... --pkg-config-path $ZLIB_PC_PATH' might help.\n>>\n>\n> The problem is that on Windows there are no standard locations for a\n> Unix-style development library installation such as this, so the chances\n> are that the .pc file will point to entirely the wrong location.\n>\n> For example, please see\n> https://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a\n> Github action that builds a completely vanilla zlib using VC++. If you look\n> at the uploaded artefact containing the build output and example the .pc\n> file, you'll see it references /zlib as the location, which is simply where\n> I built it in that action. On a developer's machine that's almost certainly\n> not going to be where it actually ends up. For example, on the pgAdmin\n> build farm, the dependencies all end up in C:\\build64\\[whatever]. On the\n> similar Github action I'm building for PostgreSQL, that artefact will be\n> unpacked into /build/zlib.\n>\n>\nThe above link returned 404. But I found a successful build at\nhttps://github.com/dpage/winpgbuild/actions/runs/9175426807. I downloaded\nthe artifact but didn't find .pc file as I wanted to look into the content\nof that file.\n\nI had a word with Murali who mentioned he encountered a similar issue while\nbuilding PG17 on windows. 
He worked-around is by using a template .pc file\nthat includes these lines:\n--\nprefix=${pcfiledir}/../..\nexec_prefix=${prefix}\nlibdir=${prefix}/lib\nsharedlibdir=${prefix}/lib\nincludedir=${prefix}/include\n--\n\nBut in general I agree with you on the issue of Meson's dependency on\npkgconfig files to detect the third party libraries.\n\nOf course, for my own builds I can easily make everything use consistent\n> directories, however most people who are likely to want to build PostgreSQL\n> may not want to also build all the dependencies themselves as well, as some\n> are a lot more difficult than zlib. So what tends to happen is people find\n> third party builds or upstream official builds.\n>\n> I would therefore argue that if the .pc file that's found doesn't provide\n> correct paths for us, then Meson should fall back to searching in the paths\n> specified on its command line for the appropriate libraries/headers (which\n> is what it does for OpenSSL for example, as that doesn't include a .pc\n> file). This is also what happens with PG16 and earlier.\n>\n> One other thing I will note is that PG16 and earlier try to use the wrong\n> filename for the import library. For years, it's been a requirement to do\n> something like this: \"copy \\zlib\\lib\\zlib.lib \\zlib\\lib\\zdll.lib\" to make a\n> build succeed against a \"vanilla\" zlib build. I haven't got as far as\n> figuring out if the same is true with Meson yet.\n>\n> --\n> Dave Page\n> pgAdmin: https://www.pgadmin.org\n> PostgreSQL: https://www.postgresql.org\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nSandeep Thakkar\n\nHi Dave,On Tue, May 21, 2024 at 3:12 PM Dave Page <[email protected]> wrote:Hi Sandeep, Nazir,On Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz <[email protected]> wrote:Hi,\n\nOn Tue, 21 May 2024 at 10:20, Sandeep Thakkar\n<[email protected]> wrote:\n>\n> Hi Dave,\n>\n> Is the .pc file generated after the successful build of zlib? If yes, then meson should be able to detect the installation ideally\n\nIf meson is not able to find the .pc file automatically, using 'meson\nsetup ... --pkg-config-path $ZLIB_PC_PATH' might help.The problem is that on Windows there are no standard locations for a Unix-style development library installation such as this, so the chances are that the .pc file will point to entirely the wrong location.For example, please see https://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a Github action that builds a completely vanilla zlib using VC++. If you look at the uploaded artefact containing the build output and example the .pc file, you'll see it references /zlib as the location, which is simply where I built it in that action. On a developer's machine that's almost certainly not going to be where it actually ends up. For example, on the pgAdmin build farm, the dependencies all end up in C:\\build64\\[whatever]. On the similar Github action I'm building for PostgreSQL, that artefact will be unpacked into /build/zlib.The above link returned 404. But I found a successful build at https://github.com/dpage/winpgbuild/actions/runs/9175426807. I downloaded the artifact but didn't find .pc file as I wanted to look into the content of that file.I had a word with Murali who mentioned he encountered a similar issue while building PG17 on windows. 
He worked-around is by using a template .pc file that includes these lines:--prefix=${pcfiledir}/../..exec_prefix=${prefix}libdir=${prefix}/libsharedlibdir=${prefix}/libincludedir=${prefix}/include--But in general I agree with you on the issue of Meson's dependency on pkgconfig files to detect the third party libraries. Of course, for my own builds I can easily make everything use consistent directories, however most people who are likely to want to build PostgreSQL may not want to also build all the dependencies themselves as well, as some are a lot more difficult than zlib. So what tends to happen is people find third party builds or upstream official builds. I would therefore argue that if the .pc file that's found doesn't provide correct paths for us, then Meson should fall back to searching in the paths specified on its command line for the appropriate libraries/headers (which is what it does for OpenSSL for example, as that doesn't include a .pc file). This is also what happens with PG16 and earlier.One other thing I will note is that PG16 and earlier try to use the wrong filename for the import library. For years, it's been a requirement to do something like this: \"copy \\zlib\\lib\\zlib.lib \\zlib\\lib\\zdll.lib\" to make a build succeed against a \"vanilla\" zlib build. I haven't got as far as figuring out if the same is true with Meson yet.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com\n-- Sandeep Thakkar", "msg_date": "Tue, 21 May 2024 19:42:30 +0530", "msg_from": "Sandeep Thakkar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi\n\nOn Tue, 21 May 2024 at 15:12, Sandeep Thakkar <\[email protected]> wrote:\n\n> Hi Dave,\n>\n>\n> On Tue, May 21, 2024 at 3:12 PM Dave Page <[email protected]> wrote:\n>\n>> Hi Sandeep, Nazir,\n>>\n>> On Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz <[email protected]>\n>> wrote:\n>>\n>>> Hi,\n>>>\n>>> On Tue, 21 May 2024 at 10:20, Sandeep Thakkar\n>>> <[email protected]> wrote:\n>>> >\n>>> > Hi Dave,\n>>> >\n>>> > Is the .pc file generated after the successful build of zlib? If yes,\n>>> then meson should be able to detect the installation ideally\n>>>\n>>> If meson is not able to find the .pc file automatically, using 'meson\n>>> setup ... --pkg-config-path $ZLIB_PC_PATH' might help.\n>>>\n>>\n>> The problem is that on Windows there are no standard locations for a\n>> Unix-style development library installation such as this, so the chances\n>> are that the .pc file will point to entirely the wrong location.\n>>\n>> For example, please see\n>> https://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a\n>> Github action that builds a completely vanilla zlib using VC++. If you look\n>> at the uploaded artefact containing the build output and example the .pc\n>> file, you'll see it references /zlib as the location, which is simply where\n>> I built it in that action. On a developer's machine that's almost certainly\n>> not going to be where it actually ends up. For example, on the pgAdmin\n>> build farm, the dependencies all end up in C:\\build64\\[whatever]. On the\n>> similar Github action I'm building for PostgreSQL, that artefact will be\n>> unpacked into /build/zlib.\n>>\n>>\n> The above link returned 404. But I found a successful build at\n> https://github.com/dpage/winpgbuild/actions/runs/9175426807. 
I downloaded\n> the artifact but didn't find .pc file as I wanted to look into the content\n> of that file.\n>\n\nYeah, sorry - that was an old one.\n\nFollowing some offline discussion with Andrew I realised I was building\nzlib incorrectly - using cmake/msbuild instead of nmake and some manual\ncopying. Whilst the former does create a working library and a sane looking\ninstallation, it's not the recommended method, which is documented in a\nvery non-obvious way - far less obvious than \"oh look, there's a cmake\nconfig - let's use that\".\n\nThe new build is done using the recommended method, which works with PG16\nand below out of the box with no need to rename any files.\n\n\n>\n> I had a word with Murali who mentioned he encountered a similar issue\n> while building PG17 on windows. He worked-around is by using a template .pc\n> file that includes these lines:\n> --\n> prefix=${pcfiledir}/../..\n> exec_prefix=${prefix}\n> libdir=${prefix}/lib\n> sharedlibdir=${prefix}/lib\n> includedir=${prefix}/include\n> --\n>\n\nThe issue here is that there is no .pc file created with the correct way of\nbuilding zlib, and even if there were (or I created a dummy one),\npkg-config isn't really a thing on Windows.\n\nI'd also note that from what Andrew has shown me of the zlib installation\non the buildfarm member drongo, there is no .pc file there either, and yet\nseems to work fine (and my zlib installation now has the exact same set of\nfiles as his does).\n\n\n>\n> But in general I agree with you on the issue of Meson's dependency on\n> pkgconfig files to detect the third party libraries.\n>\n> Of course, for my own builds I can easily make everything use consistent\n>> directories, however most people who are likely to want to build PostgreSQL\n>> may not want to also build all the dependencies themselves as well, as some\n>> are a lot more difficult than zlib. So what tends to happen is people find\n>> third party builds or upstream official builds.\n>>\n>> I would therefore argue that if the .pc file that's found doesn't provide\n>> correct paths for us, then Meson should fall back to searching in the paths\n>> specified on its command line for the appropriate libraries/headers (which\n>> is what it does for OpenSSL for example, as that doesn't include a .pc\n>> file). This is also what happens with PG16 and earlier.\n>>\n>> One other thing I will note is that PG16 and earlier try to use the wrong\n>> filename for the import library. For years, it's been a requirement to do\n>> something like this: \"copy \\zlib\\lib\\zlib.lib \\zlib\\lib\\zdll.lib\" to make a\n>> build succeed against a \"vanilla\" zlib build. I haven't got as far as\n>> figuring out if the same is true with Meson yet.\n>>\n>> --\n>> Dave Page\n>> pgAdmin: https://www.pgadmin.org\n>> PostgreSQL: https://www.postgresql.org\n>> EDB: https://www.enterprisedb.com\n>>\n>>\n>\n> --\n> Sandeep Thakkar\n>\n>\n>\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Tue, 21 May 2024 at 15:12, Sandeep Thakkar <[email protected]> wrote:Hi Dave,On Tue, May 21, 2024 at 3:12 PM Dave Page <[email protected]> wrote:Hi Sandeep, Nazir,On Tue, 21 May 2024 at 10:14, Nazir Bilal Yavuz <[email protected]> wrote:Hi,\n\nOn Tue, 21 May 2024 at 10:20, Sandeep Thakkar\n<[email protected]> wrote:\n>\n> Hi Dave,\n>\n> Is the .pc file generated after the successful build of zlib? 
If yes, then meson should be able to detect the installation ideally\n\nIf meson is not able to find the .pc file automatically, using 'meson\nsetup ... --pkg-config-path $ZLIB_PC_PATH' might help.The problem is that on Windows there are no standard locations for a Unix-style development library installation such as this, so the chances are that the .pc file will point to entirely the wrong location.For example, please see https://github.com/dpage/winpgbuild/actions/runs/9172187335 which is a Github action that builds a completely vanilla zlib using VC++. If you look at the uploaded artefact containing the build output and example the .pc file, you'll see it references /zlib as the location, which is simply where I built it in that action. On a developer's machine that's almost certainly not going to be where it actually ends up. For example, on the pgAdmin build farm, the dependencies all end up in C:\\build64\\[whatever]. On the similar Github action I'm building for PostgreSQL, that artefact will be unpacked into /build/zlib.The above link returned 404. But I found a successful build at https://github.com/dpage/winpgbuild/actions/runs/9175426807. I downloaded the artifact but didn't find .pc file as I wanted to look into the content of that file.Yeah, sorry - that was an old one.Following some offline discussion with Andrew I realised I was building zlib incorrectly - using cmake/msbuild instead of nmake and some manual copying. Whilst the former does create a working library and a sane looking installation, it's not the recommended method, which is documented in a very non-obvious way - far less obvious than \"oh look, there's a cmake config - let's use that\".The new build is done using the recommended method, which works with PG16 and below out of the box with no need to rename any files. I had a word with Murali who mentioned he encountered a similar issue while building PG17 on windows. He worked-around is by using a template .pc file that includes these lines:--prefix=${pcfiledir}/../..exec_prefix=${prefix}libdir=${prefix}/libsharedlibdir=${prefix}/libincludedir=${prefix}/include--The issue here is that there is no .pc file created with the correct way of building zlib, and even if there were (or I created a dummy one), pkg-config isn't really a thing on Windows.I'd also note that from what Andrew has shown me of the zlib installation on the buildfarm member drongo, there is no .pc file there either, and yet seems to work fine (and my zlib installation now has the exact same set of files as his does). But in general I agree with you on the issue of Meson's dependency on pkgconfig files to detect the third party libraries. Of course, for my own builds I can easily make everything use consistent directories, however most people who are likely to want to build PostgreSQL may not want to also build all the dependencies themselves as well, as some are a lot more difficult than zlib. So what tends to happen is people find third party builds or upstream official builds. I would therefore argue that if the .pc file that's found doesn't provide correct paths for us, then Meson should fall back to searching in the paths specified on its command line for the appropriate libraries/headers (which is what it does for OpenSSL for example, as that doesn't include a .pc file). This is also what happens with PG16 and earlier.One other thing I will note is that PG16 and earlier try to use the wrong filename for the import library. 
For years, it's been a requirement to do something like this: \"copy \\zlib\\lib\\zlib.lib \\zlib\\lib\\zdll.lib\" to make a build succeed against a \"vanilla\" zlib build. I haven't got as far as figuring out if the same is true with Meson yet.-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com\n-- Sandeep Thakkar\n-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 21 May 2024 15:24:05 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi,\n\nOn 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> I have very little experience with Meson, and even less interpreting it's\n> logs, but it seems to me that it's not including the extra lib and include\n> directories when it runs the test compile, given the command line it's\n> reporting:\n> \n> cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n> \n> Bug, or am I doing something silly?\n\nIt's a buglet. We rely on meson's internal fallback detection of zlib, if it's\nnot provided via pkg-config or cmake. But it doesn't know about our\nextra_include_dirs parameter. We should probably fix that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 May 2024 08:04:04 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "On Tue, 21 May 2024 at 16:04, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> > I have very little experience with Meson, and even less interpreting it's\n> > logs, but it seems to me that it's not including the extra lib and\n> include\n> > directories when it runs the test compile, given the command line it's\n> > reporting:\n> >\n> > cl\n> C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n> > /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od\n> /Oi-\n> >\n> > Bug, or am I doing something silly?\n>\n> It's a buglet. We rely on meson's internal fallback detection of zlib, if\n> it's\n> not provided via pkg-config or cmake. But it doesn't know about our\n> extra_include_dirs parameter. We should probably fix that...\n>\n\nOh good, then I'm not going bonkers. I'm still curious about how it works\nfor Andrew but not me, however fixing that buglet should solve my issue,\nand would be sensible behaviour.\n\nThanks!\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 21 May 2024 at 16:04, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> I have very little experience with Meson, and even less interpreting it's\n> logs, but it seems to me that it's not including the extra lib and include\n> directories when it runs the test compile, given the command line it's\n> reporting:\n> \n> cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n> \n> Bug, or am I doing something silly?\n\nIt's a buglet. We rely on meson's internal fallback detection of zlib, if it's\nnot provided via pkg-config or cmake. 
But it doesn't know about our\nextra_include_dirs parameter. We should probably fix that...Oh good, then I'm not going bonkers. I'm still curious about how it works for Andrew but not me, however fixing that buglet should solve my issue, and would be sensible behaviour.Thanks! -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Tue, 21 May 2024 16:24:23 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi,\n\nOn 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> I then attempt to build PostgreSQL:\n> \n> meson setup build\n> -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include\n> -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib -Dssl=openssl\n> -Dzlib=enabled --prefix=c:/build64/pgsql\n> \n> Which results in the output in output.txt, indicating that OpenSSL was\n> correctly found, but zlib was not. I've also attached the meson log.\n\nI forgot to mention that earlier: Assuming you're building something to be\ndistributed, I'd recommend --auto-features=enabled/disabled and specifying\nspecifically which dependencies you want to be used.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 May 2024 10:00:12 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "\nOn 2024-05-21 Tu 11:04, Andres Freund wrote:\n> Hi,\n>\n> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n>> I have very little experience with Meson, and even less interpreting it's\n>> logs, but it seems to me that it's not including the extra lib and include\n>> directories when it runs the test compile, given the command line it's\n>> reporting:\n>>\n>> cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n>> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n>>\n>> Bug, or am I doing something silly?\n> It's a buglet. We rely on meson's internal fallback detection of zlib, if it's\n> not provided via pkg-config or cmake. But it doesn't know about our\n> extra_include_dirs parameter. We should probably fix that...\n>\n\nYeah. Meanwhile, what I got working on a totally fresh Windows + VS \ninstall was instead of using extra_include_dirs etc to add the relevant \ndirectories to the environment LIB and INCLUDE settings before calling \n`\"meson setup\".\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 21 May 2024 15:54:52 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "On Tue May 21, 2024 at 10:04 AM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> > I have very little experience with Meson, and even less interpreting it's\n> > logs, but it seems to me that it's not including the extra lib and include\n> > directories when it runs the test compile, given the command line it's\n> > reporting:\n> > \n> > cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n> > /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n> > \n> > Bug, or am I doing something silly?\n>\n> It's a buglet. We rely on meson's internal fallback detection of zlib, if it's\n> not provided via pkg-config or cmake. 
But it doesn't know about our\n> extra_include_dirs parameter. We should probably fix that...\n\nHere is the relevant Meson code for finding zlib in the Postgres tree:\n\n\tpostgres_inc_d = ['src/include']\n\tpostgres_inc_d += get_option('extra_include_dirs')\n\t...\n\tpostgres_inc = [include_directories(postgres_inc_d)]\n\t...\n\tzlibopt = get_option('zlib')\n\tzlib = not_found_dep\n\tif not zlibopt.disabled()\n\t zlib_t = dependency('zlib', required: zlibopt)\n\n\t if zlib_t.type_name() == 'internal'\n\t\t# if fallback was used, we don't need to test if headers are present (they\n\t\t# aren't built yet, so we can't test)\n\t\tzlib = zlib_t\n\t elif not zlib_t.found()\n\t\twarning('did not find zlib')\n\t elif not cc.has_header('zlib.h',\n\t\t args: test_c_args, include_directories: postgres_inc,\n\t\t dependencies: [zlib_t], required: zlibopt)\n\t\twarning('zlib header not found')\n\t elif not cc.has_type('z_streamp',\n\t\t dependencies: [zlib_t], prefix: '#include <zlib.h>',\n\t\t args: test_c_args, include_directories: postgres_inc)\n\t\tif zlibopt.enabled()\n\t\t error('zlib version is too old')\n\t\telse\n\t\t warning('zlib version is too old')\n\t\tendif\n\t else\n\t\tzlib = zlib_t\n\t endif\n\n\t if zlib.found()\n\t\tcdata.set('HAVE_LIBZ', 1)\n\t endif\n\tendif\n\nYou can see that we do pass the include dirs to the has_header check. \nSomething seems to be going wrong here since your extra_include_dirs \nisn't being properly translated to include arguments.\n\n-- \nTristan Partin\nhttps://tristan.partin.io\n\n\n", "msg_date": "Tue, 21 May 2024 17:09:54 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "On Tue, 21 May 2024 at 18:00, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> > I then attempt to build PostgreSQL:\n> >\n> > meson setup build\n> > -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include\n> > -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib -Dssl=openssl\n> > -Dzlib=enabled --prefix=c:/build64/pgsql\n> >\n> > Which results in the output in output.txt, indicating that OpenSSL was\n> > correctly found, but zlib was not. I've also attached the meson log.\n>\n> I forgot to mention that earlier: Assuming you're building something to be\n> distributed, I'd recommend --auto-features=enabled/disabled and specifying\n> specifically which dependencies you want to be used.\n>\n\nGood idea - thanks.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 21 May 2024 at 18:00, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> I then attempt to build PostgreSQL:\n> \n>  meson setup build\n> -Dextra_include_dirs=C:/build64/openssl/include,C:/build64/zlib/include\n> -Dextra_lib_dirs=C:/build64/openssl/lib,C:/build64/zlib/lib -Dssl=openssl\n> -Dzlib=enabled --prefix=c:/build64/pgsql\n> \n> Which results in the output in output.txt, indicating that OpenSSL was\n> correctly found, but zlib was not. I've also attached the meson log.\n\nI forgot to mention that earlier: Assuming you're building something to be\ndistributed, I'd recommend --auto-features=enabled/disabled and specifying\nspecifically which dependencies you want to be used.Good idea - thanks. 
-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 22 May 2024 12:12:01 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "On Tue, 21 May 2024 at 20:54, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-05-21 Tu 11:04, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> >> I have very little experience with Meson, and even less interpreting\n> it's\n> >> logs, but it seems to me that it's not including the extra lib and\n> include\n> >> directories when it runs the test compile, given the command line it's\n> >> reporting:\n> >>\n> >> cl\n> C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n> >> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od\n> /Oi-\n> >>\n> >> Bug, or am I doing something silly?\n> > It's a buglet. We rely on meson's internal fallback detection of zlib,\n> if it's\n> > not provided via pkg-config or cmake. But it doesn't know about our\n> > extra_include_dirs parameter. We should probably fix that...\n> >\n>\n> Yeah. Meanwhile, what I got working on a totally fresh Windows + VS\n> install was instead of using extra_include_dirs etc to add the relevant\n> directories to the environment LIB and INCLUDE settings before calling\n> `\"meson setup\".\n>\n\nYes, that works for me too.\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nOn Tue, 21 May 2024 at 20:54, Andrew Dunstan <[email protected]> wrote:\nOn 2024-05-21 Tu 11:04, Andres Freund wrote:\n> Hi,\n>\n> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n>> I have very little experience with Meson, and even less interpreting it's\n>> logs, but it seems to me that it's not including the extra lib and include\n>> directories when it runs the test compile, given the command line it's\n>> reporting:\n>>\n>> cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n>> /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n>>\n>> Bug, or am I doing something silly?\n> It's a buglet. We rely on meson's internal fallback detection of zlib, if it's\n> not provided via pkg-config or cmake. But it doesn't know about our\n> extra_include_dirs parameter. We should probably fix that...\n>\n\nYeah. Meanwhile, what I got working on a totally fresh Windows + VS \ninstall was instead of using extra_include_dirs etc to add the relevant \ndirectories to the environment LIB and INCLUDE settings before calling \n`\"meson setup\".Yes, that works for me too. -- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 22 May 2024 12:12:48 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib detection in Meson on Windows broken?" 
}, { "msg_contents": "Hi,\n\nOn Tue, 21 May 2024 at 18:24, Dave Page <[email protected]> wrote:\n>\n>\n>\n> On Tue, 21 May 2024 at 16:04, Andres Freund <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n>> > I have very little experience with Meson, and even less interpreting it's\n>> > logs, but it seems to me that it's not including the extra lib and include\n>> > directories when it runs the test compile, given the command line it's\n>> > reporting:\n>> >\n>> > cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n>> > /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n>> >\n>> > Bug, or am I doing something silly?\n>>\n>> It's a buglet. We rely on meson's internal fallback detection of zlib, if it's\n>> not provided via pkg-config or cmake. But it doesn't know about our\n>> extra_include_dirs parameter. We should probably fix that...\n>\n>\n> Oh good, then I'm not going bonkers. I'm still curious about how it works for Andrew but not me, however fixing that buglet should solve my issue, and would be sensible behaviour.\n>\n> Thanks!\n\nI tried to install your latest zlib artifact (nmake one) to the\nWindows CI images (not the official ones) [1]. Then, I used the\ndefault meson.build file to build but meson could not find the zlib.\nAfter that, I modified it like you suggested before; I used a\n'cc.find_library()' to find zlib as a fallback method and it seems it\nworked [2]. Please see meson setup logs below [3], does something\nsimilar to the attached solve your problem?\n\nThe interesting thing is, I also tried this 'cc.find_library' method\nwith your old artifact (cmake one). It was able to find zlib but all\ntests failed [4].\n\nExperimental zlib meson.build diff is attached.\n\n[1] https://cirrus-ci.com/task/6736867247259648\n[2] https://cirrus-ci.com/build/5286228755480576\n[3]\nRun-time dependency zlib found: NO (tried pkgconfig, cmake and system)\nHas header \"zlib.h\" : YES\nLibrary zlib found: YES\n...\n External libraries\n...\n zlib : YES\n...\n[4] https://cirrus-ci.com/task/5208433811521536\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 22 May 2024 16:11:09 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi\n\nOn Wed, 22 May 2024 at 14:11, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> Hi,\n>\n> On Tue, 21 May 2024 at 18:24, Dave Page <[email protected]> wrote:\n> >\n> >\n> >\n> > On Tue, 21 May 2024 at 16:04, Andres Freund <[email protected]> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n> >> > I have very little experience with Meson, and even less interpreting\n> it's\n> >> > logs, but it seems to me that it's not including the extra lib and\n> include\n> >> > directories when it runs the test compile, given the command line it's\n> >> > reporting:\n> >> >\n> >> > cl\n> C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n> >> > /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od\n> /Oi-\n> >> >\n> >> > Bug, or am I doing something silly?\n> >>\n> >> It's a buglet. We rely on meson's internal fallback detection of zlib,\n> if it's\n> >> not provided via pkg-config or cmake. But it doesn't know about our\n> >> extra_include_dirs parameter. We should probably fix that...\n> >\n> >\n> > Oh good, then I'm not going bonkers. 
I'm still curious about how it\n> works for Andrew but not me, however fixing that buglet should solve my\n> issue, and would be sensible behaviour.\n> >\n> > Thanks!\n>\n> I tried to install your latest zlib artifact (nmake one) to the\n> Windows CI images (not the official ones) [1]. Then, I used the\n> default meson.build file to build but meson could not find the zlib.\n> After that, I modified it like you suggested before; I used a\n> 'cc.find_library()' to find zlib as a fallback method and it seems it\n> worked [2]. Please see meson setup logs below [3], does something\n> similar to the attached solve your problem?\n>\n\nThat patch does solve my problem - thank you!\n\n\n>\n> The interesting thing is, I also tried this 'cc.find_library' method\n> with your old artifact (cmake one). It was able to find zlib but all\n> tests failed [4].\n>\n\nVery odd. Whilst I haven't used that particular build elsewhere, we've been\nbuilding PostgreSQL and shipping client utilities with pgAdmin using\ncmake-built zlib for years.\n\n\n>\n> Experimental zlib meson.build diff is attached.\n>\n> [1] https://cirrus-ci.com/task/6736867247259648\n> [2] https://cirrus-ci.com/build/5286228755480576\n> [3]\n> Run-time dependency zlib found: NO (tried pkgconfig, cmake and system)\n> Has header \"zlib.h\" : YES\n> Library zlib found: YES\n> ...\n> External libraries\n> ...\n> zlib : YES\n> ...\n> [4] https://cirrus-ci.com/task/5208433811521536\n>\n> --\n> Regards,\n> Nazir Bilal Yavuz\n> Microsoft\n>\n\n\n-- \nDave Page\npgAdmin: https://www.pgadmin.org\nPostgreSQL: https://www.postgresql.org\nEDB: https://www.enterprisedb.com\n\nHiOn Wed, 22 May 2024 at 14:11, Nazir Bilal Yavuz <[email protected]> wrote:Hi,\n\nOn Tue, 21 May 2024 at 18:24, Dave Page <[email protected]> wrote:\n>\n>\n>\n> On Tue, 21 May 2024 at 16:04, Andres Freund <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> On 2024-05-20 11:58:05 +0100, Dave Page wrote:\n>> > I have very little experience with Meson, and even less interpreting it's\n>> > logs, but it seems to me that it's not including the extra lib and include\n>> > directories when it runs the test compile, given the command line it's\n>> > reporting:\n>> >\n>> > cl C:\\Users\\dpage\\git\\postgresql\\build\\meson-private\\tmpg_h4xcue\\testfile.c\n>> > /nologo /showIncludes /utf-8 /EP /nologo /showIncludes /utf-8 /EP /Od /Oi-\n>> >\n>> > Bug, or am I doing something silly?\n>>\n>> It's a buglet. We rely on meson's internal fallback detection of zlib, if it's\n>> not provided via pkg-config or cmake. But it doesn't know about our\n>> extra_include_dirs parameter. We should probably fix that...\n>\n>\n> Oh good, then I'm not going bonkers. I'm still curious about how it works for Andrew but not me, however fixing that buglet should solve my issue, and would be sensible behaviour.\n>\n> Thanks!\n\nI tried to install your latest zlib artifact (nmake one) to the\nWindows CI images (not the official ones) [1]. Then, I used the\ndefault meson.build file to build but meson could not find the zlib.\nAfter that, I modified it like you suggested before; I used a\n'cc.find_library()' to find zlib as a fallback method and it seems it\nworked [2]. Please see meson setup logs below [3], does something\nsimilar to the attached solve your problem?That patch does solve my problem - thank you! \n\nThe interesting thing is, I also tried this 'cc.find_library' method\nwith your old artifact (cmake one). It was able to find zlib but all\ntests failed [4].Very odd. 
Whilst I haven't used that particular build elsewhere, we've been building PostgreSQL and shipping client utilities with pgAdmin using cmake-built zlib for years. \n\nExperimental zlib meson.build diff is attached.\n\n[1] https://cirrus-ci.com/task/6736867247259648\n[2] https://cirrus-ci.com/build/5286228755480576\n[3]\nRun-time dependency zlib found: NO (tried pkgconfig, cmake and system)\nHas header \"zlib.h\" : YES\nLibrary zlib found: YES\n...\n  External libraries\n...\n    zlib                   : YES\n...\n[4] https://cirrus-ci.com/task/5208433811521536\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n-- Dave PagepgAdmin: https://www.pgadmin.orgPostgreSQL: https://www.postgresql.orgEDB: https://www.enterprisedb.com", "msg_date": "Wed, 22 May 2024 15:20:56 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "Hi,\n\nOn Wed, 22 May 2024 at 17:21, Dave Page <[email protected]> wrote:\n>\n> Hi\n>\n> On Wed, 22 May 2024 at 14:11, Nazir Bilal Yavuz <[email protected]> wrote:\n>>\n>>\n>> I tried to install your latest zlib artifact (nmake one) to the\n>> Windows CI images (not the official ones) [1]. Then, I used the\n>> default meson.build file to build but meson could not find the zlib.\n>> After that, I modified it like you suggested before; I used a\n>> 'cc.find_library()' to find zlib as a fallback method and it seems it\n>> worked [2]. Please see meson setup logs below [3], does something\n>> similar to the attached solve your problem?\n>\n>\n> That patch does solve my problem - thank you!\n\nI am glad that it worked!\n\nDo you think that we need to have this patch in the upstream Postgres?\nI am not sure because:\n- There is a case that meson is able to find zlib but tests fail.\n- This might be a band-aid fix rather than a permanent fix.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Wed, 22 May 2024 19:49:50 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: zlib detection in Meson on Windows broken?" }, { "msg_contents": "On Wed, 22 May 2024 at 17:50, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> Hi,\n>\n> On Wed, 22 May 2024 at 17:21, Dave Page <[email protected]> wrote:\n> >\n> > Hi\n> >\n> > On Wed, 22 May 2024 at 14:11, Nazir Bilal Yavuz <[email protected]>\n> wrote:\n> >>\n> >>\n> >> I tried to install your latest zlib artifact (nmake one) to the\n> >> Windows CI images (not the official ones) [1]. Then, I used the\n> >> default meson.build file to build but meson could not find the zlib.\n> >> After that, I modified it like you suggested before; I used a\n> >> 'cc.find_library()' to find zlib as a fallback method and it seems it\n> >> worked [2]. 
Please see meson setup logs below [3], does something\n> >> similar to the attached solve your problem?\n> >\n> >\n> > That patch does solve my problem - thank you!\n>\n> I am glad that it worked!\n>\n> Do you think that we need to have this patch in the upstream Postgres?\n> I am not sure because:\n> - There is a case that meson is able to find zlib but tests fail.\n> - This might be a band-aid fix rather than a permanent fix.\n\n\nYes I do:\n\n- This is the documented way to build/install zlib on Windows.\n- The behaviour with the patch matches <= PG16\n- The behaviour with the patch is consistent with OpenSSL detection, and\n(from a quick, unrelated test), libxml2 detection.\n\nThanks!\n\n>\n\nOn Wed, 22 May 2024 at 17:50, Nazir Bilal Yavuz <[email protected]> wrote:Hi,\n\nOn Wed, 22 May 2024 at 17:21, Dave Page <[email protected]> wrote:\n>\n> Hi\n>\n> On Wed, 22 May 2024 at 14:11, Nazir Bilal Yavuz <[email protected]> wrote:\n>>\n>>\n>> I tried to install your latest zlib artifact (nmake one) to the\n>> Windows CI images (not the official ones) [1]. Then, I used the\n>> default meson.build file to build but meson could not find the zlib.\n>> After that, I modified it like you suggested before; I used a\n>> 'cc.find_library()' to find zlib as a fallback method and it seems it\n>> worked [2]. Please see meson setup logs below [3], does something\n>> similar to the attached solve your problem?\n>\n>\n> That patch does solve my problem - thank you!\n\nI am glad that it worked!\n\nDo you think that we need to have this patch in the upstream Postgres?\nI am not sure because:\n- There is a case that meson is able to find zlib but tests fail.\n- This might be a band-aid fix rather than a permanent fix.Yes I do:- This is the documented way to build/install zlib on Windows.- The behaviour with the patch matches <= PG16- The behaviour with the patch is consistent with OpenSSL detection, and (from a quick, unrelated test), libxml2 detection.Thanks!", "msg_date": "Wed, 22 May 2024 18:18:07 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": true, "msg_subject": "Re: zlib detection in Meson on Windows broken?" } ]
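Editor's note on the zlib/Meson thread above: the cc.find_library() fallback that Nazir tested was sent as an attachment and is not reproduced in this archive, so the snippet below is only a rough reconstruction of the idea, not the attached diff or any committed fix. Variable names such as not_found_dep, postgres_inc and test_c_args are taken from the meson.build excerpt quoted earlier in the thread; everything else (the exact structure, the error handling) is an assumption.

```
# Sketch only: if no zlib .pc or cmake config is found, fall back to searching
# the directories given via -Dextra_lib_dirs with cc.find_library().
zlibopt = get_option('zlib')
zlib = not_found_dep
if not zlibopt.disabled()
  # try pkg-config / cmake / system detection first, but do not fail yet
  zlib_t = dependency('zlib', required: false)

  if not zlib_t.found()
    # fallback: on MSVC 'zlib' matches the zlib.lib import library produced by
    # the documented nmake install (on Unix the library name would be 'z')
    zlib_t = cc.find_library('zlib',
      dirs: get_option('extra_lib_dirs'),
      has_headers: ['zlib.h'],
      header_include_directories: postgres_inc,
      required: false)
  endif

  if zlib_t.found() and cc.has_header('zlib.h',
      args: test_c_args, include_directories: postgres_inc,
      dependencies: [zlib_t])
    zlib = zlib_t
    cdata.set('HAVE_LIBZ', 1)
  elif zlibopt.enabled()
    error('could not find zlib')
  endif
endif
```

With zlib installed under C:\build64\zlib as in the thread, this path would be exercised by the same kind of invocation shown at the start of the discussion, e.g. `meson setup build -Dextra_include_dirs=C:/build64/zlib/include -Dextra_lib_dirs=C:/build64/zlib/lib -Dzlib=enabled`.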
[ { "msg_contents": "Hey,\n\nI'm trying to read a timestamp column as EPOCH.\nMy query is as follows.\n```\nSELECT EXTRACT(EPOCH FROM timestamp_column) FROM table;\n\ncolumn\n----------\n\n1716213097.86486\n```\nWhen running in the console this query gives valid epoch output which\nappears to be of type double.\n\nWhen trying to read the query response from the Datum, I get garbage values.\nI've tried various types and none of them read the correct value.\n```\n\nDatum current_timestamp = SPI_getbinval(SPI_tuptable->vals[i],\nSPI_tuptable->tupdesc, 5, &isnull);\n\ndouble current_time = DatumGetFloat8(current_timestamp); // prints 0\n\nint64 time = DatumGetUint64(current_timestamp); // prints 5293917674\n```\n\nCan you help me out with the correct way to read EPOCH values from datums?\n\nThanks,\nSushrut\n\nHey,I'm trying to read a timestamp column as EPOCH.My query is as follows.```SELECT EXTRACT(EPOCH FROM timestamp_column) FROM table;column----------1716213097.86486```When running in the console this query gives valid epoch output which appears to be of type double.When trying to read the query response from the Datum, I get garbage values.I've tried various types and none of them read the correct value.```Datum current_timestamp = SPI_getbinval(SPI_tuptable->vals[i], SPI_tuptable->tupdesc, 5, &isnull);\ndouble current_time = DatumGetFloat8(current_timestamp); // prints 0int64 time = DatumGetUint64(current_timestamp); // prints 5293917674```Can you help me out with the correct way to read EPOCH values from datums?Thanks,Sushrut", "msg_date": "Mon, 20 May 2024 20:07:13 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Reading timestamp values from Datums gives garbage values" }, { "msg_contents": "On 5/20/24 16:37, Sushrut Shivaswamy wrote:\n> Hey,\n> \n> I'm trying to read a timestamp column as EPOCH.\n> My query is as follows.\n> ```\n> SELECT EXTRACT(EPOCH FROM timestamp_column) FROM table;\n> \n> column\n> ----------\n> \n> 1716213097.86486\n> ```\n> When running in the console this query gives valid epoch output which\n> appears to be of type double.\n> \n> When trying to read the query response from the Datum, I get garbage values.\n> I've tried various types and none of them read the correct value.\n> ```\n> \n> Datum current_timestamp = SPI_getbinval(SPI_tuptable->vals[i],\n> SPI_tuptable->tupdesc, 5, &isnull);\n> \n> double current_time = DatumGetFloat8(current_timestamp); // prints 0\n> \n> int64 time = DatumGetUint64(current_timestamp); // prints 5293917674\n> ```\n> \n\nTimestampTz is int64, so using DatumGetInt64 is probably the simplest\nsolution. And it's the number of microseconds, so X/1e6 should give you\nthe epoch.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 20 May 2024 17:39:18 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reading timestamp values from Datums gives garbage values" }, { "msg_contents": "On 05/20/24 11:39, Tomas Vondra wrote:\n> On 5/20/24 16:37, Sushrut Shivaswamy wrote:\n>> I've tried various types and none of them read the correct value.\n>> ```\n>> ...\n>> double current_time = DatumGetFloat8(current_timestamp); // prints 0\n>>\n>> int64 time = DatumGetUint64(current_timestamp); // prints 5293917674\n>> ```\n> \n> TimestampTz is int64, so using DatumGetInt64 is probably the simplest\n> solution. 
And it's the number of microseconds, so X/1e6 should give you\n> the epoch.\n\nIndeed, the \"Postgres epoch\" is a fairly modern date (1 January 2000),\nso a signed representation is needed to express earlier dates.\n\nPossibly of interest for questions like these, some ongoing work in PL/Java\nis to capture knowledge like this in simple Java functional interfaces\nthat are (intended to be) sufficiently clear and documented to serve as\na parallel source of reference matter.\n\nFor example, what's there for TimestampTZ:\n\nhttps://tada.github.io/pljava/preview1.7/pljava-api/apidocs/org.postgresql.pljava/org/postgresql/pljava/adt/Datetime.TimestampTZ.html#method-detail\n\nA separation of concerns is involved, where these functional interfaces\nexpose and document a logical structure and, ideally, whatever semantic\nsubtleties may be inherent in it, but not physical details of how those\nbits might be shoehorned into the Datum. Physical layouts are encapsulated\nin Adapter classes as internal details. TimeTZ is a good example:\n\nhttps://tada.github.io/pljava/preview1.7/pljava-api/apidocs/org.postgresql.pljava/org/postgresql/pljava/adt/Datetime.TimeTZ.html#method-detail\n\nIt tells you of the µsSinceMidnight component, and secsWestOfPrimeMeridian\ncomponent, and the sign flip needed for other common representations of\nzone offsets that are positive _east_ of the prime meridian. It doesn't\nexpose the exact layout of those components in a Datum.\n\nFor your purposes, of course, you need the physical layout details too,\nmost easily found by reading the PG source. But my hope is that this\nparallel documentation of the logical structure may help in making\neffective use of what you find there.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 20 May 2024 12:42:57 -0400", "msg_from": "Chapman Flack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reading timestamp values from Datums gives garbage values" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 5/20/24 16:37, Sushrut Shivaswamy wrote:\n>> When trying to read the query response from the Datum, I get garbage values.\n>> I've tried various types and none of them read the correct value.\n\n> TimestampTz is int64, so using DatumGetInt64 is probably the simplest\n> solution. And it's the number of microseconds, so X/1e6 should give you\n> the epoch.\n\nDon't forget that TimestampTz uses an epoch (time zero) of 2000-01-01.\nIf you want a Unix-convention value where the epoch is 1970-01-01,\nyou'll need to add 30 years to the result.\n\nThe reported values seem pretty substantially off, though ---\n5293917674 would be barely an hour and a half later than the\nepoch, which seems unlikely to be the value Sushrut intended\nto test with. 
I suspect a mistake that's outside the fragment\nof code we were shown.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 May 2024 12:44:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reading timestamp values from Datums gives garbage values" }, { "msg_contents": "Hi,\n\n> When trying to read the query response from the Datum, I get garbage values.\n> I've tried various types and none of them read the correct value.\n> ```\n>\n> Datum current_timestamp = SPI_getbinval(SPI_tuptable->vals[i], SPI_tuptable->tupdesc, 5, &isnull);\n>\n> double current_time = DatumGetFloat8(current_timestamp); // prints 0\n>\n> int64 time = DatumGetUint64(current_timestamp); // prints 5293917674\n>\n> ```\n>\n> Can you help me out with the correct way to read EPOCH values from datums?\n\nI don't entirely understand why you are using DatumGetFloat8() /\nDatumGetUint64() and double / int64 types. There are\nDatumGetTimestamp() / DatumGetTimestampTz() and Timestamp /\nTimestampTz.\n\nI recommend using the PostgreSQL code as a source of more examples of\nhow to deal with the given types. The file pg_proc.dat is a good entry\npoint. See also commit 260a1f18 [1] and PostgreSQL documentation [2].\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=260a1f18\n[2]: https://www.postgresql.org/docs/16/xfunc-c.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 21 May 2024 13:20:22 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reading timestamp values from Datums gives garbage values" }, { "msg_contents": "Thank you everyone for your responses.\n\nI was a bit thrown off by the timestamp value the first time I printed it\nby how small it was.\nThe revelation that postgres TimestampTz uses an epoch (time zero) of\n2000-01-01 helped clarify\nthat value would indeed be smaller than regular UNIX epoch.\n\nIn my case I was trying to convert a diff of two timestamps into epoch\nseconds which explains why the value\nwas just 1.5hr. My issue is now resolved.\n\nThanks again\n - Sushrut\n\nThank you everyone for your responses.I was a bit thrown off by the timestamp value the first time I printed it by how small it was.The revelation that postgres TimestampTz uses an epoch (time zero) of 2000-01-01 helped clarifythat value would indeed be smaller than regular UNIX epoch.In my case I was trying to convert a diff of two timestamps into epoch seconds which explains why the valuewas just 1.5hr. My issue is now resolved.Thanks again - Sushrut", "msg_date": "Tue, 21 May 2024 19:00:10 +0530", "msg_from": "Sushrut Shivaswamy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reading timestamp values from Datums gives garbage values" } ]
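Editor's note on the timestamp/Datum thread above, as a concrete sketch of its conclusion: read the timestamptz column itself rather than the EXTRACT(EPOCH ...) expression (on PostgreSQL 14 and later EXTRACT returns numeric, so that result is not a float8 Datum either), treat the value as int64 microseconds since 2000-01-01, and shift it to the Unix epoch. The helper below is illustrative only; the column number and the surrounding SPI loop are assumptions carried over from the original question.

```c
#include "postgres.h"
#include "executor/spi.h"
#include "utils/timestamp.h"      /* DatumGetTimestampTz(), timestamptz_to_time_t() */
#include "datatype/timestamp.h"   /* USECS_PER_SEC, SECS_PER_DAY, *_EPOCH_JDATE */

/* Illustrative helper: convert one timestamptz Datum to Unix-epoch seconds. */
static double
timestamptz_datum_to_epoch_secs(Datum d)
{
    /* timestamptz is stored as int64 microseconds since 2000-01-01 */
    TimestampTz ts = DatumGetTimestampTz(d);

    /* shift from the PostgreSQL epoch (2000-01-01) to the Unix epoch (1970-01-01) */
    int64       unix_usecs = ts +
        ((int64) (POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE)) * SECS_PER_DAY * USECS_PER_SEC;

    return (double) unix_usecs / USECS_PER_SEC;
}

/*
 * Usage inside an SPI loop like the one in the question (column 5 assumed to
 * be timestamptz, i.e. the raw column, not an EXTRACT expression):
 *
 *     bool  isnull;
 *     Datum d = SPI_getbinval(SPI_tuptable->vals[i], SPI_tuptable->tupdesc, 5, &isnull);
 *     if (!isnull)
 *         elog(INFO, "epoch seconds: %.6f", timestamptz_datum_to_epoch_secs(d));
 *
 * If whole seconds are enough, timestamptz_to_time_t(ts) performs the same
 * epoch shift and returns a pg_time_t.
 */
```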
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\nDescription:\n\nThe documentation on this page mentions:\r\n\r\n\"If no column list is specified, any columns added later are automatically\nreplicated.\"\r\n\r\nIt feels ambiguous what this could mean. Does it mean:\r\n\r\n1/ That if you alter the table on the publisher and add a new column, it\nwill be replicated\r\n\r\n2/ If you add a column list later and add a column to it, it will be\nreplicated\r\n\r\nIn both cases, does the subscriber automatically create this column if it\nwasn't there before? I recall reading that the initial data synchronization\nrequires the schema of the publisher database to be created on the\nsubscriber first. But then later updates sync newly created columns? I don't\nrecall any pages on logical replication mentioning this, up to this point.\r\n\r\nRegards,\r\nKoen De Groote", "msg_date": "Mon, 20 May 2024 15:26:27 +0000", "msg_from": "PG Doc comments form <[email protected]>", "msg_from_op": true, "msg_subject": "Ambiguous description on new columns" }, { "msg_contents": "Hi,\n\nLe mar. 21 mai 2024 à 12:40, PG Doc comments form <[email protected]>\na écrit :\n\n> The following documentation comment has been logged on the website:\n>\n> Page:\n> https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> Description:\n>\n> The documentation on this page mentions:\n>\n> \"If no column list is specified, any columns added later are automatically\n> replicated.\"\n>\n> It feels ambiguous what this could mean. Does it mean:\n>\n> 1/ That if you alter the table on the publisher and add a new column, it\n> will be replicated\n>\n> 2/ If you add a column list later and add a column to it, it will be\n> replicated\n>\n> In both cases, does the subscriber automatically create this column if it\n> wasn't there before? I recall reading that the initial data synchronization\n> requires the schema of the publisher database to be created on the\n> subscriber first. But then later updates sync newly created columns? I\n> don't\n> recall any pages on logical replication mentioning this, up to this point.\n>\n>\nIt feels ambiguous. DDL commands are not replicated, so the new columns\ndon't appear automagically on the subscriber. You have to add them to the\nsubscriber. But values of new columns are replicated, whether or not you\nhave added the new columns on the subscriber.\n\nRegards.\n\n\n-- \nGuillaume.\n\nHi,Le mar. 21 mai 2024 à 12:40, PG Doc comments form <[email protected]> a écrit :The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\nDescription:\n\nThe documentation on this page mentions:\n\n\"If no column list is specified, any columns added later are automatically\nreplicated.\"\n\nIt feels ambiguous what this could mean. Does it mean:\n\n1/ That if you alter the table on the publisher and add a new column, it\nwill be replicated\n\n2/ If you add a column list later and add a column to it, it will be\nreplicated\n\nIn both cases, does the subscriber automatically create this column if it\nwasn't there before? I recall reading that the initial data synchronization\nrequires the schema of the publisher database to be created on the\nsubscriber first. But then later updates sync newly created columns? 
I don't\nrecall any pages on logical replication mentioning this, up to this point.\nIt feels ambiguous. DDL commands are not replicated, so the new columns don't appear automagically on the subscriber. You have to add them to the subscriber. But values of new columns are replicated, whether or not you have added the new columns on the subscriber.Regards.-- Guillaume.", "msg_date": "Tue, 21 May 2024 14:43:30 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Tue, May 21, 2024 at 8:40 PM PG Doc comments form\n<[email protected]> wrote:\n>\n> The following documentation comment has been logged on the website:\n>\n> Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> Description:\n>\n> The documentation on this page mentions:\n>\n> \"If no column list is specified, any columns added later are automatically\n> replicated.\"\n>\n> It feels ambiguous what this could mean. Does it mean:\n>\n> 1/ That if you alter the table on the publisher and add a new column, it\n> will be replicated\n>\n> 2/ If you add a column list later and add a column to it, it will be\n> replicated\n>\n> In both cases, does the subscriber automatically create this column if it\n> wasn't there before?\n\nNo, the subscriber will not automatically create the column. That is\nalready clearly said at the top of the same page you linked \"The table\non the subscriber side must have at least all the columns that are\npublished.\"\n\nAll that \"If no column list...\" paragraph was trying to say is:\n\nCREATE PUBLICATION pub FOR TABLE T;\n\nis not quite the same as:\n\nCREATE PUBLICATION pub FOR TABLE T(a,b,c);\n\nThe difference is, in the 1st case if you then ALTER the TABLE T to\nhave a new column 'd' then that will automatically start replicating\nthe 'd' data without having to do anything to either the PUBLICATION\nor the SUBSCRIPTION. Of course, if TABLE T at the subscriber side does\nnot have a column 'd' then you'll get an error because your subscriber\ntable needs to have *at least* all the replicated columns. 
(I\ndemonstrate this error below)\n\nWhereas in the 2nd case, even though you ALTER'ed the TABLE T to have\na new column 'd' then that won't be replicated because 'd' was not\nnamed in the PUBLICATION's column list.\n\n~~~~\n\nHere's an example where you can see this in action\n\nHere is an example of the 1st case -- it shows 'd' is automatically\nreplicated and also shows the subscriber-side error caused by the\nmissing column:\n\ntest_pub=# CREATE TABLE T(a int,b int, c int);\ntest_pub=# CREATE PUBLICATION pub FOR TABLE T;\n\ntest_sub=# CREATE TABLE T(a int,b int, c int);\ntest_sub=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=test_pub' PUBLICATION pub;\n\nSee the replication happening\ntest_pub=# INSERT INTO T VALUES (1,2,3);\ntest_sub=# SELECT * FROM t;\n a | b | c\n---+---+---\n 1 | 2 | 3\n(1 row)\n\nNow alter the publisher table T and insert some new data\ntest_pub=# ALTER TABLE T ADD COLUMN d int;\ntest_pub=# INSERT INTO T VALUES (5,6,7,8);\n\nThis will cause subscription errors like:\n2024-05-22 11:53:19.098 AEST [16226] ERROR: logical replication\ntarget relation \"public.t\" is missing replicated column: \"d\"\n\n~~~~\n\nI think the following small change will remove any ambiguity:\n\nBEFORE\nIf no column list is specified, any columns added later are\nautomatically replicated.\n\nSUGGESTION\nIf no column list is specified, any columns added to the table later\nare automatically replicated.\n\n~~\n\nI attached a small patch to make the above change.\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 22 May 2024 12:26:46 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Tue, May 21, 2024 at 8:40 PM PG Doc comments form\n<[email protected]> wrote:\n>\n> The following documentation comment has been logged on the website:\n>\n> Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> Description:\n>\n> The documentation on this page mentions:\n>\n> \"If no column list is specified, any columns added later are automatically\n> replicated.\"\n>\n> It feels ambiguous what this could mean. Does it mean:\n>\n> 1/ That if you alter the table on the publisher and add a new column, it\n> will be replicated\n>\n> 2/ If you add a column list later and add a column to it, it will be\n> replicated\n>\n> In both cases, does the subscriber automatically create this column if it\n> wasn't there before?\n\nNo, the subscriber will not automatically create the column. That is\nalready clearly said at the top of the same page you linked \"The table\non the subscriber side must have at least all the columns that are\npublished.\"\n\nAll that \"If no column list...\" paragraph was trying to say is:\n\nCREATE PUBLICATION pub FOR TABLE T;\n\nis not quite the same as:\n\nCREATE PUBLICATION pub FOR TABLE T(a,b,c);\n\nThe difference is, in the 1st case if you then ALTER the TABLE T to\nhave a new column 'd' then that will automatically start replicating\nthe 'd' data without having to do anything to either the PUBLICATION\nor the SUBSCRIPTION. Of course, if TABLE T at the subscriber side does\nnot have a column 'd' then you'll get an error because your subscriber\ntable needs to have *at least* all the replicated columns. 
(I\ndemonstrate this error below)\n\nWhereas in the 2nd case, even though you ALTER'ed the TABLE T to have\na new column 'd' then that won't be replicated because 'd' was not\nnamed in the PUBLICATION's column list.\n\n~~~~\n\nHere's an example where you can see this in action\n\nHere is an example of the 1st case -- it shows 'd' is automatically\nreplicated and also shows the subscriber-side error caused by the\nmissing column:\n\ntest_pub=# CREATE TABLE T(a int,b int, c int);\ntest_pub=# CREATE PUBLICATION pub FOR TABLE T;\n\ntest_sub=# CREATE TABLE T(a int,b int, c int);\ntest_sub=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=test_pub' PUBLICATION pub;\n\nSee the replication happening\ntest_pub=# INSERT INTO T VALUES (1,2,3);\ntest_sub=# SELECT * FROM t;\n a | b | c\n---+---+---\n 1 | 2 | 3\n(1 row)\n\nNow alter the publisher table T and insert some new data\ntest_pub=# ALTER TABLE T ADD COLUMN d int;\ntest_pub=# INSERT INTO T VALUES (5,6,7,8);\n\nThis will cause subscription errors like:\n2024-05-22 11:53:19.098 AEST [16226] ERROR: logical replication\ntarget relation \"public.t\" is missing replicated column: \"d\"\n\n~~~~\n\nI think the following small change will remove any ambiguity:\n\nBEFORE\nIf no column list is specified, any columns added later are\nautomatically replicated.\n\nSUGGESTION\nIf no column list is specified, any columns added to the table later\nare automatically replicated.\n\n~~\n\nI attached a small patch to make the above change.\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 22 May 2024 12:47:39 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Tue, May 21, 2024 at 3:40 AM PG Doc comments form <[email protected]>\nwrote:\n\n> The following documentation comment has been logged on the website:\n>\n> Page:\n> https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> Description:\n>\n> The documentation on this page mentions:\n>\n> \"If no column list is specified, any columns added later are automatically\n> replicated.\"\n>\n> It feels ambiguous what this could mean. Does it mean:\n>\n> 1/ That if you alter the table on the publisher and add a new column, it\n> will be replicated\n>\n\nYes, this is the only thing in scope you can \"add columns to later\".\n\n\n> 2/ If you add a column list later and add a column to it, it will be\n> replicated\n>\n\nI feel like we failed somewhere if the reader believes that it is possible\nto alter a publication in this way.\n\nDavid J.\n\nOn Tue, May 21, 2024 at 3:40 AM PG Doc comments form <[email protected]> wrote:The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\nDescription:\n\nThe documentation on this page mentions:\n\n\"If no column list is specified, any columns added later are automatically\nreplicated.\"\n\nIt feels ambiguous what this could mean. Does it mean:\n\n1/ That if you alter the table on the publisher and add a new column, it\nwill be replicatedYes, this is the only thing in scope you can \"add columns to later\".\n\n2/ If you add a column list later and add a column to it, it will be\nreplicatedI feel like we failed somewhere if the reader believes that it is possible to alter a publication in this way.David J.", "msg_date": "Tue, 21 May 2024 20:05:37 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Tue, May 21, 2024 at 7:48 PM Peter Smith <[email protected]> wrote:\n\n>\n> I think the following small change will remove any ambiguity:\n>\n> BEFORE\n> If no column list is specified, any columns added later are\n> automatically replicated.\n>\n> SUGGESTION\n> If no column list is specified, any columns added to the table later\n> are automatically replicated.\n>\n> ~~\n>\n>\nExtended Before:\n\nEach publication can optionally specify which columns of each table are\nreplicated to subscribers. The table on the subscriber side must have at\nleast all the columns that are published. If no column list is specified,\nthen all columns on the publisher are replicated. See CREATE PUBLICATION\nfor details on the syntax.\n\nThe choice of columns can be based on behavioral or performance reasons.\nHowever, do not rely on this feature for security: a malicious subscriber\nis able to obtain data from columns that are not specifically published. If\nsecurity is a consideration, protections can be applied at the publisher\nside.\n\nIf no column list is specified, any columns added later are automatically\nreplicated. This means that having a column list which names all columns is\nnot the same as having no column list at all.\n\nI'd suggest:\n\nEach publication can optionally specify which columns of each table are\nreplicated to subscribers. The table on the subscriber side must have at\nleast all the columns that are published. If no column list is specified,\nthen all columns on the publisher[, present and future,] are replicated.\nSee CREATE PUBLICATION for details on the syntax.\n\n...security...\n\n...delete the entire \"ambiguous\" paragraph...\n\nDavid J.\n\nOn Tue, May 21, 2024 at 7:48 PM Peter Smith <[email protected]> wrote:\nI think the following small change will remove any ambiguity:\n\nBEFORE\nIf no column list is specified, any columns added later are\nautomatically replicated.\n\nSUGGESTION\nIf no column list is specified, any columns added to the table later\nare automatically replicated.\n\n~~Extended Before:Each publication can optionally specify which columns of each table are replicated to subscribers. The table on the subscriber side must have at least all the columns that are published. If no column list is specified, then all columns on the publisher are replicated. See CREATE PUBLICATION for details on the syntax.The choice of columns can be based on behavioral or performance reasons. However, do not rely on this feature for security: a malicious subscriber is able to obtain data from columns that are not specifically published. If security is a consideration, protections can be applied at the publisher side.If no column list is specified, any columns added later are automatically replicated. This means that having a column list which names all columns is not the same as having no column list at all.I'd suggest:Each publication can optionally specify which columns of each table are replicated to subscribers. The table on the subscriber side must have at least all the columns that are published. If no column list is specified, then all columns on the publisher[, present and future,] are replicated. See CREATE PUBLICATION for details on the syntax....security......delete the entire \"ambiguous\" paragraph...David J.", "msg_date": "Tue, 21 May 2024 20:21:32 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Wed, May 22, 2024 at 1:22 PM David G. Johnston\n<[email protected]> wrote:\n>\n> On Tue, May 21, 2024 at 7:48 PM Peter Smith <[email protected]> wrote:\n>>\n>>\n>> I think the following small change will remove any ambiguity:\n>>\n>> BEFORE\n>> If no column list is specified, any columns added later are\n>> automatically replicated.\n>>\n>> SUGGESTION\n>> If no column list is specified, any columns added to the table later\n>> are automatically replicated.\n>>\n>> ~~\n>>\n>\n> Extended Before:\n>\n> Each publication can optionally specify which columns of each table are replicated to subscribers. The table on the subscriber side must have at least all the columns that are published. If no column list is specified, then all columns on the publisher are replicated. See CREATE PUBLICATION for details on the syntax.\n>\n> The choice of columns can be based on behavioral or performance reasons. However, do not rely on this feature for security: a malicious subscriber is able to obtain data from columns that are not specifically published. If security is a consideration, protections can be applied at the publisher side.\n>\n> If no column list is specified, any columns added later are automatically replicated. This means that having a column list which names all columns is not the same as having no column list at all.\n>\n> I'd suggest:\n>\n> Each publication can optionally specify which columns of each table are replicated to subscribers. The table on the subscriber side must have at least all the columns that are published. If no column list is specified, then all columns on the publisher[, present and future,] are replicated. See CREATE PUBLICATION for details on the syntax.\n>\n> ...security...\n>\n> ...delete the entire \"ambiguous\" paragraph...\n>\n\nThe \"ambiguous\" paragraph was trying to make the point that although\n(a) having no column-list at all and\n(b) having a column list that names every table column\n\nstarts off looking and working the same, don't be tricked into\nthinking they are exactly equivalent, because if the table ever gets\nALTERED later then the behaviour of those PUBLICATIONs begins to\ndiffer.\n\n~\n\nYour suggested text doesn't seem quite as explicit about that subtle\npoint, but I guess since you can still infer the same meaning it is\nfine.\n\nBut, maybe say \"all columns on the published table\" instead of \"all\ncolumns on the publisher\".\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 22 May 2024 14:12:48 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Tuesday, May 21, 2024, Peter Smith <[email protected]> wrote:\n\n>\n> >\n> > Each publication can optionally specify which columns of each table are\n> replicated to subscribers. The table on the subscriber side must have at\n> least all the columns that are published. 
If no column list is specified,\n> then all columns on the publisher[, present and future,] are replicated.\n> See CREATE PUBLICATION for details on the syntax.\n> >\n> > ...security...\n> >\n> > ...delete the entire \"ambiguous\" paragraph...\n> >\n>\n> Your suggested text doesn't seem quite as explicit about that subtle\n> point, but I guess since you can still infer the same meaning it is\n> fine.\n\n\nRight, it doesn’t seem that subtle so long as we point out what an absent\ncolumn list means. if you specify a column list you get exactly what you\nasked for.  It’s like listing columns in select.  But if you don’t specify\na column list you get whatever is there at runtime. Which I presume also\nmeans dropped columns no longer get replicated, but I haven’t tested and\nthe docs don’t seem to cover column removal…\n\nIn contrast, if we don’t say this, one might reasonably assume that it\nbehaves like:\nCreate view vw select * from tbl;\nwhen it doesn’t.\n\nSo yes, I do think saying “present and future” sufficiently covers the\nintent of the removed paragraph and clearly ties that to the table columns\nin response to this complaint.\n\n>\n> But, maybe say \"all columns on the published table\" instead of \"all\n> columns on the publisher\".\n>\n\nAgreed.\n\nDavid J.\n\n", "msg_date": "Tue, 21 May 2024 21:31:07 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Wed, 2024-05-22 at 12:47 +1000, Peter Smith wrote:\n> I think the following small change will remove any ambiguity:\n> \n> BEFORE\n> If no column list is specified, any columns added later are\n> automatically replicated.\n> \n> SUGGESTION\n> If no column list is specified, any columns added to the table later\n> are automatically replicated.\n> \n> ~~\n> \n> I attached a small patch to make the above change.\n\n+1 on that change.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 22 May 2024 09:21:10 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Wed, 22 May 2024 at 08:18, Peter Smith <[email protected]> wrote:\n>\n> On Tue, May 21, 2024 at 8:40 PM PG Doc comments form\n> <[email protected]> wrote:\n> >\n> > The following documentation comment has been logged on the website:\n> >\n> > Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> > Description:\n> >\n> > The documentation on this page mentions:\n> >\n> > \"If no column list is specified, any columns added later are automatically\n> > replicated.\"\n> >\n> > It feels ambiguous what this could mean. Does it mean:\n> >\n> > 1/ That if you alter the table on the publisher and add a new column, it\n> > will be replicated\n> >\n> > 2/ If you add a column list later and add a column to it, it will be\n> > replicated\n> >\n> > In both cases, does the subscriber automatically create this column if it\n> > wasn't there before?\n>\n> No, the subscriber will not automatically create the column. That is\n> already clearly said at the top of the same page you linked \"The table\n> on the subscriber side must have at least all the columns that are\n> published.\"\n>\n> All that \"If no column list...\" paragraph was trying to say is:\n>\n> CREATE PUBLICATION pub FOR TABLE T;\n>\n> is not quite the same as:\n>\n> CREATE PUBLICATION pub FOR TABLE T(a,b,c);\n>\n> The difference is, in the 1st case if you then ALTER the TABLE T to\n> have a new column 'd' then that will automatically start replicating\n> the 'd' data without having to do anything to either the PUBLICATION\n> or the SUBSCRIPTION. Of course, if TABLE T at the subscriber side does\n> not have a column 'd' then you'll get an error because your subscriber\n> table needs to have *at least* all the replicated columns. 
(I\n> demonstrate this error below)\n>\n> Whereas in the 2nd case, even though you ALTER'ed the TABLE T to have\n> a new column 'd' then that won't be replicated because 'd' was not\n> named in the PUBLICATION's column list.\n>\n> ~~~~\n>\n> Here's an example where you can see this in action\n>\n> Here is an example of the 1st case -- it shows 'd' is automatically\n> replicated and also shows the subscriber-side error caused by the\n> missing column:\n>\n> test_pub=# CREATE TABLE T(a int,b int, c int);\n> test_pub=# CREATE PUBLICATION pub FOR TABLE T;\n>\n> test_sub=# CREATE TABLE T(a int,b int, c int);\n> test_sub=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=test_pub' PUBLICATION pub;\n>\n> See the replication happening\n> test_pub=# INSERT INTO T VALUES (1,2,3);\n> test_sub=# SELECT * FROM t;\n> a | b | c\n> ---+---+---\n> 1 | 2 | 3\n> (1 row)\n>\n> Now alter the publisher table T and insert some new data\n> test_pub=# ALTER TABLE T ADD COLUMN d int;\n> test_pub=# INSERT INTO T VALUES (5,6,7,8);\n>\n> This will cause subscription errors like:\n> 2024-05-22 11:53:19.098 AEST [16226] ERROR: logical replication\n> target relation \"public.t\" is missing replicated column: \"d\"\n>\n> ~~~~\n>\n> I think the following small change will remove any ambiguity:\n>\n> BEFORE\n> If no column list is specified, any columns added later are\n> automatically replicated.\n>\n> SUGGESTION\n> If no column list is specified, any columns added to the table later\n> are automatically replicated.\n>\n> ~~\n>\n> I attached a small patch to make the above change.\n>\n> Thoughts?\n\nA minor suggestion, the rest looks good:\nIt would enhance clarity to include a line break following \"If no\ncolumn list is specified, any columns added to the table later are\":\n- If no column list is specified, any columns added later are automatically\n+ If no column list is specified, any columns added to the table\nlater are automatically\n replicated. This means that having a column list which names all columns\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 29 May 2024 15:25:52 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Wed, May 29, 2024 at 8:04 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 22 May 2024 at 14:26, Peter Smith <[email protected]> wrote:\n> >\n> > On Tue, May 21, 2024 at 8:40 PM PG Doc comments form\n> > <[email protected]> wrote:\n> > >\n> > > The following documentation comment has been logged on the website:\n> > >\n> > > Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> > > Description:\n> > >\n> > > The documentation on this page mentions:\n> > >\n> > > \"If no column list is specified, any columns added later are automatically\n> > > replicated.\"\n> > >\n> > > It feels ambiguous what this could mean. Does it mean:\n> > >\n> > > 1/ That if you alter the table on the publisher and add a new column, it\n> > > will be replicated\n> > >\n> > > 2/ If you add a column list later and add a column to it, it will be\n> > > replicated\n> > >\n> > > In both cases, does the subscriber automatically create this column if it\n> > > wasn't there before?\n> >\n> > No, the subscriber will not automatically create the column. 
That is\n> > already clearly said at the top of the same page you linked \"The table\n> > on the subscriber side must have at least all the columns that are\n> > published.\"\n> >\n> > All that \"If no column list...\" paragraph was trying to say is:\n> >\n> > CREATE PUBLICATION pub FOR TABLE T;\n> >\n> > is not quite the same as:\n> >\n> > CREATE PUBLICATION pub FOR TABLE T(a,b,c);\n> >\n> > The difference is, in the 1st case if you then ALTER the TABLE T to\n> > have a new column 'd' then that will automatically start replicating\n> > the 'd' data without having to do anything to either the PUBLICATION\n> > or the SUBSCRIPTION. Of course, if TABLE T at the subscriber side does\n> > not have a column 'd' then you'll get an error because your subscriber\n> > table needs to have *at least* all the replicated columns. (I\n> > demonstrate this error below)\n> >\n> > Whereas in the 2nd case, even though you ALTER'ed the TABLE T to have\n> > a new column 'd' then that won't be replicated because 'd' was not\n> > named in the PUBLICATION's column list.\n> >\n> > ~~~~\n> >\n> > Here's an example where you can see this in action\n> >\n> > Here is an example of the 1st case -- it shows 'd' is automatically\n> > replicated and also shows the subscriber-side error caused by the\n> > missing column:\n> >\n> > test_pub=# CREATE TABLE T(a int,b int, c int);\n> > test_pub=# CREATE PUBLICATION pub FOR TABLE T;\n> >\n> > test_sub=# CREATE TABLE T(a int,b int, c int);\n> > test_sub=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=test_pub' PUBLICATION pub;\n> >\n> > See the replication happening\n> > test_pub=# INSERT INTO T VALUES (1,2,3);\n> > test_sub=# SELECT * FROM t;\n> > a | b | c\n> > ---+---+---\n> > 1 | 2 | 3\n> > (1 row)\n> >\n> > Now alter the publisher table T and insert some new data\n> > test_pub=# ALTER TABLE T ADD COLUMN d int;\n> > test_pub=# INSERT INTO T VALUES (5,6,7,8);\n> >\n> > This will cause subscription errors like:\n> > 2024-05-22 11:53:19.098 AEST [16226] ERROR: logical replication\n> > target relation \"public.t\" is missing replicated column: \"d\"\n> >\n> > ~~~~\n> >\n> > I think the following small change will remove any ambiguity:\n> >\n> > BEFORE\n> > If no column list is specified, any columns added later are\n> > automatically replicated.\n> >\n> > SUGGESTION\n> > If no column list is specified, any columns added to the table later\n> > are automatically replicated.\n> >\n> > ~~\n> >\n> > I attached a small patch to make the above change.\n>\n> A small recommendation:\n> It would enhance clarity to include a line break following \"If no\n> column list is specified, any columns added to the table later are\":\n> - If no column list is specified, any columns added later are automatically\n> + If no column list is specified, any columns added to the table\n> later are automatically\n> replicated. This means that having a column list which names all columns\n\nHi Vignesh,\n\nIIUC you're saying my v1 patch *content* and rendering is OK, but you\nonly wanted the SGML text to have better wrapping for < 80 chars\nlines. So I have attached a patch v2 with improved wrapping. 
If you\nmeant something different then please explain.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 30 May 2024 10:50:49 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Thu, 30 May 2024 at 06:21, Peter Smith <[email protected]> wrote:\n>\n> On Wed, May 29, 2024 at 8:04 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 22 May 2024 at 14:26, Peter Smith <[email protected]> wrote:\n> > >\n> > > On Tue, May 21, 2024 at 8:40 PM PG Doc comments form\n> > > <[email protected]> wrote:\n> > > >\n> > > > The following documentation comment has been logged on the website:\n> > > >\n> > > > Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> > > > Description:\n> > > >\n> > > > The documentation on this page mentions:\n> > > >\n> > > > \"If no column list is specified, any columns added later are automatically\n> > > > replicated.\"\n> > > >\n> > > > It feels ambiguous what this could mean. Does it mean:\n> > > >\n> > > > 1/ That if you alter the table on the publisher and add a new column, it\n> > > > will be replicated\n> > > >\n> > > > 2/ If you add a column list later and add a column to it, it will be\n> > > > replicated\n> > > >\n> > > > In both cases, does the subscriber automatically create this column if it\n> > > > wasn't there before?\n> > >\n> > > No, the subscriber will not automatically create the column. That is\n> > > already clearly said at the top of the same page you linked \"The table\n> > > on the subscriber side must have at least all the columns that are\n> > > published.\"\n> > >\n> > > All that \"If no column list...\" paragraph was trying to say is:\n> > >\n> > > CREATE PUBLICATION pub FOR TABLE T;\n> > >\n> > > is not quite the same as:\n> > >\n> > > CREATE PUBLICATION pub FOR TABLE T(a,b,c);\n> > >\n> > > The difference is, in the 1st case if you then ALTER the TABLE T to\n> > > have a new column 'd' then that will automatically start replicating\n> > > the 'd' data without having to do anything to either the PUBLICATION\n> > > or the SUBSCRIPTION. Of course, if TABLE T at the subscriber side does\n> > > not have a column 'd' then you'll get an error because your subscriber\n> > > table needs to have *at least* all the replicated columns. 
(I\n> > > demonstrate this error below)\n> > >\n> > > Whereas in the 2nd case, even though you ALTER'ed the TABLE T to have\n> > > a new column 'd' then that won't be replicated because 'd' was not\n> > > named in the PUBLICATION's column list.\n> > >\n> > > ~~~~\n> > >\n> > > Here's an example where you can see this in action\n> > >\n> > > Here is an example of the 1st case -- it shows 'd' is automatically\n> > > replicated and also shows the subscriber-side error caused by the\n> > > missing column:\n> > >\n> > > test_pub=# CREATE TABLE T(a int,b int, c int);\n> > > test_pub=# CREATE PUBLICATION pub FOR TABLE T;\n> > >\n> > > test_sub=# CREATE TABLE T(a int,b int, c int);\n> > > test_sub=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=test_pub' PUBLICATION pub;\n> > >\n> > > See the replication happening\n> > > test_pub=# INSERT INTO T VALUES (1,2,3);\n> > > test_sub=# SELECT * FROM t;\n> > > a | b | c\n> > > ---+---+---\n> > > 1 | 2 | 3\n> > > (1 row)\n> > >\n> > > Now alter the publisher table T and insert some new data\n> > > test_pub=# ALTER TABLE T ADD COLUMN d int;\n> > > test_pub=# INSERT INTO T VALUES (5,6,7,8);\n> > >\n> > > This will cause subscription errors like:\n> > > 2024-05-22 11:53:19.098 AEST [16226] ERROR: logical replication\n> > > target relation \"public.t\" is missing replicated column: \"d\"\n> > >\n> > > ~~~~\n> > >\n> > > I think the following small change will remove any ambiguity:\n> > >\n> > > BEFORE\n> > > If no column list is specified, any columns added later are\n> > > automatically replicated.\n> > >\n> > > SUGGESTION\n> > > If no column list is specified, any columns added to the table later\n> > > are automatically replicated.\n> > >\n> > > ~~\n> > >\n> > > I attached a small patch to make the above change.\n> >\n> > A small recommendation:\n> > It would enhance clarity to include a line break following \"If no\n> > column list is specified, any columns added to the table later are\":\n> > - If no column list is specified, any columns added later are automatically\n> > + If no column list is specified, any columns added to the table\n> > later are automatically\n> > replicated. This means that having a column list which names all columns\n>\n> Hi Vignesh,\n>\n> IIUC you're saying my v1 patch *content* and rendering is OK, but you\n> only wanted the SGML text to have better wrapping for < 80 chars\n> lines. So I have attached a patch v2 with improved wrapping. 
If you\n> meant something different then please explain.\n\nYes, that is what I meant and the updated patch looks good.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 31 May 2024 08:58:16 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Fri, 31 May 2024 at 08:58, vignesh C <[email protected]> wrote:\n>\n> On Thu, 30 May 2024 at 06:21, Peter Smith <[email protected]> wrote:\n> >\n> > On Wed, May 29, 2024 at 8:04 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Wed, 22 May 2024 at 14:26, Peter Smith <[email protected]> wrote:\n> > > >\n> > > > On Tue, May 21, 2024 at 8:40 PM PG Doc comments form\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > The following documentation comment has been logged on the website:\n> > > > >\n> > > > > Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> > > > > Description:\n> > > > >\n> > > > > The documentation on this page mentions:\n> > > > >\n> > > > > \"If no column list is specified, any columns added later are automatically\n> > > > > replicated.\"\n> > > > >\n> > > > > It feels ambiguous what this could mean. Does it mean:\n> > > > >\n> > > > > 1/ That if you alter the table on the publisher and add a new column, it\n> > > > > will be replicated\n> > > > >\n> > > > > 2/ If you add a column list later and add a column to it, it will be\n> > > > > replicated\n> > > > >\n> > > > > In both cases, does the subscriber automatically create this column if it\n> > > > > wasn't there before?\n> > > >\n> > > > No, the subscriber will not automatically create the column. That is\n> > > > already clearly said at the top of the same page you linked \"The table\n> > > > on the subscriber side must have at least all the columns that are\n> > > > published.\"\n> > > >\n> > > > All that \"If no column list...\" paragraph was trying to say is:\n> > > >\n> > > > CREATE PUBLICATION pub FOR TABLE T;\n> > > >\n> > > > is not quite the same as:\n> > > >\n> > > > CREATE PUBLICATION pub FOR TABLE T(a,b,c);\n> > > >\n> > > > The difference is, in the 1st case if you then ALTER the TABLE T to\n> > > > have a new column 'd' then that will automatically start replicating\n> > > > the 'd' data without having to do anything to either the PUBLICATION\n> > > > or the SUBSCRIPTION. Of course, if TABLE T at the subscriber side does\n> > > > not have a column 'd' then you'll get an error because your subscriber\n> > > > table needs to have *at least* all the replicated columns. 
(I\n> > > > demonstrate this error below)\n> > > >\n> > > > Whereas in the 2nd case, even though you ALTER'ed the TABLE T to have\n> > > > a new column 'd' then that won't be replicated because 'd' was not\n> > > > named in the PUBLICATION's column list.\n> > > >\n> > > > ~~~~\n> > > >\n> > > > Here's an example where you can see this in action\n> > > >\n> > > > Here is an example of the 1st case -- it shows 'd' is automatically\n> > > > replicated and also shows the subscriber-side error caused by the\n> > > > missing column:\n> > > >\n> > > > test_pub=# CREATE TABLE T(a int,b int, c int);\n> > > > test_pub=# CREATE PUBLICATION pub FOR TABLE T;\n> > > >\n> > > > test_sub=# CREATE TABLE T(a int,b int, c int);\n> > > > test_sub=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=test_pub' PUBLICATION pub;\n> > > >\n> > > > See the replication happening\n> > > > test_pub=# INSERT INTO T VALUES (1,2,3);\n> > > > test_sub=# SELECT * FROM t;\n> > > > a | b | c\n> > > > ---+---+---\n> > > > 1 | 2 | 3\n> > > > (1 row)\n> > > >\n> > > > Now alter the publisher table T and insert some new data\n> > > > test_pub=# ALTER TABLE T ADD COLUMN d int;\n> > > > test_pub=# INSERT INTO T VALUES (5,6,7,8);\n> > > >\n> > > > This will cause subscription errors like:\n> > > > 2024-05-22 11:53:19.098 AEST [16226] ERROR: logical replication\n> > > > target relation \"public.t\" is missing replicated column: \"d\"\n> > > >\n> > > > ~~~~\n> > > >\n> > > > I think the following small change will remove any ambiguity:\n> > > >\n> > > > BEFORE\n> > > > If no column list is specified, any columns added later are\n> > > > automatically replicated.\n> > > >\n> > > > SUGGESTION\n> > > > If no column list is specified, any columns added to the table later\n> > > > are automatically replicated.\n> > > >\n> > > > ~~\n> > > >\n> > > > I attached a small patch to make the above change.\n> > >\n> > > A small recommendation:\n> > > It would enhance clarity to include a line break following \"If no\n> > > column list is specified, any columns added to the table later are\":\n> > > - If no column list is specified, any columns added later are automatically\n> > > + If no column list is specified, any columns added to the table\n> > > later are automatically\n> > > replicated. This means that having a column list which names all columns\n> >\n> > Hi Vignesh,\n> >\n> > IIUC you're saying my v1 patch *content* and rendering is OK, but you\n> > only wanted the SGML text to have better wrapping for < 80 chars\n> > lines. So I have attached a patch v2 with improved wrapping. If you\n> > meant something different then please explain.\n>\n> Yes, that is what I meant and the updated patch looks good.\n\nAdding Amit to get his opinion on the same.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 4 Jun 2024 11:09:35 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Fri, May 31, 2024 at 10:54 PM Peter Smith <[email protected]> wrote:\n>\n> On Wed, May 29, 2024 at 8:04 PM vignesh C <[email protected]> wrote:\n> >\n> > > >\n> > > > The following documentation comment has been logged on the website:\n> > > >\n> > > > Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> > > > Description:\n> > > >\n> > > > The documentation on this page mentions:\n> > > >\n> > > > \"If no column list is specified, any columns added later are automatically\n> > > > replicated.\"\n> > > >\n> > > > It feels ambiguous what this could mean. 
Does it mean:\n> > > >\n> > > > 1/ That if you alter the table on the publisher and add a new column, it\n> > > > will be replicated\n> > > >\n> > > > 2/ If you add a column list later and add a column to it, it will be\n> > > > replicated\n> > > >\n> > > > In both cases, does the subscriber automatically create this column if it\n> > > > wasn't there before?\n> > >\n> > > ~~~~\n> > >\n> > > I think the following small change will remove any ambiguity:\n> > >\n> > > BEFORE\n> > > If no column list is specified, any columns added later are\n> > > automatically replicated.\n> > >\n> > > SUGGESTION\n> > > If no column list is specified, any columns added to the table later\n> > > are automatically replicated.\n> > >\n> > > ~~\n> > >\n> > > I attached a small patch to make the above change.\n> >\n> > A small recommendation:\n> > It would enhance clarity to include a line break following \"If no\n> > column list is specified, any columns added to the table later are\":\n> > - If no column list is specified, any columns added later are automatically\n> > + If no column list is specified, any columns added to the table\n> > later are automatically\n> > replicated. This means that having a column list which names all columns\n>\n> Hi Vignesh,\n>\n> IIUC you're saying my v1 patch *content* and rendering is OK, but you\n> only wanted the SGML text to have better wrapping for < 80 chars\n> lines. So I have attached a patch v2 with improved wrapping. If you\n> meant something different then please explain.\n>\n\nYour patch is an improvement. Koen, does the proposed change make\nthings clear to you?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 4 Jun 2024 11:26:53 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Tue, Jun 4, 2024 at 11:26 AM Amit Kapila <[email protected]> wrote:\n>\n> >\n> > IIUC you're saying my v1 patch *content* and rendering is OK, but you\n> > only wanted the SGML text to have better wrapping for < 80 chars\n> > lines. So I have attached a patch v2 with improved wrapping. If you\n> > meant something different then please explain.\n> >\n>\n> Your patch is an improvement. Koen, does the proposed change make\n> things clear to you?\n>\n\nI am planning to push and backpatch the latest patch by Peter Smith\nunless there are any further comments or suggestions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 Jun 2024 14:39:49 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "Yes, this change is clear to me that the \"columns added\" applies to the\ntable on the publisher.\n\nRegards,\nKoen De Groote\n\nOn Tue, Jun 4, 2024 at 7:57 AM Amit Kapila <[email protected]> wrote:\n\n> On Fri, May 31, 2024 at 10:54 PM Peter Smith <[email protected]>\n> wrote:\n> >\n> > On Wed, May 29, 2024 at 8:04 PM vignesh C <[email protected]> wrote:\n> > >\n> > > > >\n> > > > > The following documentation comment has been logged on the website:\n> > > > >\n> > > > > Page:\n> https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> > > > > Description:\n> > > > >\n> > > > > The documentation on this page mentions:\n> > > > >\n> > > > > \"If no column list is specified, any columns added later are\n> automatically\n> > > > > replicated.\"\n> > > > >\n> > > > > It feels ambiguous what this could mean. 
Does it mean:\n> > > > >\n> > > > > 1/ That if you alter the table on the publisher and add a new\n> column, it\n> > > > > will be replicated\n> > > > >\n> > > > > 2/ If you add a column list later and add a column to it, it will\n> be\n> > > > > replicated\n> > > > >\n> > > > > In both cases, does the subscriber automatically create this\n> column if it\n> > > > > wasn't there before?\n> > > >\n> > > > ~~~~\n> > > >\n> > > > I think the following small change will remove any ambiguity:\n> > > >\n> > > > BEFORE\n> > > > If no column list is specified, any columns added later are\n> > > > automatically replicated.\n> > > >\n> > > > SUGGESTION\n> > > > If no column list is specified, any columns added to the table later\n> > > > are automatically replicated.\n> > > >\n> > > > ~~\n> > > >\n> > > > I attached a small patch to make the above change.\n> > >\n> > > A small recommendation:\n> > > It would enhance clarity to include a line break following \"If no\n> > > column list is specified, any columns added to the table later are\":\n> > > - If no column list is specified, any columns added later are\n> automatically\n> > > + If no column list is specified, any columns added to the table\n> > > later are automatically\n> > > replicated. This means that having a column list which names all\n> columns\n> >\n> > Hi Vignesh,\n> >\n> > IIUC you're saying my v1 patch *content* and rendering is OK, but you\n> > only wanted the SGML text to have better wrapping for < 80 chars\n> > lines. So I have attached a patch v2 with improved wrapping. If you\n> > meant something different then please explain.\n> >\n>\n> Your patch is an improvement. Koen, does the proposed change make\n> things clear to you?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nYes, this change is clear to me that the \"columns added\" applies to the table on the publisher.Regards,Koen De GrooteOn Tue, Jun 4, 2024 at 7:57 AM Amit Kapila <[email protected]> wrote:On Fri, May 31, 2024 at 10:54 PM Peter Smith <[email protected]> wrote:\n>\n> On Wed, May 29, 2024 at 8:04 PM vignesh C <[email protected]> wrote:\n> >\n> > > >\n> > > > The following documentation comment has been logged on the website:\n> > > >\n> > > > Page: https://www.postgresql.org/docs/16/logical-replication-col-lists.html\n> > > > Description:\n> > > >\n> > > > The documentation on this page mentions:\n> > > >\n> > > > \"If no column list is specified, any columns added later are automatically\n> > > > replicated.\"\n> > > >\n> > > > It feels ambiguous what this could mean. 
Does it mean:\n> > > >\n> > > > 1/ That if you alter the table on the publisher and add a new column, it\n> > > > will be replicated\n> > > >\n> > > > 2/ If you add a column list later and add a column to it, it will be\n> > > > replicated\n> > > >\n> > > > In both cases, does the subscriber automatically create this column if it\n> > > > wasn't there before?\n> > >\n> > > ~~~~\n> > >\n> > > I think the following small change will remove any ambiguity:\n> > >\n> > > BEFORE\n> > > If no column list is specified, any columns added later are\n> > > automatically replicated.\n> > >\n> > > SUGGESTION\n> > > If no column list is specified, any columns added to the table later\n> > > are automatically replicated.\n> > >\n> > > ~~\n> > >\n> > > I attached a small patch to make the above change.\n> >\n> > A small recommendation:\n> > It would enhance clarity to include a line break following \"If no\n> > column list is specified, any columns added to the table later are\":\n> > -   If no column list is specified, any columns added later are automatically\n> > +   If no column list is specified, any columns added to the table\n> > later are automatically\n> >     replicated. This means that having a column list which names all columns\n>\n> Hi Vignesh,\n>\n> IIUC you're saying my v1 patch *content* and rendering is OK, but you\n> only wanted the SGML text to have better wrapping for < 80 chars\n> lines. So I have attached a patch v2 with improved wrapping. If you\n> meant something different then please explain.\n>\n\nYour patch is an improvement. Koen, does the proposed change make\nthings clear to you?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 7 Jun 2024 11:53:05 +0200", "msg_from": "Koen De Groote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" }, { "msg_contents": "On Fri, Jun 7, 2024 at 3:23 PM Koen De Groote <[email protected]> wrote:\n>\n> Yes, this change is clear to me that the \"columns added\" applies to the table on the publisher.\n>\n\nThanks for the confirmation. I have pushed and backpatched the fix.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Jun 2024 14:11:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ambiguous description on new columns" } ]
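A side note on the thread above: the behaviour Peter Smith demonstrated with psql can also be written as a TAP test in the project's Perl test framework. The sketch below is illustrative only and untested; the node, table, publication, and subscription names are invented for the example, and a real test would also wait for the initial table sync before checking results. It shows the point the thread settles on: with no column list, a column added to the published table later is replicated automatically, so the subscriber-side table must gain that column too.

use strict;
use warnings;
use PostgreSQL::Test::Cluster;
use Test::More;

# Publisher and subscriber nodes (names are arbitrary for this sketch).
my $node_pub = PostgreSQL::Test::Cluster->new('pub');
$node_pub->init(allows_streaming => 'logical');
$node_pub->start;

my $node_sub = PostgreSQL::Test::Cluster->new('sub');
$node_sub->init;
$node_sub->start;

# Same table on both sides; the publication has no column list.
$node_pub->safe_psql('postgres', 'CREATE TABLE t1 (a int, b int, c int)');
$node_sub->safe_psql('postgres', 'CREATE TABLE t1 (a int, b int, c int)');
$node_pub->safe_psql('postgres', 'CREATE PUBLICATION pub1 FOR TABLE t1');

my $connstr = $node_pub->connstr . ' dbname=postgres';
$node_sub->safe_psql('postgres',
	"CREATE SUBSCRIPTION sub1 CONNECTION '$connstr' PUBLICATION pub1");

# A column added to the table later is picked up automatically, so the
# subscriber must also add it, otherwise the apply worker reports
# "missing replicated column" as shown earlier in the thread.
$node_pub->safe_psql('postgres', 'ALTER TABLE t1 ADD COLUMN d int');
$node_sub->safe_psql('postgres', 'ALTER TABLE t1 ADD COLUMN d int');
$node_pub->safe_psql('postgres', 'INSERT INTO t1 VALUES (5, 6, 7, 8)');

$node_pub->wait_for_catchup('sub1');
is($node_sub->safe_psql('postgres', 'SELECT d FROM t1 WHERE a = 5'), '8',
	'column added after CREATE PUBLICATION is replicated');

done_testing();

If the subscriber-side ALTER TABLE is omitted, the result is the "missing replicated column" error from Peter Smith's psql transcript earlier in the thread.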
[ { "msg_contents": "Hello hackers,\n\nPlease look at a bunch of unused variables and a couple of other defects\nI found in the perl code, maybe you'll find them worth fixing:\ncontrib/amcheck/t/001_verify_heapam.pl\n$result # unused since introduction in 866e24d47\nunused sub:\nget_toast_for # not used since 860593ec3\n\ncontrib/amcheck/t/002_cic.pl\n$result # orphaned since 7f580aa5d\n\nsrc/backend/utils/activity/generate-wait_event_types.pl\n$note, $note_name # unused since introduction in fa8892847\n\nsrc/bin/pg_dump/t/003_pg_dump_with_server.pl\n$cmd, $stdout, $stderr, $result # unused since introduction in 2f9eb3132\n\nsrc/bin/pg_dump/t/005_pg_dump_filterfile.pl\n$cmd, $stdout, $stderr, $result # unused since introduciton in a5cf808be\n\nsrc/test/modules/ldap_password_func/t/001_mutated_bindpasswd.pl\n$slapd, $ldap_schema_dir # unused since introduction in 419a8dd81\n\nsrc/test/modules/ssl_passphrase_callback/t/001_testfunc.pl\n$clearpass # orphaned since b846091fd\n\nsrc/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n$ostmt # unused since introduction in 47bb9db75\n\nsrc/test/recovery/t/021_row_visibility.pl\n$ret # unused since introduction in 7b28913bc\n\nsrc/test/recovery/t/032_relfilenode_reuse.pl\n$ret # unused since introduction in e2f65f425\n\nsrc/test/recovery/t/035_standby_logical_decoding.pl\n$stdin, $ret, $slot # unused since introduction in fcd77d532\n$subscriber_stdin, $subscriber_stdout, $subscriber_stderr # unused since introduction in 376dc8205\n\nsrc/test/subscription/t/024_add_drop_pub.pl\ninvalid reference in a comment:\ntab_drop_refresh -> tab_2 # introduced with 1046a69b3\n(invalid since v6-0001-...patch in the commit's thread)\n\nsrc/tools/msvc_gendef.pl\n@def # unused since introduction in 933b46644?\n\nI've attached a patch with all of these changes (tested with meson build\non Windows and check-world on Linux).\n\nBest regards,\nAlexander", "msg_date": "Mon, 20 May 2024 20:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Cleaning up perl code" }, { "msg_contents": "Alexander Lakhin <[email protected]> writes:\n\n> Hello hackers,\n>\n> Please look at a bunch of unused variables and a couple of other defects\n> I found in the perl code, maybe you'll find them worth fixing:\n\nNice cleanup! Did you use some static analysis tool, or did look for\nthem manually? If I add [Variables::ProhibitUnusedVariables] to\nsrc/tools/perlcheck/perlcriticrc, it finds a few more, see the attached\npatch.\n\nThe scripts parsing errcodes.txt really should be refactored into using\na common module, but that's a patch for another day.\n\n- ilmari", "msg_date": "Mon, 20 May 2024 21:39:34 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "Hello Dagfinn,\n\nThank you for paying attention to it and improving the possible fix!\n\n20.05.2024 23:39, Dagfinn Ilmari Mannsåker wrote:\n> Nice cleanup! 
Did you use some static analysis tool, or did look for\n> them manually?\n\nI reviewed my collection of unica I gathered for several months, but had\nfound some of them too minor/requiring more analysis.\nThen I added more with perlcritic's policy UnusedVariables, and also\nchecked for unused subs with a script from blogs.perl.org (and it confirmed\nmy only previous find of that kind).\n\n> If I add [Variables::ProhibitUnusedVariables] to\n> src/tools/perlcheck/perlcriticrc, it finds a few more, see the attached\n> patch.\n\nYes, I saw unused $sqlstates, but decided that they are meaningful enough\nto stay. Though maybe enabling ProhibitUnusedVariables justifies fixing\nthem too.\n\n> The scripts parsing errcodes.txt really should be refactored into using\n> a common module, but that's a patch for another day.\n\nAgree, and I would leave 005_negotiate_encryption.pl (with $node_conf,\n$server_config unused since d39a49c1e) aside for another day too.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 21 May 2024 06:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "On Tue, May 21, 2024 at 06:00:00AM +0300, Alexander Lakhin wrote:\n> I reviewed my collection of unica I gathered for several months, but had\n> found some of them too minor/requiring more analysis.\n> Then I added more with perlcritic's policy UnusedVariables, and also\n> checked for unused subs with a script from blogs.perl.org (and it confirmed\n> my only previous find of that kind).\n\nNice catches from both of you. The two ones in\ngenerate-wait_event_types.pl are caused by me, actually.\n\nNot sure about the changes in the errcodes scripts, though. The\ncurrent state of thing can be also useful when it comes to debugging\nthe parsing, and it does not hurt to keep the parsing rules the same\nacross the board.\n\n>> The scripts parsing errcodes.txt really should be refactored into using\n>> a common module, but that's a patch for another day.\n> \n> Agree, and I would leave 005_negotiate_encryption.pl (with $node_conf,\n> $server_config unused since d39a49c1e) aside for another day too.\n\nI'm not sure about these ones as each one of these scripts have their\nown local tweaks. Now, if there is a cleaner picture with a .pm\nmodule I don't see while reading the whole, why not as long as it\nimproves the code.\n--\nMichael", "msg_date": "Tue, 21 May 2024 14:33:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "On Tue, May 21, 2024 at 02:33:16PM +0900, Michael Paquier wrote:\n> Nice catches from both of you. The two ones in\n> generate-wait_event_types.pl are caused by me, actually.\n> \n> Not sure about the changes in the errcodes scripts, though. The\n> current state of thing can be also useful when it comes to debugging\n> the parsing, and it does not hurt to keep the parsing rules the same\n> across the board.\n\nFor now, I have staged for commit the attached, that handles most of\nthe changes from Alexander (msvc could go for more cleanup?). I'll\nlook at the changes from Dagfinn after that, including if perlcritic\ncould be changed. 
I'll handle the first part when v18 opens up, as\nthat's cosmetic.\n\nThe incorrect comment in 024_add_drop_pub.pl has been fixed as of\n53785d2a2aaa, as that was a separate issue.\n--\nMichael", "msg_date": "Fri, 24 May 2024 14:09:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "On Fri, May 24, 2024 at 02:09:49PM +0900, Michael Paquier wrote:\n> For now, I have staged for commit the attached, that handles most of\n> the changes from Alexander (msvc could go for more cleanup?).\n\nThis one has been applied as of 0c1aca461481 now that v18 is\nopen.\n\n> I'll look at the changes from Dagfinn after that, including if perlcritic\n> could be changed. I'll handle the first part when v18 opens up, as\n> that's cosmetic.\n\nI'm still biased about the second set of changes proposed here,\nthough. ProhibitUnusedVariables would have benefits when writing perl\ncode in terms of clarity because we would avoid useless stuff, but it\nseems to me that we should put more efforts into the unification of\nthe errcodes parsing paths first to have a cleaner long-term picture.\n\nThat's not directly the fault of this proposal that we have the same\nparsing rules spread across three PL languages, so perhaps what's\nproposed is fine as-is, at the end.\n\nAny thoughts or comments from others more familiar with\nProhibitUnusedVariables?\n--\nMichael", "msg_date": "Tue, 2 Jul 2024 10:11:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Fri, May 24, 2024 at 02:09:49PM +0900, Michael Paquier wrote:\n>> For now, I have staged for commit the attached, that handles most of\n>> the changes from Alexander (msvc could go for more cleanup?).\n>\n> This one has been applied as of 0c1aca461481 now that v18 is\n> open.\n>\n>> I'll look at the changes from Dagfinn after that, including if perlcritic\n>> could be changed. I'll handle the first part when v18 opens up, as\n>> that's cosmetic.\n\nFor clarity, I've rebased my addional unused-variable changes (except\nthe errcodes-related ones, see below) onto current master, and split it\ninto separate commits with detailed explaiations for each file file, see\nattached.\n\n> I'm still biased about the second set of changes proposed here,\n> though. ProhibitUnusedVariables would have benefits when writing perl\n> code in terms of clarity because we would avoid useless stuff, but it\n> seems to me that we should put more efforts into the unification of\n> the errcodes parsing paths first to have a cleaner long-term picture.\n>\n> That's not directly the fault of this proposal that we have the same\n> parsing rules spread across three PL languages, so perhaps what's\n> proposed is fine as-is, at the end.\n\nIt turns out there are a couple more places that parse errcodes.txt,\nnamely doc/src/sgml/generate-errcodes-table.pl and\nsrc/backend/utils/generate-errcodes.pl. 
I'll have a go refactoring all\nof these into a common function à la Catalog::ParseHeader() that returns\na data structure these scripts can use as as appropriate.\n\n> Any thoughts or comments from others more familiar with\n> ProhibitUnusedVariables?\n\nRelatedly, I also had a look at prohibiting unused regex captures\n(RegularExpressions::ProhibitUnusedCapture), which found a few real\ncases, but also lots of false positives in Catalog.pm, because it\ndoesn't understand that %+ uses all named captures, so I won't propose a\npatch for that until that's fixed upstream\n(https://github.com/Perl-Critic/Perl-Critic/pull/1065).\n\n- ilmari", "msg_date": "Tue, 02 Jul 2024 13:55:25 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "On 2024-07-02 Tu 8:55 AM, Dagfinn Ilmari Mannsåker wrote:\n> Relatedly, I also had a look at prohibiting unused regex captures\n> (RegularExpressions::ProhibitUnusedCapture), which found a few real\n> cases, but also lots of false positives in Catalog.pm, because it\n> doesn't understand that %+ uses all named captures, so I won't propose a\n> patch for that until that's fixed upstream\n> (https://github.com/Perl-Critic/Perl-Critic/pull/1065).\n>\n\nWe could mark Catalog.pm with a \"## no critic (ProhibitUnusedCapture)\" \nand then use the test elsewhere.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-07-02 Tu 8:55 AM, Dagfinn\n Ilmari Mannsåker wrote:\n\n\n\n\nRelatedly, I also had a look at prohibiting unused regex captures\n(RegularExpressions::ProhibitUnusedCapture), which found a few real\ncases, but also lots of false positives in Catalog.pm, because it\ndoesn't understand that %+ uses all named captures, so I won't propose a\npatch for that until that's fixed upstream\n(https://github.com/Perl-Critic/Perl-Critic/pull/1065).\n\n\n\n\n\nWe could mark Catalog.pm with a \"## no critic (ProhibitUnusedCapture)\" and then use the test elsewhere.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 2 Jul 2024 11:11:22 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n\n> On 2024-07-02 Tu 8:55 AM, Dagfinn Ilmari Mannsåker wrote:\n>> Relatedly, I also had a look at prohibiting unused regex captures\n>> (RegularExpressions::ProhibitUnusedCapture), which found a few real\n>> cases, but also lots of false positives in Catalog.pm, because it\n>> doesn't understand that %+ uses all named captures, so I won't propose a\n>> patch for that until that's fixed upstream\n>> (https://github.com/Perl-Critic/Perl-Critic/pull/1065).\n>>\n>\n> We could mark Catalog.pm with a \"## no critic (ProhibitUnusedCapture)\"\n> and then use the test elsewhere.\n\nYeah, that's what I've done for now. 
Here's a sequence of patches that\nfixes the existing cases of unused captures, and adds the policy and\noverride.\n\nThe seg-validate.pl script seems unused, unmaintained and useless (it\ndoesn't actually match the syntax accepted by seg, specifcially the (+-)\nsyntax (which my patch fixes in passing)), so maybe we should just\ndelete it instead?\n\n> cheers\n>\n> andrew\n\n -ilmari", "msg_date": "Tue, 02 Jul 2024 16:38:01 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "On Tue, Jul 02, 2024 at 01:55:25PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> For clarity, I've rebased my addional unused-variable changes (except\n> the errcodes-related ones, see below) onto current master, and split it\n> into separate commits with detailed explaiations for each file file, see\n> attached.\n\nThanks, I've squashed these three ones into a single commit for now\n(good catches for 005_negotiate_encryption.pl, btw), and applied them.\n--\nMichael", "msg_date": "Wed, 3 Jul 2024 12:44:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n\n> On Tue, Jul 02, 2024 at 01:55:25PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>> For clarity, I've rebased my addional unused-variable changes (except\n>> the errcodes-related ones, see below) onto current master, and split it\n>> into separate commits with detailed explaiations for each file file, see\n>> attached.\n>\n> Thanks, I've squashed these three ones into a single commit for now\n> (good catches for 005_negotiate_encryption.pl, btw), and applied them.\n\nThanks!\n\n- ilmari\n\n\n", "msg_date": "Wed, 03 Jul 2024 14:20:38 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cleaning up perl code" } ]
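To make the perlcritic pieces discussed in this thread concrete, here is a rough sketch of what enabling the two policies and the per-file override could look like. The severity values, file layout and the sample regex are illustrative assumptions, not the configuration that was actually committed; only the policy names and the "## no critic" marker come from the thread itself.

    # Hypothetical additions to the project's perlcriticrc (severity
    # numbers are an assumption):
    [Variables::ProhibitUnusedVariables]
    severity = 5

    [RegularExpressions::ProhibitUnusedCapture]
    severity = 5

    ## no critic (RegularExpressions::ProhibitUnusedCapture)
    # Catalog.pm-style code that trips the capture policy: the named
    # captures are consumed implicitly through %+, which Perl::Critic
    # does not yet understand, hence the file-level override above.
    if ($line =~ /^(?<name>\w+)\s+(?<default>\w+)$/)
    {
        my %attr = %+;    # uses both captures
    }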
[ { "msg_contents": "Many years ago we in effect moved maintenance of the typedefs list for \npgindent into the buildfarm client. The reason was that there were a \nnumber of typedefs that were platform dependent, so we wanted to have \ncoverage across a number of platforms to get a comprehensive list.\n\nLately, this has caused some dissatisfaction, with people wanting the \nlogic for this moved back into core code, among other reasons so we're \nnot reliant on one person - me - for changes. I share this \ndissatisfaction. Indeed, IIRC the use of the buildfarm was originally \nintended as something of a stopgap. Still, we do need to multi-platform \nsupport.\n\nAttached is an attempt to thread this needle. The core is a new perl \nmodule that imports the current buildfarm client logic. The intention is \nthat once we have this, the buildfarm client will switch to using the \nmodule (if found) rather than its own built-in logic. There is precedent \nfor this sort of arrangement (AdjustUpgrade.pm). Accompanying the new \nmodule is a standalone perl script that uses the new module, and \nreplaces the current shell script (thus making it more portable).\n\nOne thing this is intended to provide for is getting typedefs for \nnon-core code such as third party extensions, which isn't entirely \ndifficult \n(<https://adpgtech.blogspot.com/2015/05/running-pgindent-on-non-core-code-or.html>) \nbut it's not as easy as it should be either.\n\nComments welcome.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 20 May 2024 17:11:55 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "tydedef extraction - back to the future" }, { "msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Attached is an attempt to thread this needle. The core is a new perl \n> module that imports the current buildfarm client logic. The intention is \n> that once we have this, the buildfarm client will switch to using the \n> module (if found) rather than its own built-in logic. There is precedent \n> for this sort of arrangement (AdjustUpgrade.pm). Accompanying the new \n> module is a standalone perl script that uses the new module, and \n> replaces the current shell script (thus making it more portable).\n\nHaven't read the code in detail, but +1 for concept. A couple of\nminor quibbles:\n\n* Why not call the wrapper script \"find_typedefs\"? Without the \"s\"\nit seems rather confusing --- \"which typedef is this supposed to\nfind, exactly?\"\n\n* The header comment for sub typedefs seems to have adequate detail\nabout what the arguments are, but that all ought to be propagated\ninto the --help output for the wrapper script. Right now you\ncouldn't figure out how to use the wrapper without reading the\nunderlying module.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 20 May 2024 17:24:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tydedef extraction - back to the future" }, { "msg_contents": "On 20.05.24 23:11, Andrew Dunstan wrote:\n> Attached is an attempt to thread this needle. The core is a new perl \n> module that imports the current buildfarm client logic. The intention is \n> that once we have this, the buildfarm client will switch to using the \n> module (if found) rather than its own built-in logic. There is precedent \n> for this sort of arrangement (AdjustUpgrade.pm). 
Accompanying the new \n> module is a standalone perl script that uses the new module, and \n> replaces the current shell script (thus making it more portable).\n\nIt looks like this code could use a bit of work to modernize and clean \nup cruft, such as\n\n+ my $sep = $using_msvc ? ';' : ':';\n\nThis can be done with File::Spec.\n\n+ next if $bin =~ m!bin/(ipcclean|pltcl_)!;\n\nThose are long gone.\n\n+ next if $bin =~ m!/postmaster.exe$!; # sometimes a \ncopy not a link\n\nAlso gone.\n\n+ elsif ($using_osx)\n+ {\n+ # On OS X, we need to examine the .o files\n\nUpdate the name.\n\n+ # exclude ecpg/test, which pgindent does too\n+ my $obj_wanted = sub {\n+ /^.*\\.o\\z/s\n+ && !($File::Find::name =~ m!/ecpg/test/!s)\n+ && push(@testfiles, $File::Find::name);\n+ };\n+\n+ File::Find::find($obj_wanted, $binloc);\n+ }\n\nNot clear why this is specific to that platform.\n\nAlso, some instructions should be provided. It looks like this is meant \nto be run on the installation tree? A README and/or a build target \nwould be good.\n\nThe code distinguishes between srcdir and bindir, but it's not clear \nwhat the latter is. It rather looks like the installation prefix. Does \nthis code support out of tree builds? This should be cleared up.\n\n\n\n", "msg_date": "Wed, 22 May 2024 13:32:03 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tydedef extraction - back to the future" } ]
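For readers who have not looked at the buildfarm client, the ELF/DWARF path it uses boils down to dumping debug information from the installed binaries and collecting the names attached to DW_TAG_typedef entries. The sketch below is a simplified illustration of that idea only, not the code of the proposed module; the objdump invocation, the parsing of DW_AT_name and the variable names are assumptions.

    # Simplified illustration (not the module's actual code): collect
    # typedef names from the DWARF debug info of a list of binaries.
    my %typedefs;
    foreach my $bin (@binaries)
    {
        my @lines = `objdump -W "$bin" 2>/dev/null`;
        foreach my $i (0 .. $#lines)
        {
            next unless $lines[$i] =~ /DW_TAG_typedef/;

            # the typedef's name appears in a nearby DW_AT_name attribute
            foreach my $fld (@lines[ $i + 1 .. $i + 3 ])
            {
                $typedefs{$1} = 1
                  if defined $fld && $fld =~ /DW_AT_name.*?:\s*(\w+)\s*$/;
            }
        }
    }
    print "$_\n" foreach sort keys %typedefs;

The real logic additionally handles macOS object files and MSVC, which is exactly the multi-platform coverage the thread says the buildfarm was providing.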
[ { "msg_contents": "Earlier today in [1], a bug was reported regarding a problem with the\ncode added in 66c0185a3 where I'd failed to handle the case correctly\nwhere the UNION's targetlist has columns which are not sortable. For\npg_class, that's relfrozenxid, relminmxid and relacl.\n\nThe most minimal reproducer prior to the revert is:\n\nset enable_hashagg=0;\nexplain (costs off) select '123'::xid union select '123'::xid;\n\nThere is still some ongoing discussion about this on the release\nmailing list as per mentioned by Tom in the commit message in\n7204f3591.\n\nAt some point that discussion is going to need to circle back onto\n-hackers again, and since I've already written a patch to fix the\nissue and un-revert Tom's revert. I just wanted a place on -hackers to\nallow that code to be viewed and discussed. I did also post a patch\non [2], but that no longer applies to master due to the revert.\n\nI'll allow the RMT to choose where the outcome of the RMT decision\ngoes. Let this thread be for at least the coding portion of this or\nbe my thread for this patch for the v18 cycle if the RMT rules in\nfavour of keeping that code reverted for v17.\n\nI've attached 2 patches.\n\n0001 is a simple revert of Tom's revert (7204f3591).\n0002 fixes the issue reported by Hubert.\n\nIf anyone wants to have a look, I'd be grateful for that. Tom did\ncall for further review after this being the 4th issue reported for\n66c0185a3.\n\nDavid\n\n[1] https://postgr.es/message-id/Zktzf926vslR35Fv%40depesz.com\n[2] https://www.postgresql.org/message-id/CAApHDvpDQh1NcL7nAsd3YAKj4vgORwesB3GYuNPnEXXRfA2g4w%40mail.gmail.com", "msg_date": "Tue, 21 May 2024 14:58:10 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Path to unreverting \"Allow planner to use Merge Append to efficiently\n implement UNION\"" }, { "msg_contents": "On 21/05/2024 05:58, David Rowley wrote:\n> Let this thread be for at least the coding portion of this or be my\n> thread for this patch for the v18 cycle if the RMT rules in favour of\n> keeping that code reverted for v17.\n> \n> I've attached 2 patches.\n> \n> 0001 is a simple revert of Tom's revert (7204f3591).\n> 0002 fixes the issue reported by Hubert.\n> \n> If anyone wants to have a look, I'd be grateful for that. Tom did\n> call for further review after this being the 4th issue reported for\n> 66c0185a3.\n\nMy planner experience is a bit rusty, but I took a quick look. Looks \ngenerally OK to me. Some comments below:\n\n> +\t/* For for UNIONs (not UNION ALL), try sorting, if sorting is possible */\n\nDuplicated word: \"For for\"\n\n> /*\n> * build_setop_child_paths\n> *\t\tBuild paths for the set op child relation denoted by 'rel'.\n> *\n> * interesting_pathkeys: if not NIL, also include paths that suit these\n> * pathkeys, sorting any unsorted paths as required.\n> * *pNumGroups: if not NULL, we estimate the number of distinct groups\n> *\t\tin the result, and store it there\n\nThe indentation on 'interesting_pathkeys' and '*pNumGroups' is inconsistent.\n\nI have a vague feeling that this comment deserves to be longer. The \nfunction does a lot of things. How is 'child_tlist' different from \nrel->reltarget for example?\n\n'interesting_pathkeys' is modified by the call to \nadd_setop_child_rel_equivalences(): it adds members to the \nEquivalenceClasses of the pathkeys. 
Is that worth mentioning here, or is \nthat obvious to someone who know more about the planner?\n\n> \t\t/*\n> \t\t * Create paths to suit final sort order required for setop_pathkeys.\n> \t\t * Here we'll sort the cheapest input path (if not sorted already) and\n> \t\t * incremental sort any paths which are partially sorted.\n> \t\t */\n> \t\tis_sorted = pathkeys_count_contained_in(setop_pathkeys,\n> \t\t\t\t\t\t\t\t\t\t\t\tsubpath->pathkeys,\n> \t\t\t\t\t\t\t\t\t\t\t\t&presorted_keys);\n> \n> \t\tif (!is_sorted)\n> \t\t{\n\nMaybe also mention that if it's already sorted, it's used as is.\n\nBTW, could the same machinery be used for INTERSECT as well? There was a \nbrief mention of that in the original thread, but I didn't understand \nthe details. Not for v17, but I'm curious. I was wondering if \nbuild_setop_child_paths() should be named build_union_child_paths(), \nsince it's only used with UNIONs, but I guess it could be used as is for \nINTERSECT too.\n\n\n# Testing\n\npostgres=# begin; create table foo as select i from generate_series(1, \n1000000) i; create index on foo (i); commit;\nBEGIN\nSELECT 1000000\nCREATE INDEX\nCOMMIT\npostgres=# set enable_seqscan=off;\nSET\npostgres=# explain (select 1 as i union select i from foo) order by i;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------\n Unique (cost=144370.89..149370.89 rows=1000001 width=4)\n -> Sort (cost=144370.89..146870.89 rows=1000001 width=4)\n Sort Key: (1)\n -> Append (cost=0.00..31038.44 rows=1000001 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> Index Only Scan using foo_i_idx on foo \n(cost=0.42..26038.42 rows=1000000 width=4)\n(6 rows)\n\nI'm disappointed it couldn't produce a MergeAppend plan. If you replace \nthe \"union\" with \"union all\" you do get a MergeAppend.\n\nSome more cases where I hoped for a MergeAppend:\n\npostgres=# explain (select i, 'foo' from foo union select i, 'foo' from \nfoo) order by 1;\n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------\n Unique (cost=380767.54..395767.54 rows=2000000 width=36)\n -> Sort (cost=380767.54..385767.54 rows=2000000 width=36)\n Sort Key: foo.i, ('foo'::text)\n -> Append (cost=0.42..62076.85 rows=2000000 width=36)\n -> Index Only Scan using foo_i_idx on foo \n(cost=0.42..26038.42 rows=1000000 width=36)\n -> Index Only Scan using foo_i_idx on foo foo_1 \n(cost=0.42..26038.42 rows=1000000 width=36)\n(6 rows)\n\n\npostgres=# explain (select 'foo', i from foo union select 'bar', i from \nfoo) order by 1;\n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------\n Unique (cost=380767.54..395767.54 rows=2000000 width=36)\n -> Sort (cost=380767.54..385767.54 rows=2000000 width=36)\n Sort Key: ('foo'::text), foo.i\n -> Append (cost=0.42..62076.85 rows=2000000 width=36)\n -> Index Only Scan using foo_i_idx on foo \n(cost=0.42..26038.42 rows=1000000 width=36)\n -> Index Only Scan using foo_i_idx on foo foo_1 \n(cost=0.42..26038.42 rows=1000000 width=36)\n(6 rows)\n\n\nThe following two queries are the same from the user's point of view, \nbut one is written using WITH:\n\npostgres=# explain (select i from foo union (select 1::int order by 1) \nunion select i from foo) order by 1;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------\n Unique (cost=326083.66..336083.67 rows=2000001 
width=4)\n -> Sort (cost=326083.66..331083.67 rows=2000001 width=4)\n Sort Key: foo.i\n -> Append (cost=0.42..62076.87 rows=2000001 width=4)\n -> Index Only Scan using foo_i_idx on foo \n(cost=0.42..26038.42 rows=1000000 width=4)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> Index Only Scan using foo_i_idx on foo foo_1 \n(cost=0.42..26038.42 rows=1000000 width=4)\n(7 rows)\n\npostgres=# explain with x (i) as (select 1::int order by 1) (select i \nfrom foo union select i from x union select i from foo) order by 1;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------\n Unique (cost=0.89..82926.54 rows=2000001 width=4)\n -> Merge Append (cost=0.89..77926.54 rows=2000001 width=4)\n Sort Key: foo.i\n -> Index Only Scan using foo_i_idx on foo \n(cost=0.42..26038.42 rows=1000000 width=4)\n -> Sort (cost=0.02..0.03 rows=1 width=4)\n Sort Key: (1)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> Index Only Scan using foo_i_idx on foo foo_1 \n(cost=0.42..26038.42 rows=1000000 width=4)\n(8 rows)\n\nI would've expected a MergeAppend in both cases.\n\n\nNone of these test cases are broken as such, you just don't get the \nbenefit of the optimization. I suspect they might all have the same root \ncause, as they all involve constants in the target list. I think that's \na pretty common use case of UNION though.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 21 May 2024 14:48:19 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Path to unreverting \"Allow planner to use Merge Append to\n efficiently implement UNION\"" }, { "msg_contents": "On 2024-May-21, David Rowley wrote:\n\n> I've attached 2 patches.\n> \n> 0001 is a simple revert of Tom's revert (7204f3591).\n> 0002 fixes the issue reported by Hubert.\n\nI would like to request that you don't keep 0001's message as you have\nit here. It'd be more readable to take 66c0185a3d14's whole commit\nmessage with a small suffix like \"try 2\" in the commit title, and add an\nadditional second paragraph stating it was transiently reverted by\n7204f35919b7. Otherwise it's harder to make sense of the commit on its\nown later.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 21 May 2024 14:34:56 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Path to unreverting \"Allow planner to use Merge Append to\n efficiently implement UNION\"" }, { "msg_contents": "On Wed, 22 May 2024 at 00:35, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-May-21, David Rowley wrote:\n>\n> > I've attached 2 patches.\n> >\n> > 0001 is a simple revert of Tom's revert (7204f3591).\n> > 0002 fixes the issue reported by Hubert.\n>\n> I would like to request that you don't keep 0001's message as you have\n> it here. It'd be more readable to take 66c0185a3d14's whole commit\n> message with a small suffix like \"try 2\" in the commit title, and add an\n> additional second paragraph stating it was transiently reverted by\n> 7204f35919b7. Otherwise it's harder to make sense of the commit on its\n> own later.\n\nThanks for having a look. I was planning to have the commit message\nas per attached. I'd only split the patch for ease of review per\nrequest of Tom. 
I should have mentioned that here.\n\nI would adjust the exact wording in the final paragraph as required\ndepending on what plan materialises.\n\nThis also fixes up the comment stuff that Heikki mentioned.\n\nDavid", "msg_date": "Wed, 22 May 2024 00:44:32 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Path to unreverting \"Allow planner to use Merge Append to\n efficiently implement UNION\"" }, { "msg_contents": "On Tue, May 21, 2024 at 8:44 AM David Rowley <[email protected]> wrote:\n> Thanks for having a look. I was planning to have the commit message\n> as per attached. I'd only split the patch for ease of review per\n> request of Tom. I should have mentioned that here.\n>\n> I would adjust the exact wording in the final paragraph as required\n> depending on what plan materialises.\n>\n> This also fixes up the comment stuff that Heikki mentioned.\n\nThe consensus on pgsql-release was to unrevert this patch and commit\nthe fix now, rather than waiting for the next beta. However, the\nconsensus was also to push the un-revert as a separate commit from the\nbug fix, rather than together as suggested by Álvaro. Since time is\nshort due to the impending release and it's very late where you are,\nI've taken care of this. Hope that's OK.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 May 2024 13:36:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Path to unreverting \"Allow planner to use Merge Append to\n efficiently implement UNION\"" }, { "msg_contents": "On Wed, 22 May 2024 at 05:36, Robert Haas <[email protected]> wrote:\n> The consensus on pgsql-release was to unrevert this patch and commit\n> the fix now, rather than waiting for the next beta. However, the\n> consensus was also to push the un-revert as a separate commit from the\n> bug fix, rather than together as suggested by Álvaro. Since time is\n> short due to the impending release and it's very late where you are,\n> I've taken care of this. Hope that's OK.\n\nThanks for handling that. It's much appreciated.\n\nDavid\n\n\n", "msg_date": "Wed, 22 May 2024 08:37:23 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Path to unreverting \"Allow planner to use Merge Append to\n efficiently implement UNION\"" }, { "msg_contents": "(Thanks for your review. I'm sorry I didn't have time and energy to\nrespond properly until now)\n\nOn Tue, 21 May 2024 at 23:48, Heikki Linnakangas <[email protected]> wrote:\n> BTW, could the same machinery be used for INTERSECT as well? There was a\n> brief mention of that in the original thread, but I didn't understand\n> the details. Not for v17, but I'm curious. I was wondering if\n> build_setop_child_paths() should be named build_union_child_paths(),\n> since it's only used with UNIONs, but I guess it could be used as is for\n> INTERSECT too.\n\nI'd previously thought about that, but when I thought about it I'd\nconsidered getting rid of the SetOp Intersect and replacing with a\njoin. To do that my conclusion was that we'd first need to improve\njoins using IS NOT DISTINCT FROM, as that's the behaviour we need for\ncorrect setop NULL handling. However, on relooking, I see that we\ncould still use SetOp Intersect with the flags injected into the\ntargetlist and get sorted results to it via Merge Append rather than\nAppend. 
That might require better Const handling than what's in the\npatch today due to the 1/0 flag that gets added to the subquery tlist.\nI was unsure how much trouble to go to for INTERSECT. I spent about 7\nyears in a job writing queries and don't recall ever feeling the need\nto use INTERSECT. I did use EXCEPT, however... like at least once.\nI'll probably circle back to it one day. People maybe don't use it\nbecause it's so terribly optimised.\n\n> # Testing\n>\n> postgres=# begin; create table foo as select i from generate_series(1,\n> 1000000) i; create index on foo (i); commit;\n> BEGIN\n> SELECT 1000000\n> CREATE INDEX\n> COMMIT\n> postgres=# set enable_seqscan=off;\n> SET\n> postgres=# explain (select 1 as i union select i from foo) order by i;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------\n> Unique (cost=144370.89..149370.89 rows=1000001 width=4)\n> -> Sort (cost=144370.89..146870.89 rows=1000001 width=4)\n> Sort Key: (1)\n> -> Append (cost=0.00..31038.44 rows=1000001 width=4)\n> -> Result (cost=0.00..0.01 rows=1 width=4)\n> -> Index Only Scan using foo_i_idx on foo\n> (cost=0.42..26038.42 rows=1000000 width=4)\n> (6 rows)\n>\n> I'm disappointed it couldn't produce a MergeAppend plan. If you replace\n> the \"union\" with \"union all\" you do get a MergeAppend.\n>\n> Some more cases where I hoped for a MergeAppend:\n\nI've not looked again in detail, but there was some discussion along\nthese lines in [1]. I think the problem is down to how we remove\nredundant PathKeys when the EquivalenceClass has a Const. There can\nonly be 1 value, so no need for a PathKey to represent that. The\nproblem with that comes with lack of equivalence visibility through\nsubqueries. The following demonstrates:\n\ncreate table ab(a int, b int, primary key(a,b));\nset enable_seqscan=0;\nset enable_bitmapscan=0;\n\nexplain (costs off) select * from (select * from ab where a=1 order by\nb) order by a,b;\n QUERY PLAN\n-------------------------------------------\n Sort\n Sort Key: ab.a, ab.b\n -> Index Only Scan using ab_pkey on ab\n Index Cond: (a = 1)\n(4 rows)\n\nexplain (costs off) select * from (select * from ab where a=1 order by\nb) order by b;\n QUERY PLAN\n-------------------------------------\n Index Only Scan using ab_pkey on ab\n Index Cond: (a = 1)\n(2 rows)\n\nBecause the subquery only publishes that it's ordered by \"b\", the\nouter query thinks it needs to sort on \"a,b\". 
That's a wasted effort\nsince the subquery has an equivalence class for \"a\" with a constant.\nThe outer query doesn't know that.\n\n> postgres=# explain (select i, 'foo' from foo union select i, 'foo' from\n> foo) order by 1;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------\n> Unique (cost=380767.54..395767.54 rows=2000000 width=36)\n> -> Sort (cost=380767.54..385767.54 rows=2000000 width=36)\n> Sort Key: foo.i, ('foo'::text)\n> -> Append (cost=0.42..62076.85 rows=2000000 width=36)\n> -> Index Only Scan using foo_i_idx on foo\n> (cost=0.42..26038.42 rows=1000000 width=36)\n> -> Index Only Scan using foo_i_idx on foo foo_1\n> (cost=0.42..26038.42 rows=1000000 width=36)\n> (6 rows)\n>\n>\n> postgres=# explain (select 'foo', i from foo union select 'bar', i from\n> foo) order by 1;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------\n> Unique (cost=380767.54..395767.54 rows=2000000 width=36)\n> -> Sort (cost=380767.54..385767.54 rows=2000000 width=36)\n> Sort Key: ('foo'::text), foo.i\n> -> Append (cost=0.42..62076.85 rows=2000000 width=36)\n> -> Index Only Scan using foo_i_idx on foo\n> (cost=0.42..26038.42 rows=1000000 width=36)\n> -> Index Only Scan using foo_i_idx on foo foo_1\n> (cost=0.42..26038.42 rows=1000000 width=36)\n> (6 rows)\n\nThis isn't great. I think it's for the same reason as mentioned above.\nI didn't test, but I think the patch in [1] should fix it. I need to\nspend more time on it before proposing it for v18. It adds some\npossibly expensive lookups and requires recursively searching\nPathKeys. It's quite complex and needs more study.\n\n> The following two queries are the same from the user's point of view,\n> but one is written using WITH:\n>\n> postgres=# explain (select i from foo union (select 1::int order by 1)\n> union select i from foo) order by 1;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------\n> Unique (cost=326083.66..336083.67 rows=2000001 width=4)\n> -> Sort (cost=326083.66..331083.67 rows=2000001 width=4)\n> Sort Key: foo.i\n> -> Append (cost=0.42..62076.87 rows=2000001 width=4)\n> -> Index Only Scan using foo_i_idx on foo\n> (cost=0.42..26038.42 rows=1000000 width=4)\n> -> Result (cost=0.00..0.01 rows=1 width=4)\n> -> Index Only Scan using foo_i_idx on foo foo_1\n> (cost=0.42..26038.42 rows=1000000 width=4)\n> (7 rows)\n>\n> postgres=# explain with x (i) as (select 1::int order by 1) (select i\n> from foo union select i from x union select i from foo) order by 1;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------\n> Unique (cost=0.89..82926.54 rows=2000001 width=4)\n> -> Merge Append (cost=0.89..77926.54 rows=2000001 width=4)\n> Sort Key: foo.i\n> -> Index Only Scan using foo_i_idx on foo\n> (cost=0.42..26038.42 rows=1000000 width=4)\n> -> Sort (cost=0.02..0.03 rows=1 width=4)\n> Sort Key: (1)\n> -> Result (cost=0.00..0.01 rows=1 width=4)\n> -> Index Only Scan using foo_i_idx on foo foo_1\n> (cost=0.42..26038.42 rows=1000000 width=4)\n> (8 rows)\n>\n> I would've expected a MergeAppend in both cases.\n\nThat's surprising. I don't have an answer without debugging and I\ncan't quite motivate myself to do that right now for this patch.\n\n> None of these test cases are broken as such, you just don't get the\n> benefit of the optimization. 
I suspect they might all have the same root\n> cause, as they all involve constants in the target list. I think that's\n> a pretty common use case of UNION though.\n\nIt's true that there are quite a few things left on the table here. I\nthink the refactoring work that has been done moves some of the\nbarriers away for future improvements. There just wasn't enough time\nto get those done for v17. I hope to get some time and energy for it\nin v18. I'm just thankful that you found no bugs. If you do happen to\nfind any, I can tell you a good time not to report them! :)\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqo1rV8O4pMU2-22iTASBXgnm4kbHF6A8_VMqiDR3hG8A@mail.gmail.com\n\n\n", "msg_date": "Wed, 22 May 2024 15:05:20 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Path to unreverting \"Allow planner to use Merge Append to\n efficiently implement UNION\"" } ]
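For anyone who wants to see both sides of this thread in a psql session, the reproducer for the reported bug and one case where the un-reverted code is expected to help are collected below. The commented plan shape is only the rough expectation with the patch in place; exact costs, row counts and alias names will differ.

    -- The minimal bug reproducer from the start of the thread (xid has
    -- no sort operators, which is what the follow-up fix has to handle):
    set enable_hashagg = 0;
    explain (costs off) select '123'::xid union select '123'::xid;

    -- A case where the optimisation should kick in: both inputs are
    -- already ordered by the index, so UNION can be implemented with
    -- Merge Append + Unique instead of Append + Sort + Unique.
    create table foo as select i from generate_series(1, 1000000) i;
    create index on foo (i);
    set enable_seqscan = off;

    explain (costs off)
    (select i from foo union select i from foo) order by i;
    --  Unique
    --    ->  Merge Append
    --          Sort Key: foo.i
    --          ->  Index Only Scan using foo_i_idx on foo
    --          ->  Index Only Scan using foo_i_idx on foo foo_1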
[ { "msg_contents": "hi.\n\nhttps://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP\n<<<start_quote\nThe regexp_replace function provides substitution of new text for\nsubstrings that match POSIX regular expression patterns. It has the\nsyntax regexp_replace(source, pattern, replacement [, start [, N ]] [,\nflags ]). (Notice that N cannot be specified unless start is, but\nflags can be given in any case.) The source string is returned\nunchanged if there is no match to the pattern. If there is a match,\nthe source string is returned with the replacement string substituted\nfor the matching substring. The replacement string can contain \\n,\nwhere n is 1 through 9, to indicate that the source substring matching\nthe n'th parenthesized subexpression of the pattern should be\ninserted, and it can contain \\& to indicate that the substring\nmatching the entire pattern should be inserted.\n<<<end_quote\n\n<<\nThe replacement string can contain \\n, where n is 1 through 9,\nto indicate that the source substring matching the n'th parenthesized\nsubexpression of the pattern should be inserted\n<<\ni think it explained example like:\nSELECT regexp_replace('foobarbaz', 'b(..)', 'X\\1Y', 'g');\n\nbut it does not seem to explain cases like:\nSELECT regexp_replace('foobarbaz', 'b(..)', 'X\\2Y', 'g');\n?\n\n\nI think it means that 'b(..)', (..) the parenthesized subexpression is\n1, the whole expression is (n+1) parenthesized subexpression.\nso it is equivalent to\nSELECT regexp_replace('foobarbaz', 'b..', 'XY', 'g');\n\n\n", "msg_date": "Tue, 21 May 2024 11:24:07 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "doc regexp_replace replacement string \\n does not explained properly" }, { "msg_contents": "On Monday, May 20, 2024, jian he <[email protected]> wrote:\n\n> hi.\n>\n> https://www.postgresql.org/docs/current/functions-\n> matching.html#FUNCTIONS-POSIX-REGEXP\n>\n>\n> If there is a match,\n> the source string is returned with the replacement string substituted\n> for the matching substring.\n\n\n>\nThis happens regardless of the presence of parentheses.\n\n\n>\n> The replacement string can contain \\n,\n> where n is 1 through 9, to indicate that the source substring matching\n> the n'th parenthesized subexpression of the pattern should be\n> inserted, and it can contain \\& to indicate that the substring\n> matching the entire pattern should be inserted.\n\n\n Then if the replacement text contains “\\n” expressions those are replaced\nwith text captured from the corresponding parentheses group.\n\n\n> <<\n> i think it explained example like:\n> SELECT regexp_replace('foobarbaz', 'b(..)', 'X\\1Y', 'g');\n\n\nglobal - find two matches to process.\n\nfoobarbaz\nfooX\\1YX\\1Y\nfooXarYXazY\n\n\n>\n> but it does not seem to explain cases like:\n> SELECT regexp_replace('foobarbaz', 'b(..)', 'X\\2Y', 'g');\n>\n>\nfoobarbaz\nfooX\\2YX\\2Y\nfooX{empty string, no second capture group}YX{empty}Y\nfooXYXY\n\nThe docs are correct, though I suppose being explicit that a missing\ncapture group results in an empty string substitution instead of an error\nis probably warranted.\n\nDavid J.\n\nOn Monday, May 20, 2024, jian he <[email protected]> wrote:hi.\n\nhttps://www.postgresql.org/docs/current/functions-matching.html#FUNCTIONS-POSIX-REGEXP If there is a match,\nthe source string is returned with the replacement string substituted\nfor the matching substring.This happens regardless of the presence of parentheses. 
The replacement string can contain \\n,\nwhere n is 1 through 9, to indicate that the source substring matching\nthe n'th parenthesized subexpression of the pattern should be\ninserted, and it can contain \\& to indicate that the substring\nmatching the entire pattern should be inserted. Then if the replacement text contains “\\n” expressions those are replaced with text captured from the corresponding parentheses group.\n<<\ni think it explained example like:\nSELECT regexp_replace('foobarbaz', 'b(..)', 'X\\1Y', 'g');global - find two matches to process.foobarbazfooX\\1YX\\1YfooXarYXazY \n\nbut it does not seem to explain cases like:\nSELECT regexp_replace('foobarbaz', 'b(..)', 'X\\2Y', 'g');foobarbazfooX\\2YX\\2YfooX{empty string, no second capture group}YX{empty}YfooXYXYThe docs are correct, though I suppose being explicit that a missing capture group results in an empty string substitution instead of an error is probably warranted.David J.", "msg_date": "Mon, 20 May 2024 20:44:04 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc regexp_replace replacement string \\n does not explained\n properly" } ]
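Putting the examples from this exchange side by side may make the behaviour easier to see at a glance; the commented results follow directly from the explanation above.

    SELECT regexp_replace('foobarbaz', 'b(..)', 'X\1Y', 'g');
    -- fooXarYXazY   (\1 is the text captured by the first group, per match)

    SELECT regexp_replace('foobarbaz', 'b(..)', 'X\2Y', 'g');
    -- fooXYXY       (no second group exists, so \2 becomes an empty string)

    SELECT regexp_replace('foobarbaz', 'b..', 'XY', 'g');
    -- fooXYXY       (equivalent to the \2 case above)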
[ { "msg_contents": "Hello,\n\nApologies if this is the wrong place to make such a request.\n\nIt would be useful to have the ability to prevent clients from setting whatever isolation levels they choose. Specifically, it would be desirable to enforce SERIALIZABLE for all transactions since the serializable guarantee only holds if other transactions also use the isolation level.\n\nEdgeDB which is built on Postgres only allows the SERIALIZABLE isolation level, for example.\n\nThanks\n\n\n", "msg_date": "Tue, 21 May 2024 00:06:38 -0400", "msg_from": "\"Tihrd Reed\" <[email protected]>", "msg_from_op": true, "msg_subject": "Feature request: limiting isolation level choices" } ]
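PostgreSQL currently has no facility to forbid a client from choosing its own isolation level; the closest existing mechanism is setting the default, which a session can still override. A sketch of that status quo follows; the database and role names are invented for illustration.

    -- Make SERIALIZABLE the default for a database and a role
    -- (names are illustrative):
    ALTER DATABASE appdb SET default_transaction_isolation = 'serializable';
    ALTER ROLE app_user SET default_transaction_isolation = 'serializable';

    -- ...but nothing stops a client from doing this, which is exactly
    -- what the request would like to be able to prevent:
    BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;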
[ { "msg_contents": "Hi, all\n\nI want to add some columns of int(or Oid) array and declare GIN index for it in catalog when bootstrap.\n\nBut current catalogs all use btree, I tried to declare a GIN index but failed, ex:\npg_class.h\n```\nCATALOG(pg_class\n...\nInt32 \tmy_new_column[1] BKI_DEFAULT(_null_);\n...\n} FormData_pg_class;\n\nDECLARE_INDEX(pg_class_my_index, 7200, on pg_class using gin(my_new_column array_ops));\n#define ClassMYIndexId 7200\n```\nBut this failed when init in heap_form_tuple().\n\nI could use SQL to create GIN index column for user tables.\n\nBut is it possible to declare array column with GIN index in catalog when bootstrap?\n\nThanks.\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\n\n\n\n\n Hi, all\n\nI want to add some columns of int(or Oid) array and declare GIN index for it in catalog when bootstrap.\n\nBut current catalogs all use btree, I tried to declare a GIN index but failed, ex:\npg_class.h\n```\nCATALOG(pg_class\n...\nInt32 \tmy_new_column[1] BKI_DEFAULT(_null_); \n...\n} FormData_pg_class;\n\n\n\nDECLARE_INDEX(pg_class_my_index, 7200, on pg_class using gin(my_new_column array_ops));\n#define ClassMYIndexId 7200\n```\nBut this failed when init in heap_form_tuple().\n\nI could use SQL to create GIN index column for user tables.\n\nBut is it possible to declare array column with GIN index in catalog when bootstrap?\n\nThanks.\n\n\nZhang Mingli\nwww.hashdata.xyz", "msg_date": "Tue, 21 May 2024 17:27:31 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": true, "msg_subject": "How to declare GIN index on array type column when bootstrap?" } ]
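No answer to the bootstrap part of the question appears in the thread. For contrast, the SQL-level route the author says does work for user tables looks roughly like the following; the table and column names are invented for illustration.

    -- Works outside bootstrap: a GIN index on an integer-array column.
    CREATE TABLE my_tab (id oid, members int4[]);
    CREATE INDEX my_tab_members_gin ON my_tab USING gin (members array_ops);

    -- Queries like this can then use the GIN index:
    SELECT id FROM my_tab WHERE members @> ARRAY[42];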
[ { "msg_contents": "Hello Hackers,\n\nIf the user tries to open the relation in RangeVar and NoLock mode\ncalling *table_openrv(relation,\nNoLock), *it will internally call relation_openrv()-->relation_open().\nIn *relation_open()\n*we checking the Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES); ,\nhere we expecting the lockmode is *NoLock or greater than that*, but in\nsame function again we checking this assert case Assert(lockmode != NoLock\n|| IsBootstrapProcessingMode() || CheckRelationLockedByMe(r,\nAccessShareLock, true)); , here we are expecting (*lockmode != NoLock) *,\nso why are there two cases that contradict? and What if the user tries to\nopen the relation in NoLock mode? and that will definitely cause the assert\nfailure, Suppose the user who writes some extension and reads some relation\noid that is constant, and wants to acquire NoLock?, need some clarification\non this.\n\nThanks & Regards\nPradeep\n\nHello Hackers,If the user tries to open the relation in RangeVar and NoLock mode calling table_openrv(relation, NoLock), it will internally call relation_openrv()-->relation_open(). In relation_open() we checking the Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES); , here we expecting the lockmode is NoLock or greater than that, but in same function again we checking this assert case Assert(lockmode != NoLock || IsBootstrapProcessingMode() || CheckRelationLockedByMe(r, AccessShareLock, true)); , here we are expecting (lockmode != NoLock) , so why are there two cases that contradict?  and What if the user tries to open the relation in NoLock mode? and that will definitely cause the assert failure, Suppose the user who writes some extension and reads some relation oid that is constant, and wants to acquire NoLock?, need some clarification on this.Thanks & RegardsPradeep", "msg_date": "Tue, 21 May 2024 19:28:21 +0530", "msg_from": "Pradeep Kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Possible Bug in relation_open" }, { "msg_contents": "On Tue, May 21, 2024 at 9:58 AM Pradeep Kumar <[email protected]> wrote:\n> If the user tries to open the relation in RangeVar and NoLock mode calling table_openrv(relation, NoLock), it will internally call relation_openrv()-->relation_open(). In relation_open() we checking the Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES); , here we expecting the lockmode is NoLock or greater than that, but in same function again we checking this assert case Assert(lockmode != NoLock || IsBootstrapProcessingMode() || CheckRelationLockedByMe(r, AccessShareLock, true)); , here we are expecting (lockmode != NoLock) , so why are there two cases that contradict? and What if the user tries to open the relation in NoLock mode? and that will definitely cause the assert failure, Suppose the user who writes some extension and reads some relation oid that is constant, and wants to acquire NoLock?, need some clarification on this.\n\nYou need to acquire a lock. 
Otherwise, the relcache entry could change\nunderneath you while you're accessing it, which would result in\nPostgreSQL crashing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 21 May 2024 10:44:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible Bug in relation_open" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, May 21, 2024 at 9:58 AM Pradeep Kumar <[email protected]> wrote:\n>> If the user tries to open the relation in RangeVar and NoLock mode calling table_openrv(relation, NoLock), it will internally call relation_openrv()-->relation_open(). In relation_open() we checking the Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES); , here we expecting the lockmode is NoLock or greater than that, but in same function again we checking this assert case Assert(lockmode != NoLock || IsBootstrapProcessingMode() || CheckRelationLockedByMe(r, AccessShareLock, true)); , here we are expecting (lockmode != NoLock) , so why are there two cases that contradict? and What if the user tries to open the relation in NoLock mode? and that will definitely cause the assert failure, Suppose the user who writes some extension and reads some relation oid that is constant, and wants to acquire NoLock?, need some clarification on this.\n\n> You need to acquire a lock. Otherwise, the relcache entry could change\n> underneath you while you're accessing it, which would result in\n> PostgreSQL crashing.\n\nTo clarify: the rule is that it's only allowed to pass NoLock if you\nknow for certain that some suitable lock on that relation is already\nheld by the current query. That's why these conditions are complicated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 21 May 2024 11:12:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible Bug in relation_open" } ]
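Restating the answers as code: an extension should normally pass a real lock level, and NoLock is only safe when a suitable lock is already held by the current query. The fragment below is an illustrative sketch, not taken from any particular extension.

    /* Illustrative sketch only: open with a real lock, close with the same. */
    Relation    rel;

    rel = table_openrv(relation, AccessShareLock);

    /* ... read whatever is needed from the relation ... */

    table_close(rel, AccessShareLock);

    /*
     * Passing NoLock is only legitimate when the current query is already
     * known to hold a suitable lock on the relation, which is what the
     * second assertion in relation_open() verifies via
     * CheckRelationLockedByMe().
     */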
[ { "msg_contents": "Hi hackers,\n\nI'd submit an implementation of multi-key sort for review. Please see the\ncode as attachment. Thanks for your reponse in advance.\n\n\nOverview\n--------\n\nMKsort (multi-key sort) is an alternative of standard qsort algorithm,\nwhich has better performance for particular sort scenarios, i.e. the data\nset has multiple keys to be sorted.\n\nThe implementation is based on the paper:\nJon L. Bentley and Robert Sedgewick, \"Fast Algorithms for Sorting and\nSearching Strings\", Jan 1997 [1]\n\nMKsort is applied only for tuple sort by the patch. Theoretically it can\nbe applied for general-purpose sort scenario when there are multiple sort\nkeys available, but it is relatively difficult in practice because kind of\nunique interface is needed to manipulate the keys. So I limit the usage of\nmksort to sort SortTuple.\n\nComparing to classic quick sort, it can get significant performance\nimprovement once multiple keys are available. A rough test shows it got\n~129% improvement than qsort for ORDER BY on 6 keys, and ~52% for CREATE\nINDEX on the same data set. (See more details in section \"Performance\nTest\")\n\nAuthor: Yao Wang <[email protected]>\nCo-author: Hongxu Ma <[email protected]>\n\nScope\n-----\n\nThe interface of mksort is pretty simple: in tuplesort_sort_memtuples(),\nmksort_tuple() is invoked instead of qsort_tuple() if mksort is applicable.\nThe major logic in mksort_tuple() is to apply mksort algorithm on\nSortTuple, and kind of callback mechanism is used to handle\nsort-variant-specific issue, e.g. comparing different datums, like\nqsort_tuple() does. It also handles the complexity of \"abbreviated keys\".\n\nA small difference from classic mksort algorithm is: for IndexTuple, when\nall the columns are equal, an additional comparing based on ItemPointer\nis performed to determine the order. It is to make the result consistent\nto existing qsort.\n\nI did consider about implementing mksort by the approach of kind of\ntemplate mechanism like qsort (see sort_template.h), but it seems\nunnecessary because all concrete tuple types need to be handled are\nderived from SortTuple. Use callback to isolate type specific features\nis good enough.\n\nNote that not all tuple types are supported by mksort. Please see the\ncomments inside tuplesort_sort_memtuples().\n\nTest Cases\n----------\n\nThe changes of test cases include:\n\n* Generally, mksort should generate result exactly same to qsort. However\nsome test cases don't. The reason is that SQL doesn't specify order on\nall possible columns, e.g. \"select c1, c2 from t1 order by c1\" will\ngenerate different results between mksort/qsort when c1 values are equal,\nand the solution is to order c2 as well (\"select c1, c2 from t1 order by\nc1, c2\"). (e.g. geometry)\n* Some cases need to be updated to display the new sort method \"multi-key\nsort\" in explain result. (e.g. 
incremental_sort)\n* regress/tuplesort was updated with new cases to cover some scenarios of\nmksort.\n\nPerformance Test\n----------------\n\nThe script I used to configure the build:\n\nCFLAGS=\"-O3 -fargument-noalias-global -fno-omit-frame-pointer -g\"\n./configure --prefix=$PGHOME --with-pgport=5432 --with-perl --with-openssl\n--with-python --with-pam --with-blocksize=16 --with-wal-blocksize=16\n--with-perl --enable-tap-tests --with-gssapi --with-ldap\n\nI used the script for a rough test for ORDER BY:\n\n\\timing on\ncreate table t1 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\ninsert into t1 values (generate_series(1,499999), 0, 0, 0, 0, \n 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb');\nupdate t1 set c2 = c1 % 100, c3 = c1 % 50, c4 = c1 % 10, c5 = c1 % 3;\nupdate t1 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n|| (c1 % 5)::text;\n\n-- Use a large work mem to ensure the entire sort happens in memory\nset work_mem='1GB';\n\n-- switch between qsort/mksort\nset enable_mk_sort=off;\n\nexplain analyze select c1 from t1 order by c6, c5, c4, c3, c2, c1;\n\nResults:\n\nmksort:\n1341.283 ms (00:01.341)\n1379.098 ms (00:01.379)\n1369.868 ms (00:01.370)\n\nqsort:\n3137.277 ms (00:03.137)\n3147.771 ms (00:03.148)\n3131.887 ms (00:03.132)\n\nThe perf improvement is ~129%.\n\nAnother perf test for CREATE INDEX:\n\ncreate index idx_t1_mk on t3 (c6, c5, c4, c3, c2, c1);\n\nResults:\n\nmksort:\n1147.207 ms (00:01.147)\n1200.501 ms (00:01.201)\n1235.657 ms (00:01.236)\n\nQsort:\n1852.957 ms (00:01.853)\n1824.209 ms (00:01.824)\n1808.781 ms (00:01.809)\n\nThe perf improvement is ~52%.\n\nAnother test is to use one of queries of TPC-H:\n\nset work_mem='1GB';\n\n-- query rewritten from TPCH-Q1, and there are 6001215 rows in lineitem\nexplain analyze select\n l_returnflag,l_linestatus,l_quantity,l_shipmode\nfrom\n lineitem\nwhere\n l_shipdate <= date'1998-12-01' - interval '65 days'\norder by\n l_returnflag,l_linestatus,l_quantity,l_shipmode;\n\nResult:\n\nQsort:\n14582.626 ms\n14524.188 ms\n14524.111 ms\n\nmksort:\n11390.891 ms\n11647.065 ms\n11546.791 ms\n\nThe perf improvement is ~25.8%.\n\n[1] https://www.cs.tufts.edu/~nr/cs257/archive/bob-sedgewick/fast-strings.pdf\n[2] https://www.tpc.org/tpch/\n\n\nThanks,\n\nYao Wang", "msg_date": "Wed, 22 May 2024 12:48:23 +0000", "msg_from": "Wang Yao <[email protected]>", "msg_from_op": true, "msg_subject": "An implementation of multi-key sort" }, { "msg_contents": "On 22/05/2024 15:48, Wang Yao wrote:\n> Comparing to classic quick sort, it can get significant performance\n> improvement once multiple keys are available. A rough test shows it got\n> ~129% improvement than qsort for ORDER BY on 6 keys, and ~52% for CREATE\n> INDEX on the same data set. (See more details in section \"Performance\n> Test\")\n\nImpressive. Did you test the performance of the cases where MK-sort \ndoesn't help, to check if there is a performance regression?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 22 May 2024 18:29:34 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An implementation of multi-key sort" }, { "msg_contents": "No obvious perf regression is expected because PG will follow original\r\nqsort code path when mksort is disabled. 
For the case, the only extra\r\ncost is the check in tuplesort_sort_memtuples() to enter mksort code path.\r\n\r\nIt's also proved by the experiment today:\r\n\r\nMksort disabled:\r\n2949.287 ms\r\n2955.258 ms\r\n2947.262 ms\r\n\r\nNo mksort code:\r\n2947.094 ms\r\n2946.419 ms\r\n2953.215 ms\r\n\r\nAlmost the same.\r\n\r\nI also updated code with small enhancements. Please see the latest code\r\nas attachment.\r\n\r\n\r\nThanks,\r\n\r\nYao Wang\r\n________________________________\r\n发件人: Heikki Linnakangas <[email protected]>\r\n发送时间: 2024年5月22日 23:29\r\n收件人: Wang Yao <[email protected]>; PostgreSQL Hackers <[email protected]>\r\n抄送: [email protected] <[email protected]>\r\n主题: Re: An implementation of multi-key sort\r\n\r\nOn 22/05/2024 15:48, Wang Yao wrote:\r\n> Comparing to classic quick sort, it can get significant performance\r\n> improvement once multiple keys are available. A rough test shows it got\r\n> ~129% improvement than qsort for ORDER BY on 6 keys, and ~52% for CREATE\r\n> INDEX on the same data set. (See more details in section \"Performance\r\n> Test\")\r\n\r\nImpressive. Did you test the performance of the cases where MK-sort\r\ndoesn't help, to check if there is a performance regression?\r\n\r\n--\r\nHeikki Linnakangas\r\nNeon (https://neon.tech)", "msg_date": "Thu, 23 May 2024 12:39:06 +0000", "msg_from": "Wang Yao <[email protected]>", "msg_from_op": true, "msg_subject": "=?gb2312?B?u9i4tDogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "On 23/05/2024 15:39, Wang Yao wrote:\n> No obvious perf regression is expected because PG will follow original\n> qsort code path when mksort is disabled. For the case, the only extra\n> cost is the check in tuplesort_sort_memtuples() to enter mksort code path.\n\nAnd what about the case the mksort is enabled, but it's not effective \nbecause all leading keys are different?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 23 May 2024 15:47:29 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "When all leading keys are different, mksort will finish the entire sort at the\nfirst sort key and never touch other keys. 
For the case, mksort falls back to\nkind of qsort actually.\n\nI created another data set with distinct values in all sort keys:\n\ncreate table t2 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\ninsert into t2 values (generate_series(1,499999), 0, 0, 0, 0, '');\nupdate t2 set c2 = 999990 - c1, c3 = 999991 - c1, c4 = 999992 - c1, c5\n= 999993 - c1;\nupdate t2 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n || (999994 - c1)::text;\nexplain analyze select c1 from t2 order by c6, c5, c4, c3, c2, c1;\n\nResults:\n\nMKsort:\n12374.427 ms\n12528.068 ms\n12554.718 ms\n\nqsort:\n12251.422 ms\n12279.938 ms\n12280.254 ms\n\nMKsort is a bit slower than qsort, which can be explained by extra\nchecks of MKsort.\n\nYao Wang\n\nOn Fri, May 24, 2024 at 8:36 PM Wang Yao <[email protected]> wrote:\n>\n>\n>\n> 获取Outlook for Android\n> ________________________________\n> From: Heikki Linnakangas <[email protected]>\n> Sent: Thursday, May 23, 2024 8:47:29 PM\n> To: Wang Yao <[email protected]>; PostgreSQL Hackers <[email protected]>\n> Cc: [email protected] <[email protected]>\n> Subject: Re: 回复: An implementation of multi-key sort\n>\n> On 23/05/2024 15:39, Wang Yao wrote:\n> > No obvious perf regression is expected because PG will follow original\n> > qsort code path when mksort is disabled. For the case, the only extra\n> > cost is the check in tuplesort_sort_memtuples() to enter mksort code path.\n>\n> And what about the case the mksort is enabled, but it's not effective\n> because all leading keys are different?\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.\n\n\n", "msg_date": "Fri, 24 May 2024 20:50:54 +0800", "msg_from": "Yao Wang <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "I added two optimizations to mksort which exist on qsort_tuple():\n\n1. When selecting pivot, always pick the item in the middle of array but\nnot by random. Theoretically it has the same effect to old approach, but\nit can eliminate some unstable perf test results, plus a bit perf benefit by\nremoving random value generator.\n2. Always check whether the array is ordered already, and return\nimmediately if it is. The pre-ordered check requires extra cost and\nimpacts perf numbers on some data sets, but can improve perf\nsignificantly on other data sets.\n\nBy now, mksort has perf results equal or better than qsort on all data\nsets I ever used.\n\nI also updated test case. 
Please see v3 code as attachment.\n\nPerf test results:\n\nData set 1 (with mass duplicate values):\n-----------------------------------------\n\ncreate table t1 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\ninsert into t1 values (generate_series(1,499999), 0, 0, 0, 0,\n'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb');\nupdate t1 set c2 = c1 % 100, c3 = c1 % 50, c4 = c1 % 10, c5 = c1 % 3;\nupdate t1 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n|| (c1 % 5)::text;\n\nQuery 1:\n\nexplain analyze select c1 from t1 order by c6, c5, c4, c3, c2, c1;\n\nDisable Mksort\n\n3021.636 ms\n3014.669 ms\n3033.588 ms\n\nEnable Mksort\n\n1688.590 ms\n1686.956 ms\n1688.567 ms\n\nThe improvement is 78.9%, which is reduced from the previous version\n(129%). The most cost should be the pre-ordered check.\n\nQuery 2:\n\ncreate index idx_t1_mk on t1 (c6, c5, c4, c3, c2, c1);\n\nDisable Mksort\n\n1674.648 ms\n1680.608 ms\n1681.373 ms\n\nEnable Mksort\n\n1143.341 ms\n1143.462 ms\n1143.894 ms\n\nThe improvement is ~47%, which is also reduced a bit (52%).\n\nData set 2 (with distinct values):\n----------------------------------\n\ncreate table t2 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\ninsert into t2 values (generate_series(1,499999), 0, 0, 0, 0, '');\nupdate t2 set c2 = 999990 - c1, c3 = 999991 - c1, c4 = 999992 - c1, c5\n= 999993 - c1;\nupdate t2 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n|| (999994 - c1)::text;\n\nQuery 1:\n\nexplain analyze select c1 from t2 order by c6, c5, c4, c3, c2, c1;\n\nDisable Mksort\n\n12199.963 ms\n12197.068 ms\n12191.657 ms\n\nEnable Mksort\n\n9538.219 ms\n9571.681 ms\n9536.335 ms\n\nThe improvement is 27.9%, which is much better than the old approach (-6.2%).\n\nQuery 2 (the data is pre-ordered):\n\nexplain analyze select c1 from t2 order by c6 desc, c5, c4, c3, c2, c1;\n\nEnable Mksort\n\n768.191 ms\n768.079 ms\n767.026 ms\n\nDisable Mksort\n\n768.757 ms\n766.166 ms\n766.149 ms\n\nThey are almost the same since no actual sort was performed, and much\nbetter than the old approach (-1198.1%).\n\n\nThanks,\n\nYao Wang\n\nOn Fri, May 24, 2024 at 8:50 PM Yao Wang <[email protected]> wrote:\n>\n> When all leading keys are different, mksort will finish the entire sort at the\n> first sort key and never touch other keys. 
For the case, mksort falls back to\n> kind of qsort actually.\n>\n> I created another data set with distinct values in all sort keys:\n>\n> create table t2 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\n> insert into t2 values (generate_series(1,499999), 0, 0, 0, 0, '');\n> update t2 set c2 = 999990 - c1, c3 = 999991 - c1, c4 = 999992 - c1, c5\n> = 999993 - c1;\n> update t2 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n> || (999994 - c1)::text;\n> explain analyze select c1 from t2 order by c6, c5, c4, c3, c2, c1;\n>\n> Results:\n>\n> MKsort:\n> 12374.427 ms\n> 12528.068 ms\n> 12554.718 ms\n>\n> qsort:\n> 12251.422 ms\n> 12279.938 ms\n> 12280.254 ms\n>\n> MKsort is a bit slower than qsort, which can be explained by extra\n> checks of MKsort.\n>\n> Yao Wang\n>\n> On Fri, May 24, 2024 at 8:36 PM Wang Yao <[email protected]> wrote:\n> >\n> >\n> >\n> > 获取Outlook for Android\n> > ________________________________\n> > From: Heikki Linnakangas <[email protected]>\n> > Sent: Thursday, May 23, 2024 8:47:29 PM\n> > To: Wang Yao <[email protected]>; PostgreSQL Hackers <[email protected]>\n> > Cc: [email protected] <[email protected]>\n> > Subject: Re: 回复: An implementation of multi-key sort\n> >\n> > On 23/05/2024 15:39, Wang Yao wrote:\n> > > No obvious perf regression is expected because PG will follow original\n> > > qsort code path when mksort is disabled. For the case, the only extra\n> > > cost is the check in tuplesort_sort_memtuples() to enter mksort code path.\n> >\n> > And what about the case the mksort is enabled, but it's not effective\n> > because all leading keys are different?\n> >\n> > --\n> > Heikki Linnakangas\n> > Neon (https://neon.tech)\n> >\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.", "msg_date": "Fri, 31 May 2024 20:09:53 +0800", "msg_from": "Yao Wang <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "To be accurate, \"multi-key sort\" includes both \"multi-key quick sort\"\nand \"multi-key heap sort\". This patch includes code change related to\nonly \"multi-key quick sort\" which is used to replace standard quick\nsort for tuplesort. The \"multi-key heap sort\" is about an implementation\nof multi-key heap and should be treated as a separated task. We need\nto clarify the naming to avoid confusion.\n\nI updated code which is related to only function/var renaming and\nrelevant comments, plus some minor assertions changes. Please see the\nattachment.\n\n\nThanks,\n\nYao Wang\n\nOn Fri, May 31, 2024 at 8:09 PM Yao Wang <[email protected]> wrote:\n>\n> I added two optimizations to mksort which exist on qsort_tuple():\n>\n> 1. 
When selecting pivot, always pick the item in the middle of array but\n> not by random. Theoretically it has the same effect to old approach, but\n> it can eliminate some unstable perf test results, plus a bit perf benefit by\n> removing random value generator.\n> 2. Always check whether the array is ordered already, and return\n> immediately if it is. The pre-ordered check requires extra cost and\n> impacts perf numbers on some data sets, but can improve perf\n> significantly on other data sets.\n>\n> By now, mksort has perf results equal or better than qsort on all data\n> sets I ever used.\n>\n> I also updated test case. Please see v3 code as attachment.\n>\n> Perf test results:\n>\n> Data set 1 (with mass duplicate values):\n> -----------------------------------------\n>\n> create table t1 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\n> insert into t1 values (generate_series(1,499999), 0, 0, 0, 0,\n> 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb');\n> update t1 set c2 = c1 % 100, c3 = c1 % 50, c4 = c1 % 10, c5 = c1 % 3;\n> update t1 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n> || (c1 % 5)::text;\n>\n> Query 1:\n>\n> explain analyze select c1 from t1 order by c6, c5, c4, c3, c2, c1;\n>\n> Disable Mksort\n>\n> 3021.636 ms\n> 3014.669 ms\n> 3033.588 ms\n>\n> Enable Mksort\n>\n> 1688.590 ms\n> 1686.956 ms\n> 1688.567 ms\n>\n> The improvement is 78.9%, which is reduced from the previous version\n> (129%). The most cost should be the pre-ordered check.\n>\n> Query 2:\n>\n> create index idx_t1_mk on t1 (c6, c5, c4, c3, c2, c1);\n>\n> Disable Mksort\n>\n> 1674.648 ms\n> 1680.608 ms\n> 1681.373 ms\n>\n> Enable Mksort\n>\n> 1143.341 ms\n> 1143.462 ms\n> 1143.894 ms\n>\n> The improvement is ~47%, which is also reduced a bit (52%).\n>\n> Data set 2 (with distinct values):\n> ----------------------------------\n>\n> create table t2 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\n> insert into t2 values (generate_series(1,499999), 0, 0, 0, 0, '');\n> update t2 set c2 = 999990 - c1, c3 = 999991 - c1, c4 = 999992 - c1, c5\n> = 999993 - c1;\n> update t2 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n> || (999994 - c1)::text;\n>\n> Query 1:\n>\n> explain analyze select c1 from t2 order by c6, c5, c4, c3, c2, c1;\n>\n> Disable Mksort\n>\n> 12199.963 ms\n> 12197.068 ms\n> 12191.657 ms\n>\n> Enable Mksort\n>\n> 9538.219 ms\n> 9571.681 ms\n> 9536.335 ms\n>\n> The improvement is 27.9%, which is much better than the old approach (-6.2%).\n>\n> Query 2 (the data is pre-ordered):\n>\n> explain analyze select c1 from t2 order by c6 desc, c5, c4, c3, c2, c1;\n>\n> Enable Mksort\n>\n> 768.191 ms\n> 768.079 ms\n> 767.026 ms\n>\n> Disable Mksort\n>\n> 768.757 ms\n> 766.166 ms\n> 766.149 ms\n>\n> They are almost the same since no actual sort was performed, and much\n> better than the old approach (-1198.1%).\n>\n>\n> Thanks,\n>\n> Yao Wang\n>\n> On Fri, May 24, 2024 at 8:50 PM Yao Wang <[email protected]> wrote:\n> >\n> > When all leading keys are different, mksort will finish the entire sort at the\n> > first sort key and never touch other keys. 
For the case, mksort falls back to\n> > kind of qsort actually.\n> >\n> > I created another data set with distinct values in all sort keys:\n> >\n> > create table t2 (c1 int, c2 int, c3 int, c4 int, c5 int, c6 varchar(100));\n> > insert into t2 values (generate_series(1,499999), 0, 0, 0, 0, '');\n> > update t2 set c2 = 999990 - c1, c3 = 999991 - c1, c4 = 999992 - c1, c5\n> > = 999993 - c1;\n> > update t2 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n> > || (999994 - c1)::text;\n> > explain analyze select c1 from t2 order by c6, c5, c4, c3, c2, c1;\n> >\n> > Results:\n> >\n> > MKsort:\n> > 12374.427 ms\n> > 12528.068 ms\n> > 12554.718 ms\n> >\n> > qsort:\n> > 12251.422 ms\n> > 12279.938 ms\n> > 12280.254 ms\n> >\n> > MKsort is a bit slower than qsort, which can be explained by extra\n> > checks of MKsort.\n> >\n> > Yao Wang\n> >\n> > On Fri, May 24, 2024 at 8:36 PM Wang Yao <[email protected]> wrote:\n> > >\n> > >\n> > >\n> > > 获取Outlook for Android\n> > > ________________________________\n> > > From: Heikki Linnakangas <[email protected]>\n> > > Sent: Thursday, May 23, 2024 8:47:29 PM\n> > > To: Wang Yao <[email protected]>; PostgreSQL Hackers <[email protected]>\n> > > Cc: [email protected] <[email protected]>\n> > > Subject: Re: 回复: An implementation of multi-key sort\n> > >\n> > > On 23/05/2024 15:39, Wang Yao wrote:\n> > > > No obvious perf regression is expected because PG will follow original\n> > > > qsort code path when mksort is disabled. For the case, the only extra\n> > > > cost is the check in tuplesort_sort_memtuples() to enter mksort code path.\n> > >\n> > > And what about the case the mksort is enabled, but it's not effective\n> > > because all leading keys are different?\n> > >\n> > > --\n> > > Heikki Linnakangas\n> > > Neon (https://neon.tech)\n> > >\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.", "msg_date": "Fri, 7 Jun 2024 21:59:55 +0800", "msg_from": "Yao Wang <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "Hello Yao,\n\nI was interested in the patch, considering the promise of significant\nspeedups of sorting, so I took a quick look and did some basic perf\ntesting today. Unfortunately, my benchmarks don't really confirm any\npeformance benefits, so I haven't looked at the code very much and only\nhave some very basic feedback:\n\n1) The new GUC is missing from the .sample config, triggering a failure\nof \"make check-world\". Fixed by 0002.\n\n2) There's a place mixing tabs/spaces in indentation. 
Fixed by 0003.\n\n3) I tried running pgindent, mostly to see how that would affect the\ncomments, and for most it's probably fine, but a couple are mangled\n(usually those with a numbered list of items). Might needs some changes\nto use formatting that's not reformatted like this. The changes from\npgindent are in 0004, but this is not a fix - it just shows the changes\nafter running pgindent.\n\nNow, regarding the performance tests - I decided to do the usual black\nbox testing, i.e. generate tables with varying numbers of columns, data\ntypes, different data distribution (random, correlated, ...) and so on.\nAnd then run simple ORDER BY queries on that, measuring timing with and\nwithout mk-sort, and checking the effect.\n\nSo I wrote a simple bash script (attached) that does exactly that - it\ngenerates a table with 1k - 10M rows, fills with with data (with some\nbasic simple data distributions), and then runs the queries.\n\nThe raw results are too large to attach, I'm only attaching a PDF\nshowing the summary with a \"speedup heatmap\" - it's a pivot with the\nparameters on the left, and then the GUC and number on columns on top.\nSo the first group of columns is with enable_mk_sort=off, the second\ngroup with enable_mk_sort=on, and finally the heatmap with relative\ntiming (enable_mk_sort=on / enable_mk_sort=off).\n\nSo values <100% mean it got faster (green color - good), and values\n>100% mean it got slower (red - bad). And the thing is - pretty much\neverything is red, often in the 200%-300% range, meaning it got 2x-3x\nslower. There's only very few combinations where it got faster. That\ndoes not seem very promising ... but maybe I did something wrong?\n\nAfter seeing this, I took a look at your example again, which showed\nsome nice speedups. But it seems very dependent on the order of keys in\nthe ORDER BY clause. 
For example consider this:\n\nset enable_mk_sort = on;\nexplain (analyze, timing off)\nselect * from t1 order by c6, c5, c4, c3, c2, c1;\n\n QUERY PLAN\n-------------------------------------------------------------------\n Sort (cost=72328.81..73578.81 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Sort Key: c6, c5, c4, c3, c2, c1\n Sort Method: quicksort Memory: 59163kB\n -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Planning Time: 0.054 ms\n Execution Time: 1095.183 ms\n(6 rows)\n\nset enable_mk_sort = on;\nexplain (analyze, timing off)\nselect * from t1 order by c6, c5, c4, c3, c2, c1;\n\n QUERY PLAN\n-------------------------------------------------------------------\n Sort (cost=72328.81..73578.81 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Sort Key: c6, c5, c4, c3, c2, c1\n Sort Method: multi-key quick sort Memory: 59163kB\n -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Planning Time: 0.130 ms\n Execution Time: 633.635 ms\n(6 rows)\n\nWhich seems great, but let's reverse the sort keys:\n\nset enable_mk_sort = off;\nexplain (analyze, timing off)\nselect * from t1 order by c1, c2, c3, c4, c5, c6;\n\n QUERY PLAN\n-------------------------------------------------------------------\n\n Sort (cost=72328.81..73578.81 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Sort Key: c1, c2, c3, c4, c5, c6\n Sort Method: quicksort Memory: 59163kB\n -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Planning Time: 0.146 ms\n Execution Time: 170.085 ms\n(6 rows)\n\nset enable_mk_sort = off;\nexplain (analyze, timing off)\nselect * from t1 order by c1, c2, c3, c4, c5, c6;\n\n QUERY PLAN\n-------------------------------------------------------------------\n Sort (cost=72328.81..73578.81 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Sort Key: c1, c2, c3, c4, c5, c6\n Sort Method: multi-key quick sort Memory: 59163kB\n -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n (actual rows=499999 loops=1)\n Planning Time: 0.127 ms\n Execution Time: 367.263 ms\n(6 rows)\n\nI believe this is the case Heikki was asking about. I see the response\nwas that it's OK and the overhead is very low, but without too much\ndetail so I don't know what case you measured.\n\nAnyway, I think it seems to be very sensitive to the exact data set.\nWhich is not entirely surprising, I guess - most optimizations have a\nmix of improved/regressed cases, yielding a heatmap with a mix of green\nand red areas, and we have to either optimize the code (or heuristics to\nenable the feature), or convince ourselves the \"red\" cases are less\nimportant / unlikely etc.\n\nBut here the results are almost universally \"red\", so it's going to be\nvery hard to convince ourselves this is a good trade off. Of course, you\nmay argue the cases I've tested are wrong and not representative. I\ndon't think that's the case, though.\n\nIt's also interesting (and perhaps a little bit bizarre) that almost all\nthe cases that got better are for a single-column sort. Which is exactly\nthe case the patch should not affect. But it seems pretty consistent, so\nmaybe this is something worth investigating.\n\nFWIW I'm not familiar with the various quicksort variants, but I noticed\nthat the Bentley & Sedgewick paper mentioned as the basis for the patch\nis from 1997, and apparently implements stuff originally proposed by\nHoare in 1961. 
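As an aside, for readers not familiar with the variant: the core of the
Bentley & Sedgewick scheme is a three-way partition on the current key,
with only the "equal" bucket moving on to the next key. A standalone toy
sketch on rows of plain ints follows -- purely an illustration of the idea,
not the patch's code, which works on SortTuple/Datum:

#include <stddef.h>

#define NKEYS 3

typedef struct { int key[NKEYS]; } Row;

static void
swap_rows(Row *a, Row *b)
{
    Row     tmp = *a;

    *a = *b;
    *b = tmp;
}

static void
mk_qsort(Row *a, size_t n, int depth)
{
    size_t  lt = 0, gt = n, i = 0;
    int     pivot;

    if (n < 2 || depth >= NKEYS)
        return;

    pivot = a[n / 2].key[depth];    /* middle element as pivot */

    /* partition on key 'depth' only: [0,lt) < pivot, [lt,gt) == pivot,
     * [gt,n) > pivot */
    while (i < gt)
    {
        if (a[i].key[depth] < pivot)
            swap_rows(&a[i++], &a[lt++]);
        else if (a[i].key[depth] > pivot)
            swap_rows(&a[i], &a[--gt]);
        else
            i++;
    }

    mk_qsort(a, lt, depth);                 /* "<" bucket: same key again */
    mk_qsort(a + lt, gt - lt, depth + 1);   /* "=" bucket: next key */
    mk_qsort(a + gt, n - gt, depth);        /* ">" bucket: same key again */
}

The point is that the "=" recursion never re-compares keys it has already
resolved, which is exactly where duplicate-heavy inputs win and where all
distinct leading keys degenerate into an ordinary quicksort.
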
So maybe this is just an example of an algorithm that was\ngood for a hardware at that time, but the changes (e.g. the growing\nimportant of on-CPU caches) made it less relevant?\n\nAnother thing I noticed while skimming [1] is this:\n\n The algorithm is designed to exploit the property that in many\n problems, strings tend to have shared prefixes.\n\nIf that's the case, isn't it wrong to apply this to all sorts, including\nsorts with non-string keys? It might explain why your example works OK,\nas it involves key c6 which is string with all values sharing the same\n(fairly long) prefix. But then maybe we should be careful and restrict\nthis to only such those cases?\n\nregards\n\n[1] https://en.wikipedia.org/wiki/Multi-key_quicksort\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 9 Jun 2024 23:09:13 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "hi Tomas,\n\nSo many thanks for your kind response and detailed report. I am working\non locating issues based on your report/script and optimizing code, and\nwill update later.\n\nCould you please also send me the script to generate report pdf\nfrom the test results (explain*.log)? I can try to make one by myself,\nbut I'd like to get a report exactly the same as yours. It's really\nhelpful.\n\nThanks in advance.\n\n\nYao Wang\n\nOn Mon, Jun 10, 2024 at 5:09 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hello Yao,\n>\n> I was interested in the patch, considering the promise of significant\n> speedups of sorting, so I took a quick look and did some basic perf\n> testing today. Unfortunately, my benchmarks don't really confirm any\n> peformance benefits, so I haven't looked at the code very much and only\n> have some very basic feedback:\n>\n> 1) The new GUC is missing from the .sample config, triggering a failure\n> of \"make check-world\". Fixed by 0002.\n>\n> 2) There's a place mixing tabs/spaces in indentation. Fixed by 0003.\n>\n> 3) I tried running pgindent, mostly to see how that would affect the\n> comments, and for most it's probably fine, but a couple are mangled\n> (usually those with a numbered list of items). Might needs some changes\n> to use formatting that's not reformatted like this. The changes from\n> pgindent are in 0004, but this is not a fix - it just shows the changes\n> after running pgindent.\n>\n> Now, regarding the performance tests - I decided to do the usual black\n> box testing, i.e. generate tables with varying numbers of columns, data\n> types, different data distribution (random, correlated, ...) 
and so on.\n> And then run simple ORDER BY queries on that, measuring timing with and\n> without mk-sort, and checking the effect.\n>\n> So I wrote a simple bash script (attached) that does exactly that - it\n> generates a table with 1k - 10M rows, fills with with data (with some\n> basic simple data distributions), and then runs the queries.\n>\n> The raw results are too large to attach, I'm only attaching a PDF\n> showing the summary with a \"speedup heatmap\" - it's a pivot with the\n> parameters on the left, and then the GUC and number on columns on top.\n> So the first group of columns is with enable_mk_sort=off, the second\n> group with enable_mk_sort=on, and finally the heatmap with relative\n> timing (enable_mk_sort=on / enable_mk_sort=off).\n>\n> So values <100% mean it got faster (green color - good), and values\n> >100% mean it got slower (red - bad). And the thing is - pretty much\n> everything is red, often in the 200%-300% range, meaning it got 2x-3x\n> slower. There's only very few combinations where it got faster. That\n> does not seem very promising ... but maybe I did something wrong?\n>\n> After seeing this, I took a look at your example again, which showed\n> some nice speedups. But it seems very dependent on the order of keys in\n> the ORDER BY clause. For example consider this:\n>\n> set enable_mk_sort = on;\n> explain (analyze, timing off)\n> select * from t1 order by c6, c5, c4, c3, c2, c1;\n>\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Sort (cost=72328.81..73578.81 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Sort Key: c6, c5, c4, c3, c2, c1\n> Sort Method: quicksort Memory: 59163kB\n> -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Planning Time: 0.054 ms\n> Execution Time: 1095.183 ms\n> (6 rows)\n>\n> set enable_mk_sort = on;\n> explain (analyze, timing off)\n> select * from t1 order by c6, c5, c4, c3, c2, c1;\n>\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Sort (cost=72328.81..73578.81 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Sort Key: c6, c5, c4, c3, c2, c1\n> Sort Method: multi-key quick sort Memory: 59163kB\n> -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Planning Time: 0.130 ms\n> Execution Time: 633.635 ms\n> (6 rows)\n>\n> Which seems great, but let's reverse the sort keys:\n>\n> set enable_mk_sort = off;\n> explain (analyze, timing off)\n> select * from t1 order by c1, c2, c3, c4, c5, c6;\n>\n> QUERY PLAN\n> -------------------------------------------------------------------\n>\n> Sort (cost=72328.81..73578.81 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Sort Key: c1, c2, c3, c4, c5, c6\n> Sort Method: quicksort Memory: 59163kB\n> -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Planning Time: 0.146 ms\n> Execution Time: 170.085 ms\n> (6 rows)\n>\n> set enable_mk_sort = off;\n> explain (analyze, timing off)\n> select * from t1 order by c1, c2, c3, c4, c5, c6;\n>\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Sort (cost=72328.81..73578.81 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Sort Key: c1, c2, c3, c4, c5, c6\n> Sort Method: multi-key quick sort Memory: 59163kB\n> -> Seq Scan on t1 (cost=0.00..24999.99 rows=499999 width=76)\n> (actual rows=499999 loops=1)\n> Planning Time: 0.127 ms\n> Execution Time: 367.263 ms\n> (6 rows)\n>\n> I 
believe this is the case Heikki was asking about. I see the response\n> was that it's OK and the overhead is very low, but without too much\n> detail so I don't know what case you measured.\n>\n> Anyway, I think it seems to be very sensitive to the exact data set.\n> Which is not entirely surprising, I guess - most optimizations have a\n> mix of improved/regressed cases, yielding a heatmap with a mix of green\n> and red areas, and we have to either optimize the code (or heuristics to\n> enable the feature), or convince ourselves the \"red\" cases are less\n> important / unlikely etc.\n>\n> But here the results are almost universally \"red\", so it's going to be\n> very hard to convince ourselves this is a good trade off. Of course, you\n> may argue the cases I've tested are wrong and not representative. I\n> don't think that's the case, though.\n>\n> It's also interesting (and perhaps a little bit bizarre) that almost all\n> the cases that got better are for a single-column sort. Which is exactly\n> the case the patch should not affect. But it seems pretty consistent, so\n> maybe this is something worth investigating.\n>\n> FWIW I'm not familiar with the various quicksort variants, but I noticed\n> that the Bentley & Sedgewick paper mentioned as the basis for the patch\n> is from 1997, and apparently implements stuff originally proposed by\n> Hoare in 1961. So maybe this is just an example of an algorithm that was\n> good for a hardware at that time, but the changes (e.g. the growing\n> important of on-CPU caches) made it less relevant?\n>\n> Another thing I noticed while skimming [1] is this:\n>\n> The algorithm is designed to exploit the property that in many\n> problems, strings tend to have shared prefixes.\n>\n> If that's the case, isn't it wrong to apply this to all sorts, including\n> sorts with non-string keys? It might explain why your example works OK,\n> as it involves key c6 which is string with all values sharing the same\n> (fairly long) prefix. But then maybe we should be careful and restrict\n> this to only such those cases?\n>\n> regards\n>\n> [1] https://en.wikipedia.org/wiki/Multi-key_quicksort\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.\n\n\n", "msg_date": "Fri, 14 Jun 2024 19:20:11 +0800", "msg_from": "Yao Wang <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "\n\nOn 6/14/24 13:20, Yao Wang wrote:\n> hi Tomas,\n> \n> So many thanks for your kind response and detailed report. 
I am working\n> on locating issues based on your report/script and optimizing code, and\n> will update later.\n> \n> Could you please also send me the script to generate report pdf\n> from the test results (explain*.log)? I can try to make one by myself,\n> but I'd like to get a report exactly the same as yours. It's really\n> helpful.\n> \n\nI don't have a script for that. I simply load the results into a\nspreadsheet, do a pivot table to \"aggregate and reshuffle\" it a bit, and\nthen add a heatmap. I use google sheets for this, but any other\nspreadsheet should handle this too, I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 14 Jun 2024 14:27:50 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "On Fri, Jun 14, 2024 at 6:20 PM Yao Wang <[email protected]> wrote:\n>\n> hi Tomas,\n>\n> So many thanks for your kind response and detailed report. I am working\n> on locating issues based on your report/script and optimizing code, and\n> will update later.\n\nHi,\nThis is an interesting proof-of-concept!\n\nGiven the above, I've set this CF entry to \"waiting on author\".\n\nAlso, I see you've added Heikki as a reviewer. I'm not sure how others\nthink, but I consider a \"reviewer\" in the CF app to be someone who has\nvolunteered to be responsible to help move this patch forward. If\nthere is a name in the reviewer column, it may discourage others from\ndoing review. It also can happened that people ping reviewers to ask\n\"There's been no review for X months -- are you planning on looking at\nthis?\", and it's not great if that message is a surprise.\n\nNote that we prefer not to top-post in emails since it makes our web\narchive more difficult to read.\n\nThanks,\nJohn\n\n\n", "msg_date": "Thu, 20 Jun 2024 17:00:03 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "Hi John,\n\nThanks for your kind message. I talked to Heikki before getting Tomas's\nresponse, and he said \"no promise but I will take a look\". That's why I\nadded his email. I have updated the CF entry and added Tomas as reviewer.\n\nHi Tomas,\n\nAgain, I'd say a big thank to you. The report and script are really, really\nhelpful. And your ideas are very valuable.\n\nFirstly, the expectation of mksort performance:\n\n1. When mksort works well, it should be faster than qsort because it saves\nthe cost of comparing duplicated values every time.\n2. When all values are distinct at a particular column, the comparison\nwill finish immediately, and mksort will actually fall back to qsort. For\nthe case, mksort should be equal or a bit slower than qsort because it need\nto maintain more complex state.\n\nGenerally, the benefit of mksort is mainly from duplicated values and sort\nkeys: the more duplicated values and sort keys are, the bigger benefit it\ngets.\n\nAnalysis on the report in your previous mail\n--------------------------------------------\n\n1. It seems the script uses $count to specify the duplicated values:\n\nnumber of repetitions for each value (ndistinct = nrows/count)\n\nHowever, it is not always correct. 
For type text, the script generates\nvalues like this:\n\nexpr=\"md5(((i / $count) + random())::text)\"\n\nBut md5() generates totally random values regardless of $count. Some cases\nof timestamptz have the same problem.\n\nFor all distinct values, the sort will finish at first depth and fall to\nqsort actually.\n\n2. Even for the types with correct duplicated setting, the duplicated ratio\nis very small: e.g. say $nrows = 10000 and $count = 100, only 1% duplicated\nrows can go to depth 2, and only 0.01% of them can go to depth 3. So it still\nworks on nearly all distinct values.\n\n3. Qsort of PG17 uses kind of specialization for tuple comparator, i.e. it\nuses specialized functions for different types, e.g. qsort_tuple_unsigned()\nfor unsigned int. The specialized comparators avoid all type related checks\nand are much faster than regular comparator. That is why we saw 200% or more\nregression for the cases.\n\n\nCode optimizations I did for mk qsort\n-------------------------------------\n\n1. Adapted specialization for tuple comparator.\n2. Use kind of \"hybrid\" sort: when we actually adapt bubble sort due to\nlimited sort items, use bubble sort to check datums since specified depth.\n3. Other other optimizations such as pre-ordered check.\n\n\nAnalysis on the new report\n--------------------------\n\nI also did some modifications to your script about the issues of data types,\nplus an output about distinct value count/distinct ratio, and an indicator\nfor improvement/regression. I attached the new script and a report on a\ndata set with 100,000 rows and 2, 5, 8 columns.\n\n1. Generally, the result match the expectation: \"When mksort works well, it\nshould be faster than qsort; when mksort falls to qsort, it should be equal\nor a bit slower than qsort.\"\n2. For all values of \"sequential\" (except text type), mksort is a bit slower\nthan qsort because no actual sort is performed due to the \"pre-ordered\"\ncheck.\n3. For int and bigint type, mksort became faster and faster when\nthere were more and more duplicated values and sort keys. Improvement of\nthe best cases is about 58% (line 333) and 57% (line 711).\n4. For timestamptz type, mksort is a bit slower than qsort because the\ndistinct ratio is always 1 for almost all cases. I think more benefit is\navailable by increasing the duplicated values.\n5. For text type, mksort is faster than qsort for all cases, and\nimprovement of the best case is about 160% (line 1510). It is the only\ntested type in which specialization comparators are disabled.\n\nObviously, text has much better improvement than others. I suppose the cause\nis about the specialisation comparators: for the types with them, the\ncomparing is too faster so the cost saved by mksort is not significant. Only\nwhen saved cost became big enough, mksort can defeat qsort.\n\nFor other types without specialisation comparators, mksort can defeat\nqsort completely. It is the \"real\" performance of mksort.\n\n\nAnswers for some other questions you mentioned\n----------------------------------------------\n\nQ1: Why are almost all the cases that got better for a single-column sort?\n\nA: mksort is enabled only for multi column sort. When there is only one\ncolumn, qsort works. So we can simply ignore the cases.\n\nQ2: Why did the perf become worse by just reversing the sort keys?\n\nA: In the example we used, the sort keys are ordered from more duplicated\nto less. 
Please see the SQL:\n\nupdate t1 set c2 = c1 % 100, c3 = c1 % 50, c4 = c1 % 10, c5 = c1 % 3;\nupdate t1 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n|| (c1 % 5)::text;\n\nSo c6 has most duplicated values, c5 has less, and so on. By the order\n\"C6, c5, c4, ...\", mksort can take effect on every sort key.\n\nBy the reverse order \"c2, c3, c4...\", mksort almost finished on first\nsort key (c2) because it has only 1% duplicated values, and fell back\nto qsort actually.\n\nBased on the new code, I reran the example, and got about 141% improvement\nfor order \"c6, c5, c4...\", and about -4% regression for order\n\"c2, c3, c4...\".\n\nQ3: Does mksort work effectively only for particular type, e.g. string?\n\nA: No, the implementation of mksort does not distinguish data type for\nspecial handling. It just calls existing comparators which are also\nused by qsort. I used long prefix for string just to enlarge the time\ncost of comparing to amplify the result. The new report shows mksort\ncan work effectively on non-string types and string without long prefix.\n\nQ4: Was the algorithm good for a hardware at that time, but the changes\n(e.g. the growing important of on-CPU caches) made it less relevant?\n\nA: As my understanding, the answer is no because the benefit of mksort\nis from saving cost for duplicated comparison, which is not related to\nhardware. I suppose the new report can prove it.\n\nHowever, the hardware varying definitely affects the perf, especially\nconsidering that the perf different between mksort and qsort is not so\nbig when mksort falls back to qsort. I am not able to test on a wide\nrange of hardwares, so any finding is appreciated.\n\n\nPotential improvement spaces\n----------------------------\n\nI tried some other optimizations but didn't add the code finally because\nthe benefit is not very sure and/or the implementation is complex. Just\nraise them for more discussion if necessary:\n\n1. Use distinct stats info of table to enable mksort\n\nIt's kind of heuristics: in optimizer, check Form_pg_statistic->stadistinct\nof a table via pg_statistics. Enable mksort only when it is less than a\nthreshold.\n\nThe hacked code works, which need to modify a couple of interfaces of\noptimizer. In addition, a complete solution should consider types and\ndistinct values of all columns, which might be too complex, and the benefit\nseems not so big.\n\n2. Cache of datum positions\n\ne.g. for heap tuple, we need to extract datum position from SortTuple by\nextract_heaptuple_from_sorttuple() for comparing, which is executed\nfor each datum. By comparison, qsort does it once for each tuple.\nTheoretically we can create a cache to remember the datum positions to\navoid duplicated extracting.\n\nThe hacked code works, but the improvement seems limited. Not sure if more\nimprovement space is available.\n\n3. Template mechanism\n\nQsort uses kind of template mechanism by macro (see sort_template.h), which\navoids cost of runtime type check. 
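For reference, tuplesort.c stamps out those specializations roughly like
this -- quoted from memory and simplified, so check the real file for the
exact macro set:

#define ST_SORT qsort_tuple_unsigned
#define ST_ELEMENT_TYPE SortTuple
#define ST_COMPARE(a, b, state) qsort_tuple_unsigned_compare(a, b, state)
#define ST_COMPARE_ARG_TYPE Tuplesortstate
#define ST_CHECK_FOR_INTERRUPTS
#define ST_SCOPE static
#define ST_DEFINE
#include "lib/sort_template.h"

Each such block expands into a complete qsort_tuple_unsigned() with the
comparator call inlined at compile time, which is why the specialized
paths pay no per-comparison dispatch cost.
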
Theoretically template mechanism can be\napplied to mksort, but I am hesitating because it will impose more complexity\nand the code will become difficult to maintain.\n\nPlease let me know your opinion, thanks!\n\nYao Wang\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.", "msg_date": "Thu, 4 Jul 2024 20:45:33 +0800", "msg_from": "Yao Wang <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "\nOn 04/07/2024 3:45 pm, Yao Wang wrote:\n> Generally, the benefit of mksort is mainly from duplicated values and sort\n> keys: the more duplicated values and sort keys are, the bigger benefit it\n> gets.\n...\n> 1. Use distinct stats info of table to enable mksort\n>\n> It's kind of heuristics: in optimizer, check Form_pg_statistic->stadistinct\n> of a table via pg_statistics. Enable mksort only when it is less than a\n> threshold.\n>\n> The hacked code works, which need to modify a couple of interfaces of\n> optimizer. In addition, a complete solution should consider types and\n> distinct values of all columns, which might be too complex, and the benefit\n> seems not so big.\n\n\nIf mksort really provides advantage only when there are a lot of \nduplicates (for prefix keys?) and of small fraction of duplicates there \nis even some (small) regression\nthen IMHO taking in account in planner information about estimated \nnumber of distinct values seems to be really important. What was a \nproblem with accessing this statistics and why it requires modification \nof optimizer interfaces? There is `get_variable_numdistinct` function \nwhich is defined and used only in selfuncs.c\n\nInformation about values distribution seems to be quite useful for  \nchoosing optimal sort algorithm. Not only for multi-key sort \noptimization. For example if we know min.max value of sort key and it is \nsmall, we can use O(N) algorithm for sorting. Also it can help to \nestimate when TOP-N search is preferable.\n\nRight now Posgres creates special path for incremental sort. I am not \nsure if we also need to be separate path for mk-sort.\nBut IMHO if we need to change some optimizer interfaces to be able to \ntake in account statistic and choose preferred sort algorithm at \nplanning time, then it should be done.\nIf mksort can increase sort more than two times (for large number of \nduplicates), it will be nice to take it in account when choosing optimal \nplan.\n\nAlso in this case we do not need extra GUC for explicit enabling of \nmksort. There are too many parameters for optimizer and adding one more \nwill make tuning more complex. 
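Just to make that concrete, here is a rough sketch of what such a
planner-side check could look like -- the helper name, the 0.2 threshold
and the way the leading pathkey is picked apart are all made up for
illustration, and it ignores keys that have no statistics at all:

static bool
mksort_is_promising(PlannerInfo *root, PathKey *leading_key, double tuples)
{
    EquivalenceMember *em = (EquivalenceMember *)
        linitial(leading_key->pk_eclass->ec_members);
    VariableStatData vardata;
    bool        isdefault;
    double      ndistinct;

    /* look up the ndistinct estimate of the leading sort key */
    examine_variable(root, (Node *) em->em_expr, 0, &vardata);
    ndistinct = get_variable_numdistinct(&vardata, &isdefault);
    ReleaseVariableStats(vardata);

    /* attractive only when plenty of duplicates are expected up front */
    return !isdefault && ndistinct < tuples * 0.2;
}

Whether the estimate is reliable enough to drive such a decision is of
course the open question.
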
So I prefer that decision is take buy \noptimizer itself based on the available information, especially if \ncriteria seems to be obvious.\n\n\nBest regards,\nKonstantin\n\n\n\n", "msg_date": "Sun, 7 Jul 2024 09:32:34 +0300", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "Hello,\n\nThanks for posting a new version of the patch, and for reporting a bunch\nof issues in the bash scripts I used for testing. I decided to repeat\nthose fixed tests on both the old and new version of the patches, and I\nfinally have the results from three machines (the i5/xeon I usually use,\nand also rpi5 for fun).\n\nThe complete scripts, raw results (CSV), and various reports (ODS and\nPDF) are available in my github:\n\n https://github.com/tvondra/mksort-tests\n\nI'm not going to attach all of it to this message, because the raw CSV\nresults alone are ~3MB for each of the three machines.\n\nYou can do your own analysis on the raw CSV results, of course - see the\n'csv' directory, there are data for the clean branch and the two patch\nversions.\n\nBut I've also prepared PDF reports comparing how the patches work on\neach of the machines - see the 'pdf' directory. There are two types of\nreports, depending on what's compared to what.\n\nThe general report structure is the same - columns with results for\ndifferent combinations of parameters, followed by comparison of the\nresults and a heatmap (red - bad/regression, green - good/speedup).\n\nThe \"patch comparison\" reports compare v5/v4, so it's essentially\n\n (timing with v5) / (timing with v4)\n\nwith the mksort enabled or disabled. And the charts are pretty green,\nwhich means v5 is much faster than v4 - so seems like a step in the\nright direction.\n\nThe \"patch impact\" reports compare v4/master and v5/master, i.e. this is\nwhat the users would see after an upgrade. Attached is an small example\nfrom the i5 machine, but the other machines behave in almost exactly the\nsame way (including the tiny rpi5).\n\nFor v4, the results were not great - almost everything regressed (red\ncolor), except for the \"text\" data type (green).\n\nYou can immediately see v5 does much better - it still regresses, but\nthe regressions are way smaller. And the speedup for \"text\" it actually\na bit more significant (there's more/darker green).\n\nSo as I said before, I think v5 is definitely moving in the right\ndirection, but the regressions still seem far too significant. If you're\nsorting a lot of text data, then sure - this will help a lot. But if\nyou're sorting int data, and it happens to be random/correlated, you're\ngoing to pay 10-20% more. That's not great.\n\nI haven't analyzed the code very closely, and I don't have a great idea\non how to fix this. But I think to make this patch committable, this\nneeds to be solved.\n\nConsidering the benefits seems to be pretty specific to \"text\" (and\nperhaps some other data types), maybe the best solution would be to only\nenable this for those cases. 
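If we went down that road, one crude way to express the gate in terms of
the comparator specialization discussed earlier -- function pointer names
recalled from memory, so they need double-checking -- might be:

static bool
leading_key_has_fast_comparator(Tuplesortstate *state)
{
    SortSupport ssup = &state->base.sortKeys[0];

    if (ssup->comparator == ssup_datum_unsigned_cmp ||
        ssup->comparator == ssup_datum_int32_cmp)
        return true;
#if SIZEOF_DATUM >= 8
    if (ssup->comparator == ssup_datum_signed_cmp)
        return true;
#endif
    return false;
}

with mk-sort considered only when this returns false.
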
Yes, there are some cases where this helps\nfor the other data types too, but that also comes with the regressions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 8 Jul 2024 16:40:30 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "\n\nOn 7/4/24 14:45, Yao Wang wrote:\n> Hi John,\n> \n> Thanks for your kind message. I talked to Heikki before getting Tomas's\n> response, and he said \"no promise but I will take a look\". That's why I\n> added his email. I have updated the CF entry and added Tomas as reviewer.\n> \n> Hi Tomas,\n> \n> Again, I'd say a big thank to you. The report and script are really, really\n> helpful. And your ideas are very valuable.\n> \n> Firstly, the expectation of mksort performance:\n> \n> 1. When mksort works well, it should be faster than qsort because it saves\n> the cost of comparing duplicated values every time.\n> 2. When all values are distinct at a particular column, the comparison\n> will finish immediately, and mksort will actually fall back to qsort. For\n> the case, mksort should be equal or a bit slower than qsort because it need\n> to maintain more complex state.\n> \n> Generally, the benefit of mksort is mainly from duplicated values and sort\n> keys: the more duplicated values and sort keys are, the bigger benefit it\n> gets.\n> \n> Analysis on the report in your previous mail\n> --------------------------------------------\n> \n> 1. It seems the script uses $count to specify the duplicated values:\n> \n> number of repetitions for each value (ndistinct = nrows/count)\n> \n> However, it is not always correct. For type text, the script generates\n> values like this:\n> \n> expr=\"md5(((i / $count) + random())::text)\"\n> \n> But md5() generates totally random values regardless of $count. Some cases\n> of timestamptz have the same problem.\n> \n> For all distinct values, the sort will finish at first depth and fall to\n> qsort actually.\n> \n\nYou're right, thanks for noticing / fixing this.\n\n> 2. Even for the types with correct duplicated setting, the duplicated ratio\n> is very small: e.g. say $nrows = 10000 and $count = 100, only 1% duplicated\n> rows can go to depth 2, and only 0.01% of them can go to depth 3. So it still\n> works on nearly all distinct values.\n> \n\nTrue, but that's why the scripts test with much larger data sets too,\nwith more comparisons needing to look at other columns. It's be possible\nto construct data sets that are likely to benefit more from mksort - I'm\nnot against doing that, but then there's the question of what data sets\nare more representative of what users actually do.\n\nI'd say a random data set like the ones I used are fairly common - it's\nfine to not improve them, but we should not regress them.\n\n> 3. Qsort of PG17 uses kind of specialization for tuple comparator, i.e. it\n> uses specialized functions for different types, e.g. qsort_tuple_unsigned()\n> for unsigned int. The specialized comparators avoid all type related checks\n> and are much faster than regular comparator. That is why we saw 200% or more\n> regression for the cases.\n> \n\nOK, I'm not familiar with this code enough to have an opinion.\n\n> \n> Code optimizations I did for mk qsort\n> -------------------------------------\n> \n> 1. Adapted specialization for tuple comparator.\n> 2. 
Use kind of \"hybrid\" sort: when we actually adapt bubble sort due to\n> limited sort items, use bubble sort to check datums since specified depth.\n> 3. Other other optimizations such as pre-ordered check.\n> \n> \n> Analysis on the new report\n> --------------------------\n> \n> I also did some modifications to your script about the issues of data types,\n> plus an output about distinct value count/distinct ratio, and an indicator\n> for improvement/regression. I attached the new script and a report on a\n> data set with 100,000 rows and 2, 5, 8 columns.\n> \n\nOK, but I think a report for a single data set size is not sufficient to\nevaluate a patch like this, it can easily miss various caching effects\netc. The results I shared a couple minutes ago are from 1000 to 10M\nrows, and it's much more complete view.\n\n> 1. Generally, the result match the expectation: \"When mksort works well, it\n> should be faster than qsort; when mksort falls to qsort, it should be equal\n> or a bit slower than qsort.\"\n\nThe challenge is how to know in advance if mksort is likely to work well.\n\n> 2. For all values of \"sequential\" (except text type), mksort is a bit slower\n> than qsort because no actual sort is performed due to the \"pre-ordered\"\n> check.\n\nOK\n\n> 3. For int and bigint type, mksort became faster and faster when\n> there were more and more duplicated values and sort keys. Improvement of\n> the best cases is about 58% (line 333) and 57% (line 711).\n\nI find it hard to interpret the text-only report, but I suppose these\nare essentially the \"green\" patches in the PDF report I attached to my\nearlier message. And indeed, there are nice improvements, but only with\ncases with very many duplicates, and the price for that is 10-20%\nregressions in the other cases. That does not seem like a great trade\noff to me.\n\n> 4. For timestamptz type, mksort is a bit slower than qsort because the\n> distinct ratio is always 1 for almost all cases. I think more benefit is\n> available by increasing the duplicated values.\n\nYeah, this was a bug in my script, generating too many distinct values.\nAfter fixing that, it behaves pretty much exactly like int/bigint, which\nis not really surprising.\n\n> 5. For text type, mksort is faster than qsort for all cases, and\n> improvement of the best case is about 160% (line 1510). It is the only\n> tested type in which specialization comparators are disabled.\n> \n\nCorrect.\n\n> Obviously, text has much better improvement than others. I suppose the cause\n> is about the specialisation comparators: for the types with them, the\n> comparing is too faster so the cost saved by mksort is not significant. Only\n> when saved cost became big enough, mksort can defeat qsort.\n> \n> For other types without specialisation comparators, mksort can defeat\n> qsort completely. It is the \"real\" performance of mksort.\n> \n\nNo opinion, but if this is the case, then maybe the best solution is to\nonly use mksort for types without specialized comparators.\n\n> \n> Answers for some other questions you mentioned\n> ----------------------------------------------\n> \n> Q1: Why are almost all the cases that got better for a single-column sort?\n> \n> A: mksort is enabled only for multi column sort. When there is only one\n> column, qsort works. So we can simply ignore the cases.\n> \n> Q2: Why did the perf become worse by just reversing the sort keys?\n> \n> A: In the example we used, the sort keys are ordered from more duplicated\n> to less. 
Please see the SQL:\n> \n> update t1 set c2 = c1 % 100, c3 = c1 % 50, c4 = c1 % 10, c5 = c1 % 3;\n> update t1 set c6 = 'aaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbb'\n> || (c1 % 5)::text;\n> \n> So c6 has most duplicated values, c5 has less, and so on. By the order\n> \"C6, c5, c4, ...\", mksort can take effect on every sort key.\n> \n> By the reverse order \"c2, c3, c4...\", mksort almost finished on first\n> sort key (c2) because it has only 1% duplicated values, and fell back\n> to qsort actually.\n> \n> Based on the new code, I reran the example, and got about 141% improvement\n> for order \"c6, c5, c4...\", and about -4% regression for order\n> \"c2, c3, c4...\".\n> \n\nOK\n\n> Q3: Does mksort work effectively only for particular type, e.g. string?\n> \n> A: No, the implementation of mksort does not distinguish data type for\n> special handling. It just calls existing comparators which are also\n> used by qsort. I used long prefix for string just to enlarge the time\n> cost of comparing to amplify the result. The new report shows mksort\n> can work effectively on non-string types and string without long prefix.\n> \n\nMaybe I misunderstood, but I think it seems to help much less for types\nwith specialized comparators, so maybe I'd rephrase this if it only\nworks effectively for types without them.\n\n> Q4: Was the algorithm good for a hardware at that time, but the changes\n> (e.g. the growing important of on-CPU caches) made it less relevant?\n> \n> A: As my understanding, the answer is no because the benefit of mksort\n> is from saving cost for duplicated comparison, which is not related to\n> hardware. I suppose the new report can prove it.\n> \n> However, the hardware varying definitely affects the perf, especially\n> considering that the perf different between mksort and qsort is not so\n> big when mksort falls back to qsort. I am not able to test on a wide\n> range of hardwares, so any finding is appreciated.\n> \n\nOK. FWIW I think it's important to test with a range of data set sizes\nexactly to evaluate these hardware-related effects (some of which are\nrelated to size of various caches).\n\n> \n> Potential improvement spaces\n> ----------------------------\n> \n> I tried some other optimizations but didn't add the code finally because\n> the benefit is not very sure and/or the implementation is complex. Just\n> raise them for more discussion if necessary:\n> \n> 1. Use distinct stats info of table to enable mksort\n> \n> It's kind of heuristics: in optimizer, check Form_pg_statistic->stadistinct\n> of a table via pg_statistics. Enable mksort only when it is less than a\n> threshold.\n> \n> The hacked code works, which need to modify a couple of interfaces of\n> optimizer. In addition, a complete solution should consider types and\n> distinct values of all columns, which might be too complex, and the benefit\n> seems not so big.\n> \n\nI assume that's not in v5, or did I miss a part of the patch doing it?\n\n\nI any case, I suspect relying on stadistinct is going to be unreliable.\nIt's known to be pretty likely to be off, and especially if this is\nabout multiple columns, which can be correlated in some way.\n\nIt would be much better if we could make this decision at runtime, based\non some cheap heuristics. Not sure if that's possible, though.\n\n> 2. Cache of datum positions\n> \n> e.g. for heap tuple, we need to extract datum position from SortTuple by\n> extract_heaptuple_from_sorttuple() for comparing, which is executed\n> for each datum. 
By comparison, qsort does it once for each tuple.\n> Theoretically we can create a cache to remember the datum positions to\n> avoid duplicated extracting.\n> \n> The hacked code works, but the improvement seems limited. Not sure if more\n> improvement space is available.\n> \n\nNo idea. But it seems more like an independent optimization than a fix\nfor the cases where mksort v5 regresses, right?\n\n> 3. Template mechanism\n> \n> Qsort uses kind of template mechanism by macro (see sort_template.h), which\n> avoids cost of runtime type check. Theoretically template mechanism can be\n> applied to mksort, but I am hesitating because it will impose more complexity\n> and the code will become difficult to maintain.\n> \n\nNo idea, but same as above - I don't see how templating could address\nthe regressions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:14:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "\n\nOn 7/7/24 08:32, Konstantin Knizhnik wrote:\n> \n> On 04/07/2024 3:45 pm, Yao Wang wrote:\n>> Generally, the benefit of mksort is mainly from duplicated values and\n>> sort\n>> keys: the more duplicated values and sort keys are, the bigger benefit it\n>> gets.\n> ...\n>> 1. Use distinct stats info of table to enable mksort\n>>\n>> It's kind of heuristics: in optimizer, check\n>> Form_pg_statistic->stadistinct\n>> of a table via pg_statistics. Enable mksort only when it is less than a\n>> threshold.\n>>\n>> The hacked code works, which need to modify a couple of interfaces of\n>> optimizer. In addition, a complete solution should consider types and\n>> distinct values of all columns, which might be too complex, and the\n>> benefit\n>> seems not so big.\n> \n> \n> If mksort really provides advantage only when there are a lot of\n> duplicates (for prefix keys?) and of small fraction of duplicates there\n> is even some (small) regression\n> then IMHO taking in account in planner information about estimated\n> number of distinct values seems to be really important. What was a\n> problem with accessing this statistics and why it requires modification\n> of optimizer interfaces? There is `get_variable_numdistinct` function\n> which is defined and used only in selfuncs.c\n> \n\nYeah, I've been wondering about that too. But I'm also a bit unsure if\nusing this known-unreliable statistics (especially with filters and\nmultiple columns) would actually fix the regressions.\n\n> Information about values distribution seems to be quite useful for \n> choosing optimal sort algorithm. Not only for multi-key sort\n> optimization. For example if we know min.max value of sort key and it is\n> small, we can use O(N) algorithm for sorting. Also it can help to\n> estimate when TOP-N search is preferable.\n> \n\nThis assumes the information is accurate / reliable, and I'm far from\nsure about that.\n\n> Right now Posgres creates special path for incremental sort. 
I am not\n> sure if we also need to be separate path for mk-sort.\n> But IMHO if we need to change some optimizer interfaces to be able to\n> take in account statistic and choose preferred sort algorithm at\n> planning time, then it should be done.\n> If mksort can increase sort more than two times (for large number of\n> duplicates), it will be nice to take it in account when choosing optimal\n> plan.\n> \n\nI did commit the incremental sort patch, and TBH I'm not convinced I'd\ndo that again. It's a great optimization when it works (and it seems to\nwork in plenty of cases), but we've also had a number of reports about\nsignificant regressions, where the incremental sort costing is quite\noff. Granted, it's often about cases where we already had issues and\nincremental sort just \"exacerbates\" that (say, with LIMIT queries), but\nthat's kinda the point I'm trying to make - stats are inherently\nincomplete / simplified, and some plans are more sensitive to that.\n\nWhich is why I'm wondering if we might do the decision based on some\ninformation collected at runtime.\n\n> Also in this case we do not need extra GUC for explicit enabling of\n> mksort. There are too many parameters for optimizer and adding one more\n> will make tuning more complex. So I prefer that decision is take buy\n> optimizer itself based on the available information, especially if\n> criteria seems to be obvious.\n> \n\nThe GUC is very useful for testing, so let's keep it for now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 8 Jul 2024 17:25:43 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "BTW I forgot to report that I intended to test this on 32-bit ARM too,\nbecause that sometimes triggers \"funny\" behavior, but the build fails\nlike this:\n\nIn file included from tuplesort.c:630:\nmk_qsort_tuple.c: In function ‘mkqs_compare_datum_by_shortcut’:\nmk_qsort_tuple.c:167:23: warning: implicit declaration of function\n‘ApplySignedSortComparator’; did you mean ‘ApplyUnsignedSortComparator’?\n[-Wimplicit-function-declaration]\n 167 | ret = ApplySignedSortComparator(tuple1->datum1,\n | ^~~~~~~~~~~~~~~~~~~~~~~~~\n | ApplyUnsignedSortComparator\nmk_qsort_tuple.c: In function ‘mkqs_compare_tuple’:\nmk_qsort_tuple.c:376:23: warning: implicit declaration of function\n‘qsort_tuple_signed_compare’; did you mean\n‘qsort_tuple_unsigned_compare’? 
[-Wimplicit-function-declaration]\n 376 | ret = qsort_tuple_signed_compare(a, b, state);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~\n | qsort_tuple_unsigned_compare\n/usr/bin/ld: utils/sort/tuplesort.o: in function\n`mkqs_compare_datum_by_shortcut':\n/home/debian/postgres/src/backend/utils/sort/mk_qsort_tuple.c:167:\nundefined reference to `ApplySignedSortComparator'\n/usr/bin/ld:\n/home/debian/postgres/src/backend/utils/sort/mk_qsort_tuple.c:167:\nundefined reference to `ApplySignedSortComparator'\n/usr/bin/ld:\n/home/debian/postgres/src/backend/utils/sort/mk_qsort_tuple.c:167:\nundefined reference to `ApplySignedSortComparator'\n/usr/bin/ld: utils/sort/tuplesort.o: in function `mkqs_compare_tuple':\n/home/debian/postgres/src/backend/utils/sort/mk_qsort_tuple.c:376:\nundefined reference to `qsort_tuple_signed_compare'\n/usr/bin/ld: utils/sort/tuplesort.o: in function\n`mkqs_compare_datum_by_shortcut':\n/home/debian/postgres/src/backend/utils/sort/mk_qsort_tuple.c:167:\nundefined reference to `ApplySignedSortComparator'\ncollect2: error: ld returned 1 exit status\nmake[2]: *** [Makefile:67: postgres] Error 1\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\nI haven't investigated why it fails.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Jul 2024 13:34:26 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5?=\n =?UTF-8?Q?_sort?=" }, { "msg_contents": "On Sun, Jul 7, 2024 at 2:32 AM Konstantin Knizhnik <[email protected]> wrote:\n> If mksort really provides advantage only when there are a lot of\n> duplicates (for prefix keys?) and of small fraction of duplicates there\n> is even some (small) regression\n> then IMHO taking in account in planner information about estimated\n> number of distinct values seems to be really important.\n\nI don't think we can rely on the planner's n_distinct estimates for\nthis at all. That information tends to be massively unreliable when we\nhave it at all. If we rely on it for good performance, it will be easy\nto find cases where it's wrong and performance is bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Jul 2024 14:58:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "Thanks for all of your comments.\n\n\nProgress\n--------\n\nBecause there are too many details for discussion, let me summarize to\ntwo major issues:\n\n1. Since it seems the perf is ideal for data types without specialized\ncomparator, can we improve the perf for data types with specialized\ncomparator to a satisfying level?\n\nI refactored some code on mksort code path and eliminated kinda perf\nbottlenecks. With latest code, most of results shows mksort got better\nperf than qsort. For other cases, the regressions are usually less than\n5% except seldom exceptions. Since most of the exceptions are transient\n(happening occasionally in a series of \"normal\" results), I prefer they\nare due to external interruption because the test machine is not\ndedicated. Let's discuss on the latest code.\n\n2. 
Should we update optimizer to take in account statistic\n(pg_statistic->stadistinct) and choose mksort/qsort accordingly?\n\nThe trick here is not just about the reliability of stadistinct. The\nsort perf is affected by a couple of conditions: sort key count, distinct\nratio, data type, and data layout. e.g. with int type, 5 sort keys, and\ndistinct ratio is about 0.05, we can see about 1% perf improvement for\n\"random\" data set, about 7% for \"correlated\" data set, and almost the\nsame for \"sequential\" data set because of pre-ordered check. So it is\npretty difficult to create a formula to calculate the benefit.\n\nAnyway, I tried to make an implementation by adding \"mkqsApplicable\"\ndetermined by optimizer, so we can discuss based on code and perf result.\nI also updated the test script to add an extra column \"mk_enabled\"\nindicating whether mksort is enabled or not by optimizer.\n\nAccording to the result, the choosing mechanism in optimizer\nalmost eliminated all regressions. Please note that even when mksort is\ndisabled (i.e. qsort was performed twice actually), there are still\n\"regressions\" which are usually less than 5% and should be accounted\nto kinda error range.\n\nI am still hesitating about putting the code to final version because\nthere are a number of concerns:\n\na. for sort keys without a real table, stadistinct is unavailable.\nb. stadistinct may not be accurate as mentioned.\nc. the formula I used may not be able to cover all possible cases.\n\nOn the other hand, the worst result may be just 5% regression. So the\nside effects may not be so serious?\n\nI can refine the code (e.g. for now mkqsApplicable is valid only for\nsome particular code paths), but I prefer to do more after we have a\nclear decision about whether the code in optimizer is needed.\n\nPlease let me know your comments, thanks.\n\n\nAttachements\n------------\n\n- v6-Implement-multi-key-quick-sort.patch\n(v6) The latest code without optimizer change\n\n- v6-1-add-Sort-ndistInFirstRow.patch .\n(v6.1) The latest code with optimizer change, can be applied to v6\n\n- mksort-test-v2.sh\nThe script made by Tomas and modified by me to produce test result\n\n- new_report.txt\nTest result in a small data set (100,000 rows) based on v6\n\n- new_report_opti.txt\nTest result in a small data set (100,000 rows) based on v6.1\n\nI tried to produce a \"full\" report with all data ranges, but it seems\nkept working for more than 15 hours on my machine and was always\ndisturbed by other heavy loads. However, I did run tests on some\ndatasets with other sizes and got similar results.\n\nAnswers for other questions\n---------------------------\n\n1. Can we enable mksort for just particular data types?\n\nAs I mentioned, it is not easy to make the decision considering all the\nfactors impacting the result and all possible combinations. Code in\noptimizer I showed may be a grip.\n\n2. Does v5 include the code about \"distinct stats info\" and others?\n\nNo. As I mentioned, all code in \"Potential improvement spaces\" was not\nincluded in v5 (or not implemented at all). v6.1 includes some code in\noptimizer.\n\n3. Should we remove the GUC enable_mk_sort?\n\nI kept it at least for coding phase. And I prefer keeping it permanently\nin case some scenarios we are not aware of.\n\n4. Build failure on 32-bit ARM\n\nIt is a code fault by myself. ApplySignedSortComparator() is built only\nwhen SIZEOF_DATUM >= 8. I was aware of that, but missed encapsulating\nall relevant code in the condition. 
It is supposed to have been fixed on\nv6, but I don't have a 32-bit ARM platform. @Tomas please take a try if\nyou still have interest, thanks.\n\n5. How templating could address the regressions?\n\nPlease refer the implementation of qsort in sort_template.h, which adapted\nkinda template mechanism by using macros since C language does not have\nbuilt-in template. .e.g. for comparator, it uses a macro ST_COMPARE which\nis specialized for different functions (such as\nqsort_tuple_unsigned_compare()) for different data types. As a contrast,\nmksort needs to determine the comparator on runtime for each comparison\n(see mkqs_compare_tuple()), which needs more costs. Although the cost is\nnot much, comparison is very performance sensitive. (About 1~2% regression\nif my memory is correct)\n\n\nThanks,\n\nYao Wang\n\nOn Wed, Jul 10, 2024 at 2:58 AM Robert Haas <[email protected]> wrote:\n>\n> On Sun, Jul 7, 2024 at 2:32 AM Konstantin Knizhnik <[email protected]> wrote:\n> > If mksort really provides advantage only when there are a lot of\n> > duplicates (for prefix keys?) and of small fraction of duplicates there\n> > is even some (small) regression\n> > then IMHO taking in account in planner information about estimated\n> > number of distinct values seems to be really important.\n>\n> I don't think we can rely on the planner's n_distinct estimates for\n> this at all. That information tends to be massively unreliable when we\n> have it at all. If we rely on it for good performance, it will be easy\n> to find cases where it's wrong and performance is bad.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n-- \nThis electronic communication and the information and any files transmitted \nwith it, or attached to it, are confidential and are intended solely for \nthe use of the individual or entity to whom it is addressed and may contain \ninformation that is confidential, legally privileged, protected by privacy \nlaws, or otherwise restricted from disclosure to anyone else. If you are \nnot the intended recipient or the person responsible for delivering the \ne-mail to the intended recipient, you are hereby notified that any use, \ncopying, distributing, dissemination, forwarding, printing, or copying of \nthis e-mail is strictly prohibited. If you received this e-mail in error, \nplease return the e-mail to the sender, delete it from your computer, and \ndestroy any printed copy of it.", "msg_date": "Fri, 26 Jul 2024 19:18:35 +0800", "msg_from": "Yao Wang <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" }, { "msg_contents": "On Fri, Jul 26, 2024 at 6:18 PM Yao Wang <[email protected]> wrote:\n\n> 2. Should we update optimizer to take in account statistic\n> (pg_statistic->stadistinct) and choose mksort/qsort accordingly?\n\n> According to the result, the choosing mechanism in optimizer\n> almost eliminated all regressions. Please note that even when mksort is\n> disabled (i.e. qsort was performed twice actually), there are still\n> \"regressions\" which are usually less than 5% and should be accounted\n> to kinda error range.\n\nThis is kind of an understatement. To be clear, I see mostly ~1%,\nwhich I take to be in the noise level. If it were commonly 5%\nregression, that'd be cause for concern. If actual noise were 5%, the\ntesting methodology is not strict enough.\n\n> I am still hesitating about putting the code to final version because\n> there are a number of concerns:\n>\n> a. 
for sort keys without a real table, stadistinct is unavailable.\n> b. stadistinct may not be accurate as mentioned.\n> c. the formula I used may not be able to cover all possible cases.\n>\n> On the other hand, the worst result may be just 5% regression. So the\n> side effects may not be so serious?\n\nFWIW, I share these concerns as well. I don't believe this proves a\nbound on the worst result, since the test is using ideal well-behaved\ndata with no filter. The worst result is when the estimates are wrong,\nso real world use could easily be back to 10-20% regression, which is\nnot acceptable. I believe this is what Robert and Tomas were warning\nabout.\n\nIt'd be good to understand what causes the differences, whether better\nor worse. Some initial thoughts:\n\n- If the first key is unique, then I would hope multikey would be no\ndifferent then a standard sort.\n\n- If the first key commonly ties, and other following keys tie also,\nthose later comparisons are a waste. In that case, it's not hard to\nimagine that partitioning on only one key at a time might be fast.\n\n- If the first key commonly ties, but the second key is closer to\nunique, I'm not sure which way is better. Have we tested this case?\n\n- If we actually only have one sort key, a multi-key sort with a\nsingle depth should ideally have no significant performance difference\nthan standard sort. That seems like a good sanity check. Has this been\ntried?\n\n- For the biggest benefit/regression cases, it'd be good to know what\nchanged at the hardware level. # comparsisons? # swaps? # cache\nmisses? # branch mispredicts?\n\nLooking at the code a bit, I have some questions and see some\narchitectural issues. My thoughts below have a bunch of brainstorms so\nshould be taken with a large grain of salt:\n\n1. The new single-use abstraction for the btree tid tiebreak seems\nawkward. In standard sort, all that knowledge was confined to btree's\nfull comparetup function. Now it's spread out, and general code has to\nworry about \"duplicated\" tuples. The passed-down isNull seems only\nneeded for this? (Quick idea: It seems we could pass a start-depth and\nmax-depth to some \"comparetup_mk\", which would be very similar to\ncurrent \"comparetup + comparetup_tiebreak\". The btree function would\nhave to know that a depth greater than the last sortkey is a signal to\ndo the tid comparison. And if start-depth and max-depth are the same,\nthat means comparing at a single depth. That might simplify the code\nelsewhere because there is no need for a separate getDatum function.)\n\n2. I don't understand why the pre-ordered check sometimes tolerates\nduplicates and sometimes doesn't.\n\n3. \"tiebreak\" paths and terminology is already somewhat awkward for\nstandard sort (my fault), but seems really out of place in multikey\nsort. It already has a general concept of \"depth\", so that should be\nused in fullest generality.\n\n3A. Random thought: I wonder if the shortcut (and abbreviated?)\ncomparisons could be thought of as having their own depth < 0. If it's\nworth it to postpone later keys, maybe it's worth it to postpone the\nfull comparison for the first key as well? I could be wrong, though.\n\n3B. Side note: I've long wanted to try separating all NULL first keys\nto a separate array, so we can remove all those branches for NULL\nordering and reduce SortTuple to 16 bytes. That might be easier to\ncode if we could simply specify \"start_depth = 1\" at the top level for\nthat.\n\n4. 
Trying to stuff all our optimized comparators in the same path was\na heroic effort, but it's quite messy and seems pretty bad for the\ninstruction cache and branch predictor. I don't think we need yet\nanother template, but some of these branches should be taken as we\nrecurse into a partition to keep them out of the hot path.\n\n5.\n+ /*\n+ * When the count < 16 and no need to handle duplicated tuples, use\n+ * bubble sort.\n+ *\n+ * Use 16 instead of 7 which is used in standard qsort, because mk qsort\n+ * need more cost to maintain more complex state.\n\nNote: 7 isn't ideal for standard sort either, and should probably be\nat least 10 (at least for single-key sorts). If one implementation's\nparameter is more ideal than the other, it obscures what the true\ntrade-offs are. 16 happens to be a power of two -- how many different\nvalues did you test? (And isn't this the same as our insertion sort,\nnot bubble sort?).\n\n\n", "msg_date": "Sun, 11 Aug 2024 09:05:41 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IOWbnuWkjTogQW4gaW1wbGVtZW50YXRpb24gb2YgbXVsdGkta2V5IHNvcnQ=?=" } ]
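The thread above describes a "multi-key" quicksort that orders tuples on one sort key at a time and only recurses into runs of equal values with the next key, which is where the speedup on duplicate-heavy prefixes comes from. The standalone C program below is only a rough sketch of that idea for two integer keys; the struct, the function names, and the plain qsort() partitioning step are made up for illustration and are not taken from the proposed mk_qsort_tuple.c, which additionally deals with specialized comparators, a small-array cutoff, and B-tree tid tiebreaks.

/*
 * Minimal sketch of multi-key sorting: sort on key 0, then recurse into
 * each run of equal key-0 values using key 1.  Illustration only.
 */
#include <stdio.h>
#include <stdlib.h>

#define NKEYS 2

typedef struct DemoTuple
{
	int		keys[NKEYS];
} DemoTuple;

static int	cmp_depth;		/* key index used by cmp_one_key() */

static int
cmp_one_key(const void *a, const void *b)
{
	int		ka = ((const DemoTuple *) a)->keys[cmp_depth];
	int		kb = ((const DemoTuple *) b)->keys[cmp_depth];

	return (ka > kb) - (ka < kb);
}

/* Sort tuples[0..n-1] on keys[depth], then recurse into equal-key runs. */
static void
mk_sort(DemoTuple *tuples, size_t n, int depth)
{
	size_t	start = 0;
	size_t	i;

	if (n < 2 || depth >= NKEYS)
		return;

	cmp_depth = depth;
	qsort(tuples, n, sizeof(DemoTuple), cmp_one_key);

	/* Later keys are only compared inside runs of duplicates. */
	for (i = 1; i <= n; i++)
	{
		if (i == n || tuples[i].keys[depth] != tuples[start].keys[depth])
		{
			if (i - start > 1)
				mk_sort(tuples + start, i - start, depth + 1);
			start = i;
		}
	}
}

int
main(void)
{
	DemoTuple	t[] = {{{2, 3}}, {{1, 2}}, {{2, 1}}, {{1, 1}}, {{2, 2}}};
	size_t		n = sizeof(t) / sizeof(t[0]);
	size_t		i;

	mk_sort(t, n, 0);
	for (i = 0; i < n; i++)
		printf("(%d, %d)\n", t[i].keys[0], t[i].keys[1]);
	return 0;
}

If every key-0 value is unique, the recursion never runs and the cost is a single one-key sort; if key 0 is all duplicates, the work shifts entirely to the second key, which matches the duplicate-ratio sensitivity seen in the benchmark discussion above.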
[ { "msg_contents": "Hi.\n\nPer Coverity.\n\n2. returned_null: SearchSysCacheAttName returns NULL (checked 20 out of 21\ntimes).\n3. var_assigned: Assigning: ptup = NULL return value from\nSearchSysCacheAttName.\n 964 ptup = SearchSysCacheAttName(relid, attname);\nCID 1545986: (#1 of 1): Dereference null return value (NULL_RETURNS)\n4. dereference: Dereferencing ptup, which is known to be NULL.\n\nThe functions SearchSysCacheAttNum and SearchSysCacheAttName,\nneed to have the result checked.\n\nThe commit 5091995\n<https://github.com/postgres/postgres/commit/509199587df73f06eda898ae13284292f4ae573a>,\nleft an oversight.\n\nFixed by the patch attached, a change of style, unfortunately, was\nnecessary.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 22 May 2024 11:44:32 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "Em qua., 22 de mai. de 2024 às 11:44, Ranier Vilela <[email protected]>\nescreveu:\n\n> Hi.\n>\n> Per Coverity.\n>\n> 2. returned_null: SearchSysCacheAttName returns NULL (checked 20 out of\n> 21 times).\n> 3. var_assigned: Assigning: ptup = NULL return value from\n> SearchSysCacheAttName.\n> 964 ptup = SearchSysCacheAttName(relid, attname);\n> CID 1545986: (#1 of 1): Dereference null return value (NULL_RETURNS)\n> 4. dereference: Dereferencing ptup, which is known to be NULL.\n>\n> The functions SearchSysCacheAttNum and SearchSysCacheAttName,\n> need to have the result checked.\n>\n> The commit 5091995\n> <https://github.com/postgres/postgres/commit/509199587df73f06eda898ae13284292f4ae573a>,\n> left an oversight.\n>\n> Fixed by the patch attached, a change of style, unfortunately, was\n> necessary.\n>\nv1 Attached, fix wrong column variable name in error report.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 22 May 2024 13:09:50 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "Em qua., 22 de mai. de 2024 às 13:09, Ranier Vilela <[email protected]>\nescreveu:\n\n> Em qua., 22 de mai. de 2024 às 11:44, Ranier Vilela <[email protected]>\n> escreveu:\n>\n>> Hi.\n>>\n>> Per Coverity.\n>>\n>> 2. returned_null: SearchSysCacheAttName returns NULL (checked 20 out of\n>> 21 times).\n>> 3. var_assigned: Assigning: ptup = NULL return value from\n>> SearchSysCacheAttName.\n>> 964 ptup = SearchSysCacheAttName(relid, attname);\n>> CID 1545986: (#1 of 1): Dereference null return value (NULL_RETURNS)\n>> 4. dereference: Dereferencing ptup, which is known to be NULL.\n>>\n>> The functions SearchSysCacheAttNum and SearchSysCacheAttName,\n>> need to have the result checked.\n>>\n>> The commit 5091995\n>> <https://github.com/postgres/postgres/commit/509199587df73f06eda898ae13284292f4ae573a>,\n>> left an oversight.\n>>\n>> Fixed by the patch attached, a change of style, unfortunately, was\n>> necessary.\n>>\n> v1 Attached, fix wrong column variable name in error report.\n>\n1. Another concern is the function *get_partition_ancestors*,\nwhich may return NIL, which may affect *llast_oid*, which does not handle\nNIL entries.\n\n2. 
Is checking *relispartition* enough?\nThere a function *check_rel_can_be_partition*\n(src/backend/utils/adt/partitionfuncs.c),\nwhich performs a much more robust check, would it be worth using it?\n\nWith the v2 attached, 1 is handled, but, in this case,\nwill it be the most correct?\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 22 May 2024 15:28:48 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "On Wed, May 22, 2024 at 03:28:48PM -0300, Ranier Vilela wrote:\n> 1. Another concern is the function *get_partition_ancestors*,\n> which may return NIL, which may affect *llast_oid*, which does not handle\n> NIL entries.\n\nHm? We already know in the code path that the relation we are dealing\nwith when calling get_partition_ancestors() *is* a partition thanks to\nthe check on relispartition, no? In this case, calling\nget_partition_ancestors() is valid and there should be a top-most\nparent in any case all the time. So I don't get the point of checking\nget_partition_ancestors() for NIL-ness just for the sake of assuming\nthat it would be possible.\n\n> 2. Is checking *relispartition* enough?\n> There a function *check_rel_can_be_partition*\n> (src/backend/utils/adt/partitionfuncs.c),\n> which performs a much more robust check, would it be worth using it?\n> \n> With the v2 attached, 1 is handled, but, in this case,\n> will it be the most correct?\n\nSaying that, your point about the result of SearchSysCacheAttName not\nchecked if it is a valid tuple is right. We paint errors in these\ncases even if they should not happen as that's useful when it comes to\ndebugging, at least.\n--\nMichael", "msg_date": "Thu, 23 May 2024 09:21:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "On Thu, May 23, 2024 at 5:52 AM Michael Paquier <[email protected]> wrote:\n\n> On Wed, May 22, 2024 at 03:28:48PM -0300, Ranier Vilela wrote:\n> > 1. Another concern is the function *get_partition_ancestors*,\n> > which may return NIL, which may affect *llast_oid*, which does not handle\n> > NIL entries.\n>\n> Hm? We already know in the code path that the relation we are dealing\n> with when calling get_partition_ancestors() *is* a partition thanks to\n> the check on relispartition, no? In this case, calling\n> get_partition_ancestors() is valid and there should be a top-most\n> parent in any case all the time. So I don't get the point of checking\n> get_partition_ancestors() for NIL-ness just for the sake of assuming\n> that it would be possible.\n>\n\n+1.\n\n\n>\n> > 2. Is checking *relispartition* enough?\n> > There a function *check_rel_can_be_partition*\n> > (src/backend/utils/adt/partitionfuncs.c),\n> > which performs a much more robust check, would it be worth using it?\n> >\n> > With the v2 attached, 1 is handled, but, in this case,\n> > will it be the most correct?\n>\n> Saying that, your point about the result of SearchSysCacheAttName not\n> checked if it is a valid tuple is right. We paint errors in these\n> cases even if they should not happen as that's useful when it comes to\n> debugging, at least.\n>\n\nI think an Assert would do instead of whole ereport(). The callers have\nalready resolved attribute name to attribute number. 
Hence the attribute\n*should* exist in both partition as well as topmost partitioned table.\n\n relid = llast_oid(ancestors);\n+\n ptup = SearchSysCacheAttName(relid, attname);\n+ if (!HeapTupleIsValid(ptup))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_UNDEFINED_COLUMN),\n+ errmsg(\"column \\\"%s\\\" of relation \\\"%s\\\" does not exist\",\n+ attname, RelationGetRelationName(rel))));\n\nWe changed the relid from OID of partition to that of topmost partitioned\ntable but didn't change rel; which still points to partition relation. We\nhave to invoke relation_open() with new relid, in order to use rel in the\nerror message. I don't think all that is worth it, unless we find a\nscenario when SearchSysCacheAttName() returns NULL.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, May 23, 2024 at 5:52 AM Michael Paquier <[email protected]> wrote:On Wed, May 22, 2024 at 03:28:48PM -0300, Ranier Vilela wrote:\n> 1. Another concern is the function *get_partition_ancestors*,\n> which may return NIL, which may affect *llast_oid*, which does not handle\n> NIL entries.\n\nHm?  We already know in the code path that the relation we are dealing\nwith when calling get_partition_ancestors() *is* a partition thanks to\nthe check on relispartition, no?  In this case, calling\nget_partition_ancestors() is valid and there should be a top-most\nparent in any case all the time.  So I don't get the point of checking\nget_partition_ancestors() for NIL-ness just for the sake of assuming\nthat it would be possible.+1. \n\n> 2. Is checking *relispartition* enough?\n> There a function *check_rel_can_be_partition*\n> (src/backend/utils/adt/partitionfuncs.c),\n> which performs a much more robust check, would it be worth using it?\n> \n> With the v2 attached, 1 is handled, but, in this case,\n> will it be the most correct?\n\nSaying that, your point about the result of SearchSysCacheAttName not\nchecked if it is a valid tuple is right.  We paint errors in these\ncases even if they should not happen as that's useful when it comes to\ndebugging, at least.I think an Assert would do instead of whole ereport(). The callers have already resolved attribute name to attribute number. Hence the attribute *should* exist in both partition as well as topmost partitioned table.   relid = llast_oid(ancestors);+ \t\tptup = SearchSysCacheAttName(relid, attname);+\t\tif (!HeapTupleIsValid(ptup))+\t\t\tereport(ERROR,+\t\t\t\t\t(errcode(ERRCODE_UNDEFINED_COLUMN),+\t\t\t\t\terrmsg(\"column \\\"%s\\\" of relation \\\"%s\\\" does not exist\",+\t\t\t\t\t\t\tattname, RelationGetRelationName(rel))));We changed the relid from OID of partition to that of topmost partitioned table but didn't change rel; which still points to partition relation. We have to invoke relation_open() with new relid, in order to use rel in the error message. I don't think all that is worth it, unless we find a scenario when SearchSysCacheAttName() returns NULL.-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 23 May 2024 14:57:33 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "Hi Micheal,\n\nEm qua., 22 de mai. de 2024 às 21:21, Michael Paquier <[email protected]>\nescreveu:\n\n> On Wed, May 22, 2024 at 03:28:48PM -0300, Ranier Vilela wrote:\n> > 1. Another concern is the function *get_partition_ancestors*,\n> > which may return NIL, which may affect *llast_oid*, which does not handle\n> > NIL entries.\n>\n> Hm? 
We already know in the code path that the relation we are dealing\n> with when calling get_partition_ancestors() *is* a partition thanks to\n> the check on relispartition, no? In this case, calling\n> get_partition_ancestors() is valid and there should be a top-most\n> parent in any case all the time. So I don't get the point of checking\n> get_partition_ancestors() for NIL-ness just for the sake of assuming\n> that it would be possible.\n>\nI don't have strong feelings about this.\nBut analyzing the function, *pg_partition_root*\n(src/backend/utils/adt/partitionfuncs.c),\nwe see that checking whether it is a partition is done by\ncheck_rel_can_be_partition.\nAnd it doesn't trust get_partition_ancestors, checking\nif the return is NIL.\n\n>\n> > 2. Is checking *relispartition* enough?\n> > There a function *check_rel_can_be_partition*\n> > (src/backend/utils/adt/partitionfuncs.c),\n> > which performs a much more robust check, would it be worth using it?\n> >\n> > With the v2 attached, 1 is handled, but, in this case,\n> > will it be the most correct?\n>\n> Saying that, your point about the result of SearchSysCacheAttName not\n> checked if it is a valid tuple is right. We paint errors in these\n> cases even if they should not happen as that's useful when it comes to\n> debugging, at least.\n>\nThanks.\n\nbest regards,\nRanier Vilela\n\n> --\n> Michael\n>\n\nHi Micheal,Em qua., 22 de mai. de 2024 às 21:21, Michael Paquier <[email protected]> escreveu:On Wed, May 22, 2024 at 03:28:48PM -0300, Ranier Vilela wrote:\n> 1. Another concern is the function *get_partition_ancestors*,\n> which may return NIL, which may affect *llast_oid*, which does not handle\n> NIL entries.\n\nHm?  We already know in the code path that the relation we are dealing\nwith when calling get_partition_ancestors() *is* a partition thanks to\nthe check on relispartition, no?  In this case, calling\nget_partition_ancestors() is valid and there should be a top-most\nparent in any case all the time.  So I don't get the point of checking\nget_partition_ancestors() for NIL-ness just for the sake of assuming\nthat it would be possible.I don't have strong feelings about this.But analyzing the function, *pg_partition_root* (src/backend/utils/adt/partitionfuncs.c),we see that checking whether it is a partition is done bycheck_rel_can_be_partition.And it doesn't trust get_partition_ancestors, checkingif the return is NIL. \n\n> 2. Is checking *relispartition* enough?\n> There a function *check_rel_can_be_partition*\n> (src/backend/utils/adt/partitionfuncs.c),\n> which performs a much more robust check, would it be worth using it?\n> \n> With the v2 attached, 1 is handled, but, in this case,\n> will it be the most correct?\n\nSaying that, your point about the result of SearchSysCacheAttName not\nchecked if it is a valid tuple is right.  We paint errors in these\ncases even if they should not happen as that's useful when it comes to\ndebugging, at least.Thanks.best regards,Ranier Vilela \n--\nMichael", "msg_date": "Thu, 23 May 2024 08:23:18 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "Em qui., 23 de mai. de 2024 às 06:27, Ashutosh Bapat <\[email protected]> escreveu:\n\n>\n>\n> On Thu, May 23, 2024 at 5:52 AM Michael Paquier <[email protected]>\n> wrote:\n>\n>> On Wed, May 22, 2024 at 03:28:48PM -0300, Ranier Vilela wrote:\n>> > 1. 
Another concern is the function *get_partition_ancestors*,\n>> > which may return NIL, which may affect *llast_oid*, which does not\n>> handle\n>> > NIL entries.\n>>\n>> Hm? We already know in the code path that the relation we are dealing\n>> with when calling get_partition_ancestors() *is* a partition thanks to\n>> the check on relispartition, no? In this case, calling\n>> get_partition_ancestors() is valid and there should be a top-most\n>> parent in any case all the time. So I don't get the point of checking\n>> get_partition_ancestors() for NIL-ness just for the sake of assuming\n>> that it would be possible.\n>>\n>\n> +1.\n>\n>\n>>\n>> > 2. Is checking *relispartition* enough?\n>> > There a function *check_rel_can_be_partition*\n>> > (src/backend/utils/adt/partitionfuncs.c),\n>> > which performs a much more robust check, would it be worth using it?\n>> >\n>> > With the v2 attached, 1 is handled, but, in this case,\n>> > will it be the most correct?\n>>\n>> Saying that, your point about the result of SearchSysCacheAttName not\n>> checked if it is a valid tuple is right. We paint errors in these\n>> cases even if they should not happen as that's useful when it comes to\n>> debugging, at least.\n>>\n>\n> I think an Assert would do instead of whole ereport().\n>\nIMO, Assert there is no better solution here.\n\n\n> The callers have already resolved attribute name to attribute number.\n> Hence the attribute *should* exist in both partition as well as topmost\n> partitioned table.\n>\n> relid = llast_oid(ancestors);\n> +\n> ptup = SearchSysCacheAttName(relid, attname);\n> + if (!HeapTupleIsValid(ptup))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_UNDEFINED_COLUMN),\n> + errmsg(\"column \\\"%s\\\" of relation \\\"%s\\\" does not exist\",\n> + attname, RelationGetRelationName(rel))));\n>\n> We changed the relid from OID of partition to that of topmost partitioned\n> table but didn't change rel; which still points to partition relation. We\n> have to invoke relation_open() with new relid, in order to use rel in the\n> error message. I don't think all that is worth it, unless we find a\n> scenario when SearchSysCacheAttName() returns NULL.\n>\nAll calls to functions like SearchSysCacheAttName, in the whole codebase,\nchecks if returns are valid.\nIt must be for a very strong reason, such a style.\n\nSo, v3, implements it this way.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 23 May 2024 08:54:12 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "On Thu, May 23, 2024 at 08:54:12AM -0300, Ranier Vilela wrote:\n> All calls to functions like SearchSysCacheAttName, in the whole codebase,\n> checks if returns are valid.\n> It must be for a very strong reason, such a style.\n\nUsually good practice, as I've outlined once upthread, because we do\nexpect the attributes to exist in this case. Or if you want, an error\nis better than a crash if a concurrent path causes this area to lead\nto inconsistent lookups, which is something I've seen in the past\nwhile hacking on my own stuff, or just fix other things causing\nsyscache lookup inconsistencies. You'd be surprised to hear that\ndropped attributes being mishandled is not that uncommon, especially\nin out-of-core code, as one example. FWIW, I don't see much a point\nin using ereport(), the two checks ought to be elog()s pointing to an\ninternal error as these two errors should never happen. 
Still, it is\na good idea to check that they never happen: aka an internal \nerror state is better than a crash if a problem arises.\n\n> So, v3, implements it this way.\n\nI don't understand the point behind the open/close of attrelation,\nTBH. That's not needed.\n\nExcept fot these two points, this is just moving the calls to make\nsure that we have valid tuples from the syscache, which is a better\npractice. 509199587df7 is recent enough that this should be fixed now\nrather than later.\n--\nMichael", "msg_date": "Fri, 24 May 2024 14:33:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "On Fri, May 24, 2024 at 11:03 AM Michael Paquier <[email protected]>\nwrote:\n\n> On Thu, May 23, 2024 at 08:54:12AM -0300, Ranier Vilela wrote:\n> > All calls to functions like SearchSysCacheAttName, in the whole codebase,\n> > checks if returns are valid.\n> > It must be for a very strong reason, such a style.\n>\n> Usually good practice, as I've outlined once upthread, because we do\n> expect the attributes to exist in this case. Or if you want, an error\n> is better than a crash if a concurrent path causes this area to lead\n> to inconsistent lookups, which is something I've seen in the past\n> while hacking on my own stuff, or just fix other things causing\n> syscache lookup inconsistencies. You'd be surprised to hear that\n> dropped attributes being mishandled is not that uncommon, especially\n> in out-of-core code, as one example. FWIW, I don't see much a point\n> in using ereport(), the two checks ought to be elog()s pointing to an\n> internal error as these two errors should never happen. Still, it is\n> a good idea to check that they never happen: aka an internal\n> error state is better than a crash if a problem arises.\n>\n> > So, v3, implements it this way.\n>\n> I don't understand the point behind the open/close of attrelation,\n> TBH. That's not needed.\n>\n> Except fot these two points, this is just moving the calls to make\n> sure that we have valid tuples from the syscache, which is a better\n> practice. 509199587df7 is recent enough that this should be fixed now\n> rather than later.\n>\n\nIf we are looking for avoiding a segfault and get a message which helps\ndebugging, using get_attname and get_attnum might be better options.\nget_attname throws an error. get_attnum doesn't throw an error and returns\nInvalidAttnum which won't return any valid identity sequence, and thus\nreturn a NIL sequence list which is handled in that function already. Using\nthese two functions will avoid the clutter as well as segfault. If that's\nacceptable, I will provide a patch.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, May 24, 2024 at 11:03 AM Michael Paquier <[email protected]> wrote:On Thu, May 23, 2024 at 08:54:12AM -0300, Ranier Vilela wrote:\n> All calls to functions like SearchSysCacheAttName, in the whole codebase,\n> checks if returns are valid.\n> It must be for a very strong reason, such a style.\n\nUsually good practice, as I've outlined once upthread, because we do\nexpect the attributes to exist in this case.  Or if you want, an error\nis better than a crash if a concurrent path causes this area to lead\nto inconsistent lookups, which is something I've seen in the past\nwhile hacking on my own stuff, or just fix other things causing\nsyscache lookup inconsistencies.  
You'd be surprised to hear that\ndropped attributes being mishandled is not that uncommon, especially\nin out-of-core code, as one example.  FWIW, I don't see much a point\nin using ereport(), the two checks ought to be elog()s pointing to an\ninternal error as these two errors should never happen.  Still, it is\na good idea to check that they never happen: aka an internal \nerror state is better than a crash if a problem arises.\n\n> So, v3, implements it this way.\n\nI don't understand the point behind the open/close of attrelation,\nTBH.  That's not needed.\n\nExcept fot these two points, this is just moving the calls to make\nsure that we have valid tuples from the syscache, which is a better\npractice.  509199587df7 is recent enough that this should be fixed now\nrather than later.If we are looking for avoiding a segfault and get a message which helps debugging, using get_attname and get_attnum might be better options. get_attname throws an error. get_attnum doesn't throw an error and returns InvalidAttnum which won't return any valid identity sequence, and thus return a NIL sequence list which is handled in that function already. Using these two functions will avoid the clutter as well as segfault. If that's acceptable, I will provide a patch. -- Best Wishes,Ashutosh Bapat", "msg_date": "Fri, 24 May 2024 11:58:51 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "On Fri, May 24, 2024 at 11:58:51AM +0530, Ashutosh Bapat wrote:\n> If we are looking for avoiding a segfault and get a message which helps\n> debugging, using get_attname and get_attnum might be better options.\n> get_attname throws an error. get_attnum doesn't throw an error and returns\n> InvalidAttnum which won't return any valid identity sequence, and thus\n> return a NIL sequence list which is handled in that function already. Using\n> these two functions will avoid the clutter as well as segfault. If that's\n> acceptable, I will provide a patch.\n\nYeah, you could do that with these two routines as well. The result\nwould be the same in terms of runtime validity checks.\n--\nMichael", "msg_date": "Fri, 24 May 2024 15:45:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "On Fri, May 24, 2024 at 12:16 PM Michael Paquier <[email protected]>\nwrote:\n\n> On Fri, May 24, 2024 at 11:58:51AM +0530, Ashutosh Bapat wrote:\n> > If we are looking for avoiding a segfault and get a message which helps\n> > debugging, using get_attname and get_attnum might be better options.\n> > get_attname throws an error. get_attnum doesn't throw an error and\n> returns\n> > InvalidAttnum which won't return any valid identity sequence, and thus\n> > return a NIL sequence list which is handled in that function already.\n> Using\n> > these two functions will avoid the clutter as well as segfault. If that's\n> > acceptable, I will provide a patch.\n>\n> Yeah, you could do that with these two routines as well. 
The result\n> would be the same in terms of runtime validity checks.\n>\n\nPFA patch using those two routines.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Fri, 24 May 2024 17:18:32 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "Em sex., 24 de mai. de 2024 às 08:48, Ashutosh Bapat <\[email protected]> escreveu:\n\n>\n>\n> On Fri, May 24, 2024 at 12:16 PM Michael Paquier <[email protected]>\n> wrote:\n>\n>> On Fri, May 24, 2024 at 11:58:51AM +0530, Ashutosh Bapat wrote:\n>> > If we are looking for avoiding a segfault and get a message which helps\n>> > debugging, using get_attname and get_attnum might be better options.\n>> > get_attname throws an error. get_attnum doesn't throw an error and\n>> returns\n>> > InvalidAttnum which won't return any valid identity sequence, and thus\n>> > return a NIL sequence list which is handled in that function already.\n>> Using\n>> > these two functions will avoid the clutter as well as segfault. If\n>> that's\n>> > acceptable, I will provide a patch.\n>>\n>> Yeah, you could do that with these two routines as well. The result\n>> would be the same in terms of runtime validity checks.\n>>\n>\n> PFA patch using those two routines.\n>\nThe function *get_attname* palloc the result name (pstrdup).\nIsn't it necessary to free the memory here (pfree)?\n\nbest regards,\nRanier Vilela\n\nEm sex., 24 de mai. de 2024 às 08:48, Ashutosh Bapat <[email protected]> escreveu:On Fri, May 24, 2024 at 12:16 PM Michael Paquier <[email protected]> wrote:On Fri, May 24, 2024 at 11:58:51AM +0530, Ashutosh Bapat wrote:\n> If we are looking for avoiding a segfault and get a message which helps\n> debugging, using get_attname and get_attnum might be better options.\n> get_attname throws an error. get_attnum doesn't throw an error and returns\n> InvalidAttnum which won't return any valid identity sequence, and thus\n> return a NIL sequence list which is handled in that function already. Using\n> these two functions will avoid the clutter as well as segfault. If that's\n> acceptable, I will provide a patch.\n\nYeah, you could do that with these two routines as well.  The result\nwould be the same in terms of runtime validity checks.PFA patch using those two routines. The function *get_attname* palloc the result name (pstrdup).Isn't it necessary to free the memory here (pfree)?best regards,Ranier Vilela", "msg_date": "Fri, 24 May 2024 09:05:35 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "On Fri, May 24, 2024 at 09:05:35AM -0300, Ranier Vilela wrote:\n> The function *get_attname* palloc the result name (pstrdup).\n> Isn't it necessary to free the memory here (pfree)?\n\nThis is going to be freed with the current memory context, and all the\ncallers of getIdentitySequence() are in query execution paths, so I\ndon't see much the point. A second thing was a missing check on the\nattnum returned by get_attnum() with InvalidAttrNumber. I'd be\ntempted to introduce a missing_ok to this routine after looking at the\ncallers in all the tree, as some of them want to fail still would not\nexpect it, so that would reduce a bit the elog churn. 
That's a story\nfor a different day, though.\n--\nMichael", "msg_date": "Sun, 26 May 2024 16:40:16 -0700", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" }, { "msg_contents": "Thanks a lot Michael.\n\nOn Sun, May 26, 2024 at 4:40 PM Michael Paquier <[email protected]> wrote:\n\n> On Fri, May 24, 2024 at 09:05:35AM -0300, Ranier Vilela wrote:\n> > The function *get_attname* palloc the result name (pstrdup).\n> > Isn't it necessary to free the memory here (pfree)?\n>\n> This is going to be freed with the current memory context, and all the\n> callers of getIdentitySequence() are in query execution paths, so I\n> don't see much the point. A second thing was a missing check on the\n> attnum returned by get_attnum() with InvalidAttrNumber. I'd be\n> tempted to introduce a missing_ok to this routine after looking at the\n> callers in all the tree, as some of them want to fail still would not\n> expect it, so that would reduce a bit the elog churn. That's a story\n> for a different day, though.\n> --\n> Michael\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 27 May 2024 11:27:05 -0700", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid possible dereference null pointer\n (src/backend/catalog/pg_depend.c)" } ]
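The fix discussed in this thread comes down to never dereferencing the result of a syscache lookup without checking it first. The fragment below is only an illustrative, backend-style sketch: it would have to be compiled as part of the server or an extension, the demo_* function names are invented, and it is not the committed change. It contrasts the two lookup styles mentioned above, an explicit HeapTupleIsValid() check on SearchSysCacheAttName() versus the get_attnum() helper that reports a missing column as InvalidAttrNumber instead of handing back a pointer at all.

#include "postgres.h"

#include "access/htup_details.h"
#include "catalog/pg_attribute.h"
#include "utils/lsyscache.h"
#include "utils/syscache.h"

/* Style 1: raw syscache lookup; the caller must check for an invalid tuple. */
static AttrNumber
demo_lookup_attnum_checked(Oid relid, const char *attname)
{
	HeapTuple	tup;
	AttrNumber	attnum;

	tup = SearchSysCacheAttName(relid, attname);
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for attribute \"%s\" of relation %u",
			 attname, relid);
	attnum = ((Form_pg_attribute) GETSTRUCT(tup))->attnum;
	ReleaseSysCache(tup);
	return attnum;
}

/*
 * Style 2: the lsyscache helper hides the tuple entirely; a missing column
 * simply yields InvalidAttrNumber for the caller to handle.
 */
static AttrNumber
demo_lookup_attnum_soft(Oid relid, const char *attname)
{
	return get_attnum(relid, attname);
}

Either style avoids the Coverity-reported null dereference; the difference is whether a missing column is treated as an internal error or as a condition the caller is expected to handle.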
[ { "msg_contents": "Hi!\n\nI faced the issue, when the sorting node in the actual information  \nshows a larger number of tuples than it actually is. And I can not \nunderstand why?\n\nI attached the dump file with my database and run this query that \nconsists underestimation and it works fine.\n\nUnfortunately, I could not find the approach how I can improve statistic \ninformation here, so I did it in the code.\n\nI needed to better cardinality in IndexScan and MergeJoin nodes. I \nhighlighted them in bold.\n\npostgres=# set enable_hashjoin =off;\nSET\npostgres=# set enable_nestloop =off;\nSET\n\nexplain analyze select cname, avg(degree) from course, student,score \njoin broke_down_course on\n(score.cno=broke_down_course.cno and score.sno=broke_down_course.sno) \nwhere score.sno = student.sno group by (cname);\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  (cost=10000001523.70..10000001588.95 rows=10 \nwidth=250) (actual time=262.903..322.973 rows=10 loops=1)\n    Group Key: course.cname\n    ->  Sort  (cost=10000001523.70..10000001545.41 rows=8684 width=222) \n(actual time=256.221..283.710 rows=77540 loops=1)\n          Sort Key: course.cname\n          Sort Method: external merge  Disk: 4656kB\n          ->  Nested Loop  (cost=10000000614.18..10000000955.58 \nrows=8684 width=222) (actual time=7.518..160.518 rows=77540 loops=1)\n*->  Merge Join  (cost=614.18..845.96 rows=868 width=4) (actual \ntime=7.497..126.632 rows=7754 loops=1)*\n                      Merge Cond: ((score.sno = broke_down_course.sno) \nAND (score.cno = broke_down_course.cno))\n*->  Merge Join  (cost=0.70..1297.78 rows=29155 width=16) (actual \ntime=0.099..99.329 rows=29998 loops=1)*\n                            Merge Cond: (score.sno = student.sno)\n*->  Index Scan using score_idx1 on score  (cost=0.42..10125.41 \nrows=29998 width=12) (actual time=0.045..74.427 rows=29998 loops=1)*\n                            ->  Index Only Scan using student_pkey on \nstudent  (cost=0.28..89.28 rows=3000 width=4) (actual time=0.045..2.170 \nrows=3000 loops=1)\n                                  Heap Fetches: 0\n                      ->  Sort  (cost=613.48..632.86 rows=7754 width=8) \n(actual time=7.378..9.626 rows=7754 loops=1)\n                            Sort Key: broke_down_course.sno, \nbroke_down_course.cno\n                            Sort Method: quicksort  Memory: 374kB\n                            ->  Seq Scan on broke_down_course \n(cost=0.00..112.54 rows=7754 width=8) (actual time=0.028..1.428 \nrows=7754 loops=1)\n                ->  Materialize  (cost=0.00..1.15 rows=10 width=218) \n(actual time=0.000..0.001 rows=10 loops=7754)\n                      ->  Seq Scan on course  (cost=0.00..1.10 rows=10 \nwidth=218) (actual time=0.012..0.017 rows=10 loops=1)\n  Planning Time: 124.591 ms\n  Execution Time: 326.547 ms\n\nWhen I run this query again I see that the Sort node shows more actual \ndata than it was in SeqScan (I highlighted it).\n\npostgres=# explain analyze select cname, avg(degree) from course, \nstudent,score join broke_down_course on\n(score.cno=broke_down_course.cno and score.sno=broke_down_course.sno) \nwhere score.sno = student.sno group by (cname);\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n  GroupAggregate  
(cost=10000001746.28..10000001811.53 rows=10 \nwidth=250) (actual time=553.428..615.028 rows=10 loops=1)\n    Group Key: course.cname\n    ->  Sort  (cost=10000001746.28..10000001767.99 rows=8684 width=222) \n(actual time=546.531..574.223 rows=77540 loops=1)\n          Sort Key: course.cname\n          Sort Method: external merge  Disk: 4656kB\n          ->  Merge Join  (cost=10000000614.18..10000001178.16 rows=8684 \nwidth=222) (actual time=7.892..448.889 rows=77540 loops=1)\n                Merge Cond: ((score.sno = broke_down_course.sno) AND \n(score.cno = broke_down_course.cno))\n                ->  Merge Join (cost=10000000000.70..10000002146.57 \nrows=291550 width=234) (actual time=0.137..318.345 rows=299971 loops=1)\n                      Merge Cond: (score.sno = student.sno)\n                      ->  Index Scan using score_idx1 on score \n(cost=0.42..10125.41 rows=29998 width=12) (actual time=0.046..76.505 \nrows=29998 loops=1)\n                      ->  Materialize \n(cost=10000000000.28..10000000540.41 rows=30000 width=222) (actual \ntime=0.082..76.345 rows=299964 loops=1)\n                            ->  Nested Loop \n(cost=10000000000.28..10000000465.41 rows=30000 width=222) (actual \ntime=0.077..16.543 rows=30000 loops=1)\n                                  ->  Index Only Scan using student_pkey \non student  (cost=0.28..89.28 rows=3000 width=4) (actual \ntime=0.045..2.774 rows=3000 loops=1)\n                                        Heap Fetches: 0\n                                  ->  Materialize (cost=0.00..1.15 \nrows=10 width=218) (actual time=0.000..0.002 rows=10 loops=3000)\n                                        ->  Seq Scan on course \n(cost=0.00..1.10 rows=10 width=218) (actual time=0.023..0.038 rows=10 \nloops=1)\n*->  Sort  (cost=613.48..632.86 rows=7754 width=8) (actual \ntime=7.612..21.214 rows=77531 loops=1)*\n                      Sort Key: broke_down_course.sno, broke_down_course.cno\n                      Sort Method: quicksort  Memory: 374kB\n                      ->  Seq Scan on broke_down_course \n(cost=0.00..112.54 rows=7754 width=8) (actual time=0.016..1.366 \nrows=7754 loops=1)\n  Planning Time: 96.685 ms\n  Execution Time: 618.538 ms\n(22 rows)\n\n\nMaybe, my reproduction looks questionable, sorry for that, but I \nseriously don't understand why we have so many tuples here in Sort node.", "msg_date": "Wed, 22 May 2024 23:31:21 +0300", "msg_from": "\"a.rybakina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sort operation displays more tuples than it contains its subnode" }, { "msg_contents": "On Thu, 23 May 2024 at 08:48, a.rybakina <[email protected]> wrote:\n> -> Sort (cost=613.48..632.86 rows=7754 width=8) (actual time=7.612..21.214 rows=77531 loops=1)\n> Sort Key: broke_down_course.sno, broke_down_course.cno\n> Sort Method: quicksort Memory: 374kB\n> -> Seq Scan on broke_down_course (cost=0.00..112.54 rows=7754 width=8) (actual time=0.016..1.366 rows=7754 loops=1)\n\n\n> Maybe, my reproduction looks questionable, sorry for that, but I seriously don't understand why we have so many tuples here in Sort node.\n\nThis is because of the \"mark and restore\" that occurs because of the\nMerge Join. This must happen for joins because every tuple matching\nthe join condition must join to every other tuple that matches the\njoin condition. 
That means, if you have 3 tuples with the same key on\neither side, you get 9 rows, not 3 rows.\n\nHere's a simple example of the behaviour you describe shrunk down so\nthat it's more easily understandable:\n\ncreate table t(a int);\ninsert into t values(1),(1),(1);\nset enable_hashjoin=0;\nset enable_nestloop=0;\nexplain (analyze, costs off) select * from t inner join t t1 on t.a=t1.a;\n QUERY PLAN\n------------------------------------------------------------------------\n Merge Join (actual time=0.036..0.038 rows=9 loops=1)\n Merge Cond: (t.a = t1.a)\n -> Sort (actual time=0.025..0.025 rows=3 loops=1)\n Sort Key: t.a\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on t (actual time=0.017..0.018 rows=3 loops=1)\n -> Sort (actual time=0.007..0.007 rows=7 loops=1)\n Sort Key: t1.a\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on t t1 (actual time=0.003..0.003 rows=3 loops=1)\n\nNote the sort has rows=7 and the Seq Scan on t1 rows=3 and an output of 9 rows.\n\nIf you look at the code in [1], you can see the restoreMark() calls to\nachieve this.\n\nDavid\n\n[1] https://en.wikipedia.org/wiki/Sort-merge_join\n\n\n", "msg_date": "Thu, 23 May 2024 09:08:35 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort operation displays more tuples than it contains its subnode" }, { "msg_contents": "\"a.rybakina\" <[email protected]> writes:\n> I faced the issue, when the sorting node in the actual information  \n> shows a larger number of tuples than it actually is. And I can not \n> understand why?\n\nIf I'm reading this correctly, the sort node you're worrying about\nfeeds the inner side of a merge join. Merge join will rewind its\ninner side to the start of the current group of equal-keyed tuples\nwhenever it sees that the next outer tuple must also be joined to\nthat group. Since what EXPLAIN is counting is the number of tuples\nreturned from the node, that causes it to double-count those tuples.\nThe more duplicate-keyed tuples on the outer side, the bigger the\neffect.\n\nYou can see the same thing happening at the Materialize a little\nfurther up, which is feeding the inside of the other merge join.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 May 2024 17:17:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort operation displays more tuples than it contains its subnode" }, { "msg_contents": "Yes, I got it. Thank you very much for the explanation.\n\nOn 23.05.2024 00:17, Tom Lane wrote:\n> \"a.rybakina\" <[email protected]> writes:\n>> I faced the issue, when the sorting node in the actual information\n>> shows a larger number of tuples than it actually is. And I can not\n>> understand why?\n> If I'm reading this correctly, the sort node you're worrying about\n> feeds the inner side of a merge join. Merge join will rewind its\n> inner side to the start of the current group of equal-keyed tuples\n> whenever it sees that the next outer tuple must also be joined to\n> that group. 
Since what EXPLAIN is counting is the number of tuples\n> returned from the node, that causes it to double-count those tuples.\n> The more duplicate-keyed tuples on the outer side, the bigger the\n> effect.\n>\n> You can see the same thing happening at the Materialize a little\n> further up, which is feeding the inside of the other merge join.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 23 May 2024 17:16:14 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort operation displays more tuples than it contains its subnode" } ]
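Both answers above point at the same mechanism: EXPLAIN ANALYZE counts every tuple a node hands back, and a merge join rewinds ("mark and restore") the inner side to the start of the current group of equal keys for each matching outer tuple, so the inner Sort node reports more rows than it was fed. The toy C program below is not PostgreSQL code; it just models a sort-merge join over two already-sorted integer arrays and counts inner-side fetches to show where the multiplier comes from.

/*
 * Toy sort-merge join with mark/restore.  The fetch counter plays the role
 * of the inner Sort node's "rows=" figure in EXPLAIN ANALYZE.
 */
#include <stdio.h>

int
main(void)
{
	int		outer[] = {1, 1, 1};	/* already sorted */
	int		inner[] = {1, 1, 1};	/* already sorted */
	int		nouter = 3;
	int		ninner = 3;
	int		i = 0;
	int		j = 0;
	long	inner_fetches = 0;
	long	joined = 0;

	while (i < nouter && j < ninner)
	{
		if (outer[i] < inner[j])
			i++;
		else if (outer[i] > inner[j])
		{
			j++;
			inner_fetches++;
		}
		else
		{
			int		mark = j;	/* remember the start of the equal group */

			/* join the current outer tuple to the whole inner group */
			while (j < ninner && inner[j] == outer[i])
			{
				inner_fetches++;
				joined++;
				j++;
			}
			i++;
			/* restore: the next outer tuple may match the same group */
			if (i < nouter && outer[i] == inner[mark])
				j = mark;
		}
	}

	printf("inner rows: %d, inner fetches: %ld, join output: %ld\n",
		   ninner, inner_fetches, joined);
	return 0;
}

With three duplicates on each side this prints 3 input rows, 9 inner fetches, and 9 joined rows. The exact figure PostgreSQL reports (rows=7 in David's example rather than 9) depends on how the executor detects the end of an equal-key group, but the multiplication effect is the same.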
[ { "msg_contents": "https://postgr.es/m/[email protected] wrote:\n> Separable, nontrivial things not fixed in the attached patch stack:\n> \n> - Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n> CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n> still seen in inplace-inval.spec. CacheInvalidateRelmap() does this right.\n\nI plan to fix that like CacheInvalidateRelmap(): send the inval immediately,\ninside the critical section. Send it in heap_xlog_inplace(), too. The\ninteresting decision is how to handle RelationCacheInitFilePreInvalidate(),\nwhich has an unlink_initfile() that can fail with e.g. EIO. Options:\n\n1. Unlink during critical section, and accept that EIO becomes PANIC. Replay\n may reach the same EIO, and the system won't reopen to connections until\n the storage starts cooperating.a Interaction with checkpoints is not ideal.\n If we checkpoint and then crash between inplace XLogInsert() and inval,\n we'd be relying on StartupXLOG() -> RelationCacheInitFileRemove(). That\n uses elevel==LOG, so replay would neglect to PANIC on EIO.\n\n2. Unlink before critical section, so normal xact abort suffices. This would\n hold RelCacheInitLock and a buffer content lock at the same time. In\n RecordTransactionCommit(), it would hold RelCacheInitLock and e.g. slru\n locks at the same time.\n\nThe PANIC risk of (1) seems similar to the risk of PANIC at\nRecordTransactionCommit() -> XLogFlush(), which hasn't been a problem. The\ncheckpoint-related risk bothers me more, and (1) generally makes it harder to\nreason about checkpoint interactions. The lock order risk of (2) feels\ntolerable. I'm leaning toward (2), but that might change. Other preferences?\n\nAnother decision is what to do about LogLogicalInvalidations(). Currently,\ninplace update invalidations do reach WAL via LogLogicalInvalidations() at the\nnext CCI. Options:\n\na. Within logical decoding, cease processing invalidations for inplace\n updates. Inplace updates don't affect storage or interpretation of table\n rows, so they don't affect logicalrep_write_tuple() outcomes. If they did,\n invalidations wouldn't make it work. Decoding has no way to retrieve a\n snapshot-appropriate version of the inplace-updated value.\n\nb. Make heap_decode() of XLOG_HEAP_INPLACE recreate the invalidation. This\n would be, essentially, cheap insurance against invalidations having a\n benefit I missed in (a).\n\nI plan to pick (a).\n\n> - AtEOXact_Inval(true) is outside the RecordTransactionCommit() critical\n> section, but it is critical. We must not commit transactional DDL without\n> other backends receiving an inval. (When the inplace inval becomes\n> nontransactional, it will face the same threat.)\n\nThis faces the same RelationCacheInitFilePreInvalidate() decision, and I think\nthe conclusion should be the same as for inplace update.\n\nThanks,\nnm\n\n\n", "msg_date": "Wed, 22 May 2024 17:05:48 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Inval reliability, especially for inplace updates" }, { "msg_contents": "On Wed, May 22, 2024 at 05:05:48PM -0700, Noah Misch wrote:\n> https://postgr.es/m/[email protected] wrote:\n> > Separable, nontrivial things not fixed in the attached patch stack:\n> > \n> > - Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n> > CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n> > still seen in inplace-inval.spec. 
CacheInvalidateRelmap() does this right.\n> \n> I plan to fix that like CacheInvalidateRelmap(): send the inval immediately,\n> inside the critical section. Send it in heap_xlog_inplace(), too.\n\n> a. Within logical decoding, cease processing invalidations for inplace\n\nI'm attaching the implementation. This applies atop the v3 patch stack from\nhttps://postgr.es/m/[email protected], but the threads are\nmostly orthogonal and intended for independent review. Translating a tuple\ninto inval messages uses more infrastructure than relmapper, which needs just\na database ID. Hence, this ended up more like a miniature of inval.c's\nparticipation in the transaction commit sequence.\n\nI waffled on whether to back-patch inplace150-inval-durability-atcommit. The\nconsequences of that bug are plenty bad, but reaching them requires an error\nbetween TransactionIdCommitTree() and AtEOXact_Inval(). I've not heard\nreports of that, and I don't have a recipe for making it happen on demand.\nFor now, I'm leaning toward back-patch. The main risk would be me overlooking\nan LWLock deadlock scenario reachable from the new, earlier RelCacheInitLock\ntiming. Alternatives for RelCacheInitLock:\n\n- RelCacheInitLock before PreCommit_Notify(), because notify concurrency\n matters more than init file concurrency. I chose this.\n- RelCacheInitLock after PreCommit_Notify(), because PreCommit_Notify() uses a\n heavyweight lock, giving it less risk of undetected deadlock.\n- Replace RelCacheInitLock with a heavyweight lock, and keep it before\n PreCommit_Notify().\n- Fold PreCommit_Inval() back into AtCommit_Inval(), accepting that EIO in\n unlink_initfile() will PANIC.\n\nOpinions on that?\n\nThe patch changes xl_heap_inplace of XLOG_HEAP_INPLACE. For back branches, we\ncould choose between:\n\n- Same change, no WAL version bump. Standby must update before primary. This\n is best long-term, but the transition is more disruptive. I'm leaning\n toward this one, but the second option isn't bad:\n\n- heap_xlog_inplace() could set the shared-inval-queue overflow signal on\n every backend. This is more wasteful, but inplace updates might be rare\n enough (~once per VACUUM) to make it tolerable.\n\n- Use LogStandbyInvalidations() just after XLOG_HEAP_INPLACE. This isn't\n correct if one ends recovery between the two records, but you'd need to be\n unlucky to notice. Noticing would need a procedure like the following. A\n hot standby backend populates a relcache entry, then does DDL on the rel\n after recovery ends.\n\nFuture cleanup work could eliminate LogStandbyInvalidations() and the case of\n!markXidCommitted && nmsgs != 0. Currently, the src/test/regress suite still\nreaches that case:\n\n- AlterDomainDropConstraint() queues an inval even if !found; it can stop\n that.\n\n- ON COMMIT DELETE ROWS nontransactionally rebuilds an index, which sends a\n relcache inval. The point of that inval is, I think, to force access\n methods like btree and hash to reload the metapage copy that they store in\n rd_amcache. Since no assigned XID implies no changes to the temp index, the\n no-XID case could simply skip the index rebuild. (temp.sql reaches this\n with a read-only transaction that selects from an ON COMMIT DELETE ROWS\n table. Realistic usage will tend not to do that.) 
ON COMMIT DELETE ROWS\n has another preexisting problem for indexes, mentioned in a code comment.\n\nThanks,\nnm", "msg_date": "Sat, 15 Jun 2024 15:37:18 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "On Sat, Jun 15, 2024 at 03:37:18PM -0700, Noah Misch wrote:\n> I'm attaching the implementation.\n\nI'm withdrawing inplace150-inval-durability-atcommit-v1.patch, having found\ntwo major problems so far:\n\n1. It sends transactional invalidation messages before\n ProcArrayEndTransaction(), so other backends can read stale data.\n\n2. It didn't make the equivalent changes for COMMIT PREPARED.\n\n\n", "msg_date": "Sun, 16 Jun 2024 15:12:05 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "On Sat, Jun 15, 2024 at 03:37:18PM -0700, Noah Misch wrote:\n> On Wed, May 22, 2024 at 05:05:48PM -0700, Noah Misch wrote:\n> > https://postgr.es/m/[email protected] wrote:\n> > > Separable, nontrivial things not fixed in the attached patch stack:\n> > > \n> > > - Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n> > > CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n> > > still seen in inplace-inval.spec. CacheInvalidateRelmap() does this right.\n> > \n> > I plan to fix that like CacheInvalidateRelmap(): send the inval immediately,\n> > inside the critical section. Send it in heap_xlog_inplace(), too.\n> \n> > a. Within logical decoding, cease processing invalidations for inplace\n> \n> I'm attaching the implementation. This applies atop the v3 patch stack from\n> https://postgr.es/m/[email protected], but the threads are\n> mostly orthogonal and intended for independent review. Translating a tuple\n> into inval messages uses more infrastructure than relmapper, which needs just\n> a database ID. Hence, this ended up more like a miniature of inval.c's\n> participation in the transaction commit sequence.\n> \n> I waffled on whether to back-patch inplace150-inval-durability-atcommit\n\nThat inplace150 patch turned out to be unnecessary. Contrary to the\n\"noncritical resource releasing\" comment some lines above\nAtEOXact_Inval(true), the actual behavior is already to promote ERROR to\nPANIC. An ERROR just before or after sending invals becomes PANIC, \"cannot\nabort transaction %u, it was already committed\". Since\ninplace130-AtEOXact_RelationCache-comments existed to clear the way for\ninplace150, inplace130 also becomes unnecessary. I've removed both from the\nattached v2 patch stack.\n\n> The patch changes xl_heap_inplace of XLOG_HEAP_INPLACE. For back branches, we\n> could choose between:\n> \n> - Same change, no WAL version bump. Standby must update before primary. This\n> is best long-term, but the transition is more disruptive. I'm leaning\n> toward this one, but the second option isn't bad:\n> \n> - heap_xlog_inplace() could set the shared-inval-queue overflow signal on\n> every backend. This is more wasteful, but inplace updates might be rare\n> enough (~once per VACUUM) to make it tolerable.\n> \n> - Use LogStandbyInvalidations() just after XLOG_HEAP_INPLACE. This isn't\n> correct if one ends recovery between the two records, but you'd need to be\n> unlucky to notice. Noticing would need a procedure like the following. 
A\n> hot standby backend populates a relcache entry, then does DDL on the rel\n> after recovery ends.\n\nThat still holds.", "msg_date": "Mon, 17 Jun 2024 16:58:54 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "Hi,\n\nOn 2024-06-17 16:58:54 -0700, Noah Misch wrote:\n> On Sat, Jun 15, 2024 at 03:37:18PM -0700, Noah Misch wrote:\n> > On Wed, May 22, 2024 at 05:05:48PM -0700, Noah Misch wrote:\n> > > https://postgr.es/m/[email protected] wrote:\n> > > > Separable, nontrivial things not fixed in the attached patch stack:\n> > > >\n> > > > - Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n> > > > CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n> > > > still seen in inplace-inval.spec. CacheInvalidateRelmap() does this right.\n> > >\n> > > I plan to fix that like CacheInvalidateRelmap(): send the inval immediately,\n> > > inside the critical section. Send it in heap_xlog_inplace(), too.\n\nI'm worried this might cause its own set of bugs, e.g. if there are any places\nthat, possibly accidentally, rely on the invalidation from the inplace update\nto also cover separate changes.\n\nHave you considered instead submitting these invalidations during abort as\nwell?\n\n\n> > > a. Within logical decoding, cease processing invalidations for inplace\n> >\n> > I'm attaching the implementation. This applies atop the v3 patch stack from\n> > https://postgr.es/m/[email protected], but the threads are\n> > mostly orthogonal and intended for independent review. Translating a tuple\n> > into inval messages uses more infrastructure than relmapper, which needs just\n> > a database ID. Hence, this ended up more like a miniature of inval.c's\n> > participation in the transaction commit sequence.\n> >\n> > I waffled on whether to back-patch inplace150-inval-durability-atcommit\n>\n> That inplace150 patch turned out to be unnecessary. Contrary to the\n> \"noncritical resource releasing\" comment some lines above\n> AtEOXact_Inval(true), the actual behavior is already to promote ERROR to\n> PANIC. An ERROR just before or after sending invals becomes PANIC, \"cannot\n> abort transaction %u, it was already committed\".\n\nRelying on that, instead of explicit critical sections, seems fragile to me.\nIIRC some of the behaviour around errors around transaction commit/abort has\nchanged a bunch of times. Tying correctness into something that could be\nchanged for unrelated reasons doesn't seem great.\n\nI'm not sure it holds true even today - what if the transaction didn't have an\nxid? Then RecordTransactionAbort() wouldn't trigger\n \"cannot abort transaction %u, it was already committed\"\nI think?\n\n\n\n> > - Same change, no WAL version bump. Standby must update before primary. This\n> > is best long-term, but the transition is more disruptive. I'm leaning\n> > toward this one, but the second option isn't bad:\n\nHm. The inplace record doesn't use the length of the \"main data\" record\nsegment for anything, from what I can tell. If records by an updated primary\nwere replayed by an old standby, it'd just ignore the additional data, afaict?\n\nI think with the code as-is, the situation with an updated standby replaying\nan old primary's record would actually be worse - it'd afaict just assume the\nnow-longer record contained valid fields, despite those just pointing into\nuninitialized memory. 
I think the replay routine would have to check the\nlength of the main data and execute the invalidation conditionally.\n\n\n> > - heap_xlog_inplace() could set the shared-inval-queue overflow signal on\n> > every backend. This is more wasteful, but inplace updates might be rare\n> > enough (~once per VACUUM) to make it tolerable.\n\nWe already set that surprisingly frequently, as\na) The size of the sinval queue is small\nb) If a backend is busy, it does not process catchup interrupts\n (i.e. executing queries, waiting for a lock prevents processing)\nc) There's no deduplication of invals, we often end up sending the same inval\n over and over.\n\nSo I suspect this might not be too bad, compared to the current badness.\n\n\nAt least for core code. I guess there could be extension code triggering\ninplace updates more frequently? But I'd hope they'd do it not on catalog\ntables... Except that we wouldn't know that that's the case during replay,\nit's not contained in the record.\n\n\n\n\n> > - Use LogStandbyInvalidations() just after XLOG_HEAP_INPLACE. This isn't\n> > correct if one ends recovery between the two records, but you'd need to be\n> > unlucky to notice. Noticing would need a procedure like the following. A\n> > hot standby backend populates a relcache entry, then does DDL on the rel\n> > after recovery ends.\n\nHm. The problematic cases presumably involves an access exclusive lock? If so,\ncould we do LogStandbyInvalidations() *before* logging the WAL record for the\ninplace update? The invalidations can't be processed by other backends until\nthe exclusive lock has been released, which should avoid the race?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:57:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "On Mon, Jun 17, 2024 at 06:57:30PM -0700, Andres Freund wrote:\n> On 2024-06-17 16:58:54 -0700, Noah Misch wrote:\n> > On Sat, Jun 15, 2024 at 03:37:18PM -0700, Noah Misch wrote:\n> > > On Wed, May 22, 2024 at 05:05:48PM -0700, Noah Misch wrote:\n> > > > https://postgr.es/m/[email protected] wrote:\n> > > > > Separable, nontrivial things not fixed in the attached patch stack:\n> > > > >\n> > > > > - Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n> > > > > CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n> > > > > still seen in inplace-inval.spec. CacheInvalidateRelmap() does this right.\n> > > >\n> > > > I plan to fix that like CacheInvalidateRelmap(): send the inval immediately,\n> > > > inside the critical section. Send it in heap_xlog_inplace(), too.\n> \n> I'm worried this might cause its own set of bugs, e.g. if there are any places\n> that, possibly accidentally, rely on the invalidation from the inplace update\n> to also cover separate changes.\n\nGood point. I do have index_update_stats() still doing an ideally-superfluous\nrelcache update for that reason. Taking that further, it would be cheap\ninsurance to have the inplace update do a transactional inval in addition to\nits immediate inval. Future master-only work could remove the transactional\none. How about that?\n\n> Have you considered instead submitting these invalidations during abort as\n> well?\n\nI had not. Hmmm. If the lock protocol in README.tuplock (after patch\ninplace120) told SearchSysCacheLocked1() to do systable scans instead of\nsyscache reads, that could work. 
Would need to ensure a PANIC if transaction\nabort doesn't reach the inval submission. Overall, it would be harder to\nreason about the state of caches, but I suspect the patch would be smaller.\nHow should we choose between those strategies?\n\n> > > > a. Within logical decoding, cease processing invalidations for inplace\n> > >\n> > > I'm attaching the implementation. This applies atop the v3 patch stack from\n> > > https://postgr.es/m/[email protected], but the threads are\n> > > mostly orthogonal and intended for independent review. Translating a tuple\n> > > into inval messages uses more infrastructure than relmapper, which needs just\n> > > a database ID. Hence, this ended up more like a miniature of inval.c's\n> > > participation in the transaction commit sequence.\n> > >\n> > > I waffled on whether to back-patch inplace150-inval-durability-atcommit\n> >\n> > That inplace150 patch turned out to be unnecessary. Contrary to the\n> > \"noncritical resource releasing\" comment some lines above\n> > AtEOXact_Inval(true), the actual behavior is already to promote ERROR to\n> > PANIC. An ERROR just before or after sending invals becomes PANIC, \"cannot\n> > abort transaction %u, it was already committed\".\n> \n> Relying on that, instead of explicit critical sections, seems fragile to me.\n> IIRC some of the behaviour around errors around transaction commit/abort has\n> changed a bunch of times. Tying correctness into something that could be\n> changed for unrelated reasons doesn't seem great.\n\nFair enough. It could still be a good idea for master, but given I missed a\nbug in inplace150-inval-durability-atcommit-v1.patch far worse than the ones\n$SUBJECT fixes, let's not risk it in back branches.\n\n> I'm not sure it holds true even today - what if the transaction didn't have an\n> xid? Then RecordTransactionAbort() wouldn't trigger\n> \"cannot abort transaction %u, it was already committed\"\n> I think?\n\nI think that's right. As the inplace160-inval-durability-inplace-v2.patch\nedits to xact.c say, the concept of invals in XID-less transactions is buggy\nat its core. Fortunately, after that patch, we use them only for two things\nthat could themselves stop with something roughly as simple as the attached.\n\n> > > - Same change, no WAL version bump. Standby must update before primary. This\n> > > is best long-term, but the transition is more disruptive. I'm leaning\n> > > toward this one, but the second option isn't bad:\n> \n> Hm. The inplace record doesn't use the length of the \"main data\" record\n> segment for anything, from what I can tell. If records by an updated primary\n> were replayed by an old standby, it'd just ignore the additional data, afaict?\n\nAgreed, but ...\n\n> I think with the code as-is, the situation with an updated standby replaying\n> an old primary's record would actually be worse - it'd afaict just assume the\n> now-longer record contained valid fields, despite those just pointing into\n> uninitialized memory. I think the replay routine would have to check the\n> length of the main data and execute the invalidation conditionally.\n\nI anticipated back branches supporting a new XLOG_HEAP_INPLACE_WITH_INVAL\nalongside the old XLOG_HEAP_INPLACE. Updated standbys would run both fine,\nand old binaries consuming new WAL would PANIC, \"heap_redo: unknown op code\".\n\n> > > - heap_xlog_inplace() could set the shared-inval-queue overflow signal on\n> > > every backend. 
This is more wasteful, but inplace updates might be rare\n> > > enough (~once per VACUUM) to make it tolerable.\n> \n> We already set that surprisingly frequently, as\n> a) The size of the sinval queue is small\n> b) If a backend is busy, it does not process catchup interrupts\n> (i.e. executing queries, waiting for a lock prevents processing)\n> c) There's no deduplication of invals, we often end up sending the same inval\n> over and over.\n> \n> So I suspect this might not be too bad, compared to the current badness.\n\nThat is good. We might be able to do the overflow signal once at end of\nrecovery, like RelationCacheInitFileRemove() does for the init file. That's\nmildly harder to reason about, but it would be cheaper. Hmmm.\n\n> At least for core code. I guess there could be extension code triggering\n> inplace updates more frequently? But I'd hope they'd do it not on catalog\n> tables... Except that we wouldn't know that that's the case during replay,\n> it's not contained in the record.\n\nFor what it's worth, from a grep of PGXN, only citus does inplace updates.\n\n> > > - Use LogStandbyInvalidations() just after XLOG_HEAP_INPLACE. This isn't\n> > > correct if one ends recovery between the two records, but you'd need to be\n> > > unlucky to notice. Noticing would need a procedure like the following. A\n> > > hot standby backend populates a relcache entry, then does DDL on the rel\n> > > after recovery ends.\n> \n> Hm. The problematic cases presumably involves an access exclusive lock? If so,\n> could we do LogStandbyInvalidations() *before* logging the WAL record for the\n> inplace update? The invalidations can't be processed by other backends until\n> the exclusive lock has been released, which should avoid the race?\n\nA lock forces a backend to drain the inval queue before using the locked\nobject, but it doesn't stop the backend from draining the queue and\nrepopulating cache entries earlier. For example, pg_describe_object() can\nquery many syscaches without locking underlying objects. Hence, the inval\nsystem relies on the buffer change getting fully visible to catcache queries\nbefore the sinval message enters the shared queue.\n\nThanks,\nnm\n\n\n", "msg_date": "Tue, 18 Jun 2024 08:23:49 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "On Tue, Jun 18, 2024 at 08:23:49AM -0700, Noah Misch wrote:\n> On Mon, Jun 17, 2024 at 06:57:30PM -0700, Andres Freund wrote:\n> > On 2024-06-17 16:58:54 -0700, Noah Misch wrote:\n> > > On Sat, Jun 15, 2024 at 03:37:18PM -0700, Noah Misch wrote:\n> > > > On Wed, May 22, 2024 at 05:05:48PM -0700, Noah Misch wrote:\n> > > > > https://postgr.es/m/[email protected] wrote:\n> > > > > > Separable, nontrivial things not fixed in the attached patch stack:\n> > > > > >\n> > > > > > - Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n> > > > > > CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n> > > > > > still seen in inplace-inval.spec. CacheInvalidateRelmap() does this right.\n> > > > >\n> > > > > I plan to fix that like CacheInvalidateRelmap(): send the inval immediately,\n> > > > > inside the critical section. Send it in heap_xlog_inplace(), too.\n> > \n> > I'm worried this might cause its own set of bugs, e.g. if there are any places\n> > that, possibly accidentally, rely on the invalidation from the inplace update\n> > to also cover separate changes.\n> \n> Good point. 
I do have index_update_stats() still doing an ideally-superfluous\n> relcache update for that reason. Taking that further, it would be cheap\n> insurance to have the inplace update do a transactional inval in addition to\n> its immediate inval. Future master-only work could remove the transactional\n> one. How about that?\n> \n> > Have you considered instead submitting these invalidations during abort as\n> > well?\n> \n> I had not. Hmmm. If the lock protocol in README.tuplock (after patch\n> inplace120) told SearchSysCacheLocked1() to do systable scans instead of\n> syscache reads, that could work. Would need to ensure a PANIC if transaction\n> abort doesn't reach the inval submission. Overall, it would be harder to\n> reason about the state of caches, but I suspect the patch would be smaller.\n> How should we choose between those strategies?\n> \n> > > > > a. Within logical decoding, cease processing invalidations for inplace\n> > > >\n> > > > I'm attaching the implementation. This applies atop the v3 patch stack from\n> > > > https://postgr.es/m/[email protected], but the threads are\n> > > > mostly orthogonal and intended for independent review. Translating a tuple\n> > > > into inval messages uses more infrastructure than relmapper, which needs just\n> > > > a database ID. Hence, this ended up more like a miniature of inval.c's\n> > > > participation in the transaction commit sequence.\n> > > >\n> > > > I waffled on whether to back-patch inplace150-inval-durability-atcommit\n> > >\n> > > That inplace150 patch turned out to be unnecessary. Contrary to the\n> > > \"noncritical resource releasing\" comment some lines above\n> > > AtEOXact_Inval(true), the actual behavior is already to promote ERROR to\n> > > PANIC. An ERROR just before or after sending invals becomes PANIC, \"cannot\n> > > abort transaction %u, it was already committed\".\n> > \n> > Relying on that, instead of explicit critical sections, seems fragile to me.\n> > IIRC some of the behaviour around errors around transaction commit/abort has\n> > changed a bunch of times. Tying correctness into something that could be\n> > changed for unrelated reasons doesn't seem great.\n> \n> Fair enough. It could still be a good idea for master, but given I missed a\n> bug in inplace150-inval-durability-atcommit-v1.patch far worse than the ones\n> $SUBJECT fixes, let's not risk it in back branches.\n> \n> > I'm not sure it holds true even today - what if the transaction didn't have an\n> > xid? Then RecordTransactionAbort() wouldn't trigger\n> > \"cannot abort transaction %u, it was already committed\"\n> > I think?\n> \n> I think that's right. As the inplace160-inval-durability-inplace-v2.patch\n> edits to xact.c say, the concept of invals in XID-less transactions is buggy\n> at its core. Fortunately, after that patch, we use them only for two things\n> that could themselves stop with something roughly as simple as the attached.\n\nNow actually attached.\n\n> > > > - Same change, no WAL version bump. Standby must update before primary. This\n> > > > is best long-term, but the transition is more disruptive. I'm leaning\n> > > > toward this one, but the second option isn't bad:\n> > \n> > Hm. The inplace record doesn't use the length of the \"main data\" record\n> > segment for anything, from what I can tell. 
If records by an updated primary\n> > were replayed by an old standby, it'd just ignore the additional data, afaict?\n> \n> Agreed, but ...\n> \n> > I think with the code as-is, the situation with an updated standby replaying\n> > an old primary's record would actually be worse - it'd afaict just assume the\n> > now-longer record contained valid fields, despite those just pointing into\n> > uninitialized memory. I think the replay routine would have to check the\n> > length of the main data and execute the invalidation conditionally.\n> \n> I anticipated back branches supporting a new XLOG_HEAP_INPLACE_WITH_INVAL\n> alongside the old XLOG_HEAP_INPLACE. Updated standbys would run both fine,\n> and old binaries consuming new WAL would PANIC, \"heap_redo: unknown op code\".\n> \n> > > > - heap_xlog_inplace() could set the shared-inval-queue overflow signal on\n> > > > every backend. This is more wasteful, but inplace updates might be rare\n> > > > enough (~once per VACUUM) to make it tolerable.\n> > \n> > We already set that surprisingly frequently, as\n> > a) The size of the sinval queue is small\n> > b) If a backend is busy, it does not process catchup interrupts\n> > (i.e. executing queries, waiting for a lock prevents processing)\n> > c) There's no deduplication of invals, we often end up sending the same inval\n> > over and over.\n> > \n> > So I suspect this might not be too bad, compared to the current badness.\n> \n> That is good. We might be able to do the overflow signal once at end of\n> recovery, like RelationCacheInitFileRemove() does for the init file. That's\n> mildly harder to reason about, but it would be cheaper. Hmmm.\n> \n> > At least for core code. I guess there could be extension code triggering\n> > inplace updates more frequently? But I'd hope they'd do it not on catalog\n> > tables... Except that we wouldn't know that that's the case during replay,\n> > it's not contained in the record.\n> \n> For what it's worth, from a grep of PGXN, only citus does inplace updates.\n> \n> > > > - Use LogStandbyInvalidations() just after XLOG_HEAP_INPLACE. This isn't\n> > > > correct if one ends recovery between the two records, but you'd need to be\n> > > > unlucky to notice. Noticing would need a procedure like the following. A\n> > > > hot standby backend populates a relcache entry, then does DDL on the rel\n> > > > after recovery ends.\n> > \n> > Hm. The problematic cases presumably involves an access exclusive lock? If so,\n> > could we do LogStandbyInvalidations() *before* logging the WAL record for the\n> > inplace update? The invalidations can't be processed by other backends until\n> > the exclusive lock has been released, which should avoid the race?\n> \n> A lock forces a backend to drain the inval queue before using the locked\n> object, but it doesn't stop the backend from draining the queue and\n> repopulating cache entries earlier. For example, pg_describe_object() can\n> query many syscaches without locking underlying objects. Hence, the inval\n> system relies on the buffer change getting fully visible to catcache queries\n> before the sinval message enters the shared queue.\n> \n> Thanks,\n> nm", "msg_date": "Tue, 18 Jun 2024 11:16:43 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "On Mon, Jun 17, 2024 at 04:58:54PM -0700, Noah Misch wrote:\n> attached v2 patch stack.\n\nRebased. 
This applies on top of three patches from\nhttps://postgr.es/m/[email protected]. I'm attaching those\nto placate cfbot, but this thread is for review of the last patch only.", "msg_date": "Fri, 28 Jun 2024 20:11:09 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "On Tue, Jun 18, 2024 at 08:23:49AM -0700, Noah Misch wrote:\n> On Mon, Jun 17, 2024 at 06:57:30PM -0700, Andres Freund wrote:\n> > On 2024-06-17 16:58:54 -0700, Noah Misch wrote:\n> > > That inplace150 patch turned out to be unnecessary. Contrary to the\n> > > \"noncritical resource releasing\" comment some lines above\n> > > AtEOXact_Inval(true), the actual behavior is already to promote ERROR to\n> > > PANIC. An ERROR just before or after sending invals becomes PANIC, \"cannot\n> > > abort transaction %u, it was already committed\".\n> > \n> > Relying on that, instead of explicit critical sections, seems fragile to me.\n> > IIRC some of the behaviour around errors around transaction commit/abort has\n> > changed a bunch of times. Tying correctness into something that could be\n> > changed for unrelated reasons doesn't seem great.\n> \n> Fair enough. It could still be a good idea for master, but given I missed a\n> bug in inplace150-inval-durability-atcommit-v1.patch far worse than the ones\n> $SUBJECT fixes, let's not risk it in back branches.\n\nWhat are your thoughts on whether a change to explicit critical sections\nshould be master-only vs. back-patched? I have a feeling your comment pointed\nto something I'm still missing, but I don't know where to look next.\n\n\n", "msg_date": "Tue, 6 Aug 2024 12:32:22 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "On Tue, Jun 18, 2024 at 08:23:49AM -0700, Noah Misch wrote:\n> On Mon, Jun 17, 2024 at 06:57:30PM -0700, Andres Freund wrote:\n> > On 2024-06-17 16:58:54 -0700, Noah Misch wrote:\n> > > On Sat, Jun 15, 2024 at 03:37:18PM -0700, Noah Misch wrote:\n> > > > On Wed, May 22, 2024 at 05:05:48PM -0700, Noah Misch wrote:\n> > > > > https://postgr.es/m/[email protected] wrote:\n> > > > > > Separable, nontrivial things not fixed in the attached patch stack:\n> > > > > >\n> > > > > > - Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n> > > > > > CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n> > > > > > still seen in inplace-inval.spec. CacheInvalidateRelmap() does this right.\n> > > > >\n> > > > > I plan to fix that like CacheInvalidateRelmap(): send the inval immediately,\n> > > > > inside the critical section. Send it in heap_xlog_inplace(), too.\n> > \n> > I'm worried this might cause its own set of bugs, e.g. if there are any places\n> > that, possibly accidentally, rely on the invalidation from the inplace update\n> > to also cover separate changes.\n> \n> Good point. I do have index_update_stats() still doing an ideally-superfluous\n> relcache update for that reason. Taking that further, it would be cheap\n> insurance to have the inplace update do a transactional inval in addition to\n> its immediate inval. Future master-only work could remove the transactional\n> one. How about that?\n\nRestoring the transactional inval seemed good to me, so I've rebased and\nincluded that. This applies on top of three patches from\nhttps://postgr.es/m/[email protected]. 
I'm attaching those\nto placate cfbot, but this thread is for review of the last patch only.", "msg_date": "Fri, 30 Aug 2024 18:07:11 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" }, { "msg_contents": "Rebased.", "msg_date": "Mon, 30 Sep 2024 12:33:50 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inval reliability, especially for inplace updates" } ]
[ { "msg_contents": "Currently the escape_json() function takes a cstring and char-by-char\nchecks each character in the string up to the NUL and adds the escape\nsequence if the character requires it.\n\nBecause this function requires a NUL terminated string, we're having\nto do a little more work in some places. For example, in\njsonb_put_escaped_value() we call pnstrdup() on the non-NUL-terminated\nstring to make a NUL-terminated string to pass to escape_json().\n\nTo make this faster, we can just have a version of escape_json which\ntakes a 'len' and stops after doing that many chars rather than\nstopping when the NUL char is reached. Now there's no need to\npnstrdup() which saves some palloc()/memcpy() work.\n\nThere are also a few places where we do escape_json() with a \"text\"\ntyped Datum where we go and convert the text to a NUL-terminated\ncstring so we can pass that along to escape_json(). That's wasteful as\nwe could just pass the payload of the text Datum directly, and only\nallocate memory if the text Datum needs to be de-toasted. That saves\na useless palloc/memcpy/pfree cycle.\n\nNow, to make this more interesting, since we have a version of\nescape_json which takes a 'len', we could start looking at more than 1\ncharacter at a time. If you look closely at escape_json() all the\nspecial chars apart from \" and \\ are below the space character.\npg_lfind8() and pg_lfind8_le() allow processing of 16 bytes at a time,\nso we only need to search the 16 bytes 3 times to ensure that no\nspecial chars exist within. When that test fails, just go into\nbyte-at-a-time processing first copying over the portion of the string\nthat passed the vector test up until that point.\n\nI've attached 2 patches:\n\n0001 does everything I've described aside from SIMD.\n0002 does SIMD\n\nI've not personally done too much work in the area of JSON, so I don't\nhave any canned workloads to throw at this. I did try the following:\n\ncreate table j1 (very_long_column_name_to_test_json_escape text);\ninsert into j1 select repeat('x', x) from generate_series(0,1024)x;\nvacuum freeze j1;\n\nbench.sql:\nselect row_to_json(j1)::jsonb from j1;\n\nMaster:\n$ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\ntps = 362.494309 (without initial connection time)\ntps = 363.182458 (without initial connection time)\ntps = 362.679654 (without initial connection time)\n\nMaster + 0001 + 0002\n$ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\ntps = 426.456885 (without initial connection time)\ntps = 430.573046 (without initial connection time)\ntps = 431.142917 (without initial connection time)\n\nAbout 18% faster.\n\nIt would be much faster if we could also get rid of the\nescape_json_cstring() call in the switch default case of\ndatum_to_json_internal(). row_to_json() would be heaps faster with\nthat done. I considered adding a special case for the \"text\" type\nthere, but in the end felt that we should just fix that with some\nhypothetical other patch that changes how output functions work.\nOthers may feel it's worthwhile. I certainly could be convinced of it.\n\nI did add a new regression test. I'm not sure I'd want to keep that,\nbut felt it's worth leaving in there for now.\n\nOther things I considered were if doing 16 bytes at a time is too much\nas it puts quite a bit of work into byte-at-a-time processing if just\n1 special char exists in a 16-byte chunk. I considered doing SWAR [1]\nprocessing to do the job of vector8_has_le() and vector8_has() byte\nmaybe with just uint32s. 
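Just to illustrate the idea (a rough, untested sketch only -- not something in the attached patches), a uint32 SWAR test for bytes that need a JSON escape could look something like:\n\nstatic inline bool\njson_chunk_needs_escape(uint32 chunk)\n{\n    /* nonzero if any byte of chunk is 0x22 (the double quote) */\n    uint32 q = chunk ^ 0x22222222;\n    uint32 has_quote = (q - 0x01010101) & ~q & 0x80808080;\n\n    /* nonzero if any byte of chunk is 0x5C (the backslash) */\n    uint32 b = chunk ^ 0x5C5C5C5C;\n    uint32 has_backslash = (b - 0x01010101) & ~b & 0x80808080;\n\n    /* nonzero if any byte of chunk is below 0x20 (the space character) */\n    uint32 has_ctrl = (chunk - 0x20202020) & ~chunk & 0x80808080;\n\n    return (has_quote | has_backslash | has_ctrl) != 0;\n}\n\nThat ignores how the trailing sub-4-byte portion of the string would be handled, and I've not benchmarked it.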
It might be worth doing that. However, I've\nnot done it yet as it raises the bar for this patch quite a bit. SWAR\nvector processing is pretty much write-only code. Imagine trying to\nwrite comments for the code in [2] so that the average person could\nunderstand what's going on!?\n\nI'd be happy to hear from anyone that can throw these patches at a\nreal-world JSON workload to see if it runs more quickly.\n\nParking for July CF.\n\nDavid\n\n[1] https://en.wikipedia.org/wiki/SWAR\n[2] https://dotat.at/@/2022-06-27-tolower-swar.html", "msg_date": "Thu, 23 May 2024 13:23:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Speed up JSON escape processing with SIMD plus other optimisations" }, { "msg_contents": "On Thu, 23 May 2024 at 13:23, David Rowley <[email protected]> wrote:\n> Master:\n> $ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\n> tps = 362.494309 (without initial connection time)\n> tps = 363.182458 (without initial connection time)\n> tps = 362.679654 (without initial connection time)\n>\n> Master + 0001 + 0002\n> $ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\n> tps = 426.456885 (without initial connection time)\n> tps = 430.573046 (without initial connection time)\n> tps = 431.142917 (without initial connection time)\n>\n> About 18% faster.\n>\n> It would be much faster if we could also get rid of the\n> escape_json_cstring() call in the switch default case of\n> datum_to_json_internal(). row_to_json() would be heaps faster with\n> that done. I considered adding a special case for the \"text\" type\n> there, but in the end felt that we should just fix that with some\n> hypothetical other patch that changes how output functions work.\n> Others may feel it's worthwhile. I certainly could be convinced of it.\n\nJust to turn that into performance numbers, I tried the attached\npatch. The numbers came out better than I thought.\n\nSame test as before:\n\nmaster + 0001 + 0002 + attached hacks:\n$ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\ntps = 616.094394 (without initial connection time)\ntps = 615.928236 (without initial connection time)\ntps = 614.175494 (without initial connection time)\n\nAbout 70% faster than master.\n\nDavid", "msg_date": "Thu, 23 May 2024 14:15:38 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "\nOn 2024-05-22 We 22:15, David Rowley wrote:\n> On Thu, 23 May 2024 at 13:23, David Rowley <[email protected]> wrote:\n>> Master:\n>> $ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\n>> tps = 362.494309 (without initial connection time)\n>> tps = 363.182458 (without initial connection time)\n>> tps = 362.679654 (without initial connection time)\n>>\n>> Master + 0001 + 0002\n>> $ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\n>> tps = 426.456885 (without initial connection time)\n>> tps = 430.573046 (without initial connection time)\n>> tps = 431.142917 (without initial connection time)\n>>\n>> About 18% faster.\n>>\n>> It would be much faster if we could also get rid of the\n>> escape_json_cstring() call in the switch default case of\n>> datum_to_json_internal(). row_to_json() would be heaps faster with\n>> that done. 
I considered adding a special case for the \"text\" type\n>> there, but in the end felt that we should just fix that with some\n>> hypothetical other patch that changes how output functions work.\n>> Others may feel it's worthwhile. I certainly could be convinced of it.\n> Just to turn that into performance numbers, I tried the attached\n> patch. The numbers came out better than I thought.\n>\n> Same test as before:\n>\n> master + 0001 + 0002 + attached hacks:\n> $ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps\n> tps = 616.094394 (without initial connection time)\n> tps = 615.928236 (without initial connection time)\n> tps = 614.175494 (without initial connection time)\n>\n> About 70% faster than master.\n>\n\nThat's all pretty nice! I'd take the win on this rather than wait for \nsome hypothetical patch that changes how output functions work.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 23 May 2024 16:34:09 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "On Fri, 24 May 2024 at 08:34, Andrew Dunstan <[email protected]> wrote:\n> That's all pretty nice! I'd take the win on this rather than wait for\n> some hypothetical patch that changes how output functions work.\n\nOn re-think of that, even if we changed the output functions to write\ndirectly to a StringInfo, we wouldn't get the same speedup. All it\nwould get us is a better ability to know the length of the string the\noutput function generated by looking at the StringInfoData.len before\nand after calling the output function. That *would* allow us to use\nthe SIMD escaping, but not save the palloc/memcpy cycle for\nnon-toasted Datums. In other words, if we want this speedup then I\ndon't see another way other than this special case.\n\nI've attached a rebased patch series which includes the 3rd patch in a\nmore complete form. This one also adds handling for varchar and\nchar(n) output functions. Ideally, these would also use textout() to\nsave from having the ORs in the if condition. 
The output function code\nis the same in each.\n\nUpdated benchmarks from the test in [1].\n\nmaster @ 7c655a04a\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 366.211426\ntps = 359.707014\ntps = 362.204383\n\nmaster + 0001\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 362.641668\ntps = 367.986495\ntps = 368.698193 (+1% vs master)\n\nmaster + 0001 + 0002\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 430.477314\ntps = 425.173469\ntps = 431.013275 (+18% vs master)\n\nmaster + 0001 + 0002 + 0003\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 606.702305\ntps = 625.727031\ntps = 617.164822 (+70% vs master)\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvpLXwMZvbCKcdGfU9XQjGCDm7tFpRdTXuB9PVgpNUYfEQ@mail.gmail.com", "msg_date": "Mon, 27 May 2024 11:39:46 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "Hi David,\n\nThanks for the patch.\n\nIn 0001 patch, I see that there are some escape_json() calls with\nNUL-terminated strings and gets the length by calling strlen(), like below:\n\n- escape_json(&buf, \"timestamp\");\n> + escape_json(&buf, \"timestamp\", strlen(\"timestamp\"));\n\n\n Wouldn't using escape_json_cstring() be better instead? IIUC there isn't\nmuch difference between escape_json() and escape_json_cstring(), right? We\nwould avoid strlen() with escape_json_cstring().\n\nRegards,\n-- \nMelih Mutlu\nMicrosoft", "msg_date": "Tue, 11 Jun 2024 15:08:29 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "On 2024-06-11 Tu 08:08, Melih Mutlu wrote:\n> Hi David,\n>\n> Thanks for the patch.\n>\n> In 0001 patch, I see that there are some escape_json() calls with \n> NUL-terminated strings and gets the length by calling strlen(), like \n> below:\n>\n> - escape_json(&buf, \"timestamp\");\n> + escape_json(&buf, \"timestamp\", strlen(\"timestamp\"));\n>\n>\n>  Wouldn't using escape_json_cstring() be better instead? IIUC there \n> isn't much difference between escape_json() and escape_json_cstring(), \n> right? We would avoid strlen() with escape_json_cstring().\n>\n>\n\nor maybe use sizeof(\"timestamp\") - 1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com
", "msg_date": "Tue, 11 Jun 2024 08:31:23 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "Thanks for having a look.\n\nOn Wed, 12 Jun 2024 at 00:08, Melih Mutlu <[email protected]> wrote:\n> In 0001 patch, I see that there are some escape_json() calls with NUL-terminated strings and gets the length by calling strlen(), like below:\n>\n>> - escape_json(&buf, \"timestamp\");\n>> + escape_json(&buf, \"timestamp\", strlen(\"timestamp\"));\n>\n> Wouldn't using escape_json_cstring() be better instead? IIUC there isn't much difference between escape_json() and escape_json_cstring(), right? We would avoid strlen() with escape_json_cstring().\n\nIt maybe would be better, but not for this reason. Most compilers will\nbe able to perform constant folding to transform the\nstrlen(\"timestamp\") into 9. You can see that's being done by both gcc\nand clang in [1].\n\nIt might be better to use escape_json_cstring() regardless of that as\nthe SIMD only kicks in when there are >= 16 chars, so there might be a\nfew more instructions calling the SIMD version for such a short\nstring. Probably, if we're worried about performance here we could\njust not bother passing the string through the escape function to\nsearch for something we know isn't there and just\nappendBinaryStringInfo \\\"\"timestamp\\\":\" directly.\n\nI don't really have a preference as to which of these we use. I doubt\nthe JSON escaping rules would ever change sufficiently that the latter\nof these methods would be a bad idea. I just doubt it's worth the\ndebate as I imagine the performance won't matter that much.\n\nDavid\n\n[1] https://godbolt.org/z/xqj4rKara\n\n\n", "msg_date": "Wed, 12 Jun 2024 00:43:40 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "I've attached a rebased set of patches. The previous set no longer applied.\n\nDavid", "msg_date": "Tue, 2 Jul 2024 16:49:52 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "On 02/07/2024 07:49, David Rowley wrote:\n> I've attached a rebased set of patches. The previous set no longer applied.\n\nI looked briefly at the first patch. Seems reasonable.\n\nOne little thing that caught my eye is that in populate_scalar(), you \nsometimes make a temporary copy of the string to add the \nnull-terminator, but then call escape_json() which doesn't need the \nnull-terminator anymore. See attached patch to avoid that. However, it's \nnot clear to me how to reach that codepath, or if it reachable at all. 
I \ntried to add a NOTICE there and ran the regression tests, but got no \nfailures.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 24 Jul 2024 13:55:02 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "On Wed, 24 Jul 2024 at 22:55, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 02/07/2024 07:49, David Rowley wrote:\n> > I've attached a rebased set of patches. The previous set no longer applied.\n>\n> I looked briefly at the first patch. Seems reasonable.\n>\n> One little thing that caught my eye is that in populate_scalar(), you\n> sometimes make a temporary copy of the string to add the\n> null-terminator, but then call escape_json() which doesn't need the\n> null-terminator anymore. See attached patch to avoid that. However, it's\n> not clear to me how to reach that codepath, or if it reachable at all. I\n> tried to add a NOTICE there and ran the regression tests, but got no\n> failures.\n\nThanks for noticing that. It seems like a good simplification\nregardless. I've incorporated it.\n\nI made another pass over the 0001 and 0003 patches and after a bit of\nrenaming, I pushed the result. I ended up keeping escape_json() as-is\nand giving the new function the name escape_json_with_len(). The text\nversion is named escape_json_text(). I think originally I did it the\nother way as I thought I'd have been able to adjust more locations than\nI did. Having it this way around is slightly less churn.\n\nI did another round of testing on the SIMD patch (attached as v5-0001)\nas I wondered if the SIMD loop maybe shouldn't wait too long before\ncopying the bytes to the destination string. I had wondered if the\nJSON string was very large that if we looked ahead too far that by the\ntime we flush those bytes out to the destination buffer, we'd have\nstarted eviction of L1 cachelines for parts of the buffer that are\nstill to be flushed. I put this to the test (test 3) and found that\nwith a 1MB JSON string it is faster to flush every 512 bytes than it\nis to only flush after checking the entire 1MB. With a 10kB JSON\nstring (test 2), the extra code to flush every 512 bytes seems to slow\nthings down. I'm a bit undecided about whether the flushing is\nworthwhile or not. It really depends on the length of JSON strings we'd\nlike to optimise for. It might be possible to get the best of both but\nI think it might require manually implementing portions of\nappendBinaryStringInfo(). I'd rather not go there. Does anyone have\nany thoughts about that?\n\nTest 2 (10KB) does show a ~261% performance increase but dropped to\n~227% flushing every 512 bytes. 
Test 3 (1MB) increased performance by\n~99% without early flushing and increased to ~156% flushing every 512\nbytes.\n\nbench.sql: select row_to_json(j1)::jsonb from j1;\n\n## Test 1 (variable JSON strings up to 1KB)\ncreate table j1 (very_long_column_name_to_test_json_escape text);\ninsert into j1 select repeat('x', x) from generate_series(0,1024)x;\nvacuum freeze j1;\n\nmaster @ 17a5871d:\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 364.410386 (without initial connection time)\ntps = 367.914165 (without initial connection time)\ntps = 365.794513 (without initial connection time)\n\nmaster + v5-0001\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 683.570613 (without initial connection time)\ntps = 685.206578 (without initial connection time)\ntps = 679.014056 (without initial connection time)\n\n## Test 2 (10KB JSON strings)\ncreate table j1 (very_long_column_name_to_test_json_escape text);\ninsert into j1 select repeat('x', 1024*10) from generate_series(0,1024)x;\nvacuum freeze j1;\n\nmaster @ 17a5871d:\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 23.872630 (without initial connection time)\ntps = 26.232014 (without initial connection time)\ntps = 26.495739 (without initial connection time)\n\nmaster + v5-0001\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 96.813515 (without initial connection time)\ntps = 96.023632 (without initial connection time)\ntps = 99.630428 (without initial connection time)\n\nmaster + v5-0001 ESCAPE_JSON_MAX_LOOKHEAD 512\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 83.597442 (without initial connection time)\ntps = 85.045554 (without initial connection time)\ntps = 82.105907 (without initial connection time)\n\n## Test 3 (1MB JSON strings)\ncreate table j1 (very_long_column_name_to_test_json_escape text);\ninsert into j1 select repeat('x', 1024*1024) from generate_series(0,10)x;\nvacuum freeze j1;\n\nmaster @ 17a5871d:\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 18.885922 (without initial connection time)\ntps = 18.829701 (without initial connection time)\ntps = 18.889369 (without initial connection time)\n\nmaster v5-0001\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 37.464967 (without initial connection time)\ntps = 37.536676 (without initial connection time)\ntps = 37.561387 (without initial connection time)\n\nmaster + v5-0001 ESCAPE_JSON_MAX_LOOKHEAD 512\n$ for i in {1..3}; do pgbench -n -f bench.sql -T 10 -M prepared\npostgres | grep tps; done\ntps = 48.296320 (without initial connection time)\ntps = 48.118151 (without initial connection time)\ntps = 48.507530 (without initial connection time)\n\nDavid", "msg_date": "Sun, 28 Jul 2024 00:51:14 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "On Sun, 28 Jul 2024 at 00:51, David Rowley <[email protected]> wrote:\n> I did another round of testing on the SIMD patch (attached as v5-0001)\n> as I wondered if the SIMD loop maybe shouldn't wait too long before\n> copying the bytes to the destination string. 
I had wondered if the\n> JSON string was very large that if we looked ahead too far that by the\n> time we flush those bytes out to the destination buffer, we'd have\n> started eviction of L1 cachelines for parts of the buffer that are\n> still to be flushed. I put this to the test (test 3) and found that\n> with a 1MB JSON string it is faster to flush every 512 bytes than it\n> is to only flush after checking the entire 1MB. With a 10kB JSON\n> string (test 2), the extra code to flush every 512 bytes seems to slow\n> things down.\n\nI'd been wondering why test 2 (10KB) with v5-0001\nESCAPE_JSON_MAX_LOOKHEAD 512 was not better than v5-0001. It occurred\nto me that when using 10KB vs 1MB and flushing the buffer every 512\nbytes that enlargeStringInfo() is called more often proportionally to\nthe length of the string. Doing that causes more repalloc/memcpy work\nin stringinfo.c.\n\nWe can reduce the repalloc/memcpy work by calling enlargeStringInfo()\nonce at the beginning of escape_json_with_len(). We already know the\nminimum length we're going to append so we might as well do that.\n\nAfter making that change, doing the 512-byte flushing no longer slows\ndown test 2.\n\nHere are the results of testing v6-0001. I've added test 4, which\ntests a very short string to ensure there are no performance\nregressions when we can't do SIMD. Test 2 patched came out 3.74x\nfaster than master.\n\n## Test 1:\necho \"select row_to_json(j1)::jsonb from j1;\" > test1.sql\nfor i in {1..3}; do pgbench -n -f test1.sql -T 10 -M prepared postgres\n| grep tps; done\n\nmaster @ e6a963748:\ntps = 339.560611\ntps = 344.649009\ntps = 343.246659\n\nv6-0001:\ntps = 610.734018\ntps = 628.297298\ntps = 630.028225\n\nv6-0001 ESCAPE_JSON_MAX_LOOKHEAD 512:\ntps = 557.562866\ntps = 626.476618\ntps = 618.665045\n\n## Test 2:\necho \"select row_to_json(j2)::jsonb from j2;\" > test2.sql\nfor i in {1..3}; do pgbench -n -f test2.sql -T 10 -M prepared postgres\n| grep tps; done\n\nmaster @ e6a963748:\ntps = 25.633934\ntps = 18.580632\ntps = 25.395866\n\nv6-0001:\ntps = 89.325752\ntps = 91.277016\ntps = 86.289533\n\nv6-0001 ESCAPE_JSON_MAX_LOOKHEAD 512:\ntps = 85.194479\ntps = 90.054279\ntps = 85.483279\n\n## Test 3:\necho \"select row_to_json(j3)::jsonb from j3;\" > test3.sql\nfor i in {1..3}; do pgbench -n -f test3.sql -T 10 -M prepared postgres\n| grep tps; done\n\nmaster @ e6a963748:\ntps = 18.863420\ntps = 18.866374\ntps = 18.791395\n\nv6-0001:\ntps = 38.990681\ntps = 37.893820\ntps = 38.057235\n\nv6-0001 ESCAPE_JSON_MAX_LOOKHEAD 512:\ntps = 46.076842\ntps = 46.400413\ntps = 46.165491\n\n## Test 4:\necho \"select row_to_json(j4)::jsonb from j4;\" > test4.sql\nfor i in {1..3}; do pgbench -n -f test4.sql -T 10 -M prepared postgres\n| grep tps; done\n\nmaster @ e6a963748:\ntps = 1700.888458\ntps = 1684.753818\ntps = 1690.262772\n\nv6-0001:\ntps = 1721.821561\ntps = 1699.189207\ntps = 1663.618117\n\nv6-0001 ESCAPE_JSON_MAX_LOOKHEAD 512:\ntps = 1701.565562\ntps = 1706.310398\ntps = 1687.585128\n\nI'm pretty happy with this now so I'd like to commit this and move on\nto other work. Doing \"#define ESCAPE_JSON_MAX_LOOKHEAD 512\", seems\nlike the right thing. 
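For anyone skimming, the overall shape is roughly this (an illustrative sketch only, not the exact code in v6-0001):\n\nvoid\nescape_json_with_len(StringInfo buf, const char *str, int len)\n{\n    /* reserve the minimum we know we'll append: len bytes plus the two surrounding quotes */\n    enlargeStringInfo(buf, len + 2);\n\n    appendStringInfoCharMacro(buf, '\"');\n\n    /*\n     * ... vector/byte-at-a-time copy loop, flushing the pending bytes to\n     * buf at least every ESCAPE_JSON_MAX_LOOKHEAD (512) bytes ...\n     */\n\n    appendStringInfoCharMacro(buf, '\"');\n}\n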
If anyone else wants to verify my results or\ntake a look at the patch, please do so.\n\nDavid", "msg_date": "Thu, 1 Aug 2024 16:15:40 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "On Thu, 1 Aug 2024 at 16:15, David Rowley <[email protected]> wrote:\n> I'm pretty happy with this now so I'd like to commit this and move on\n> to other work. Doing \"#define ESCAPE_JSON_MAX_LOOKHEAD 512\", seems\n> like the right thing. If anyone else wants to verify my results or\n> take a look at the patch, please do so.\n\nI did some more testing on this on a few different machines; apple M2\nUltra, AMD 7945HX and with a Raspberry Pi 4.\n\nI've attached the results as graphs with the master time normalised to\n1. I tried out quite a few different values for flushing the buffer,\n256 bytes in powers of 2 up to 8192 bytes. It seems like each machine\nhas its own preference to what this should be set to, but no machine\nseems to be too picky about the exact value. They're all small enough\nvalues to fit in L1d cache on each of the CPUs. Test 4 shouldn't\nchange much as there's no SIMD going on in that test. You might notice\na bit of noise from all machines for test 4, apart from the M2. You\ncan assume a similar level of noise for tests 1 to 3 on each of the\nmachines. The Raspberry Pi does seem to prefer not flushing the\nbuffer until the end (listed as \"patched\" in the graphs). I suspect\nthat's because that CPU does better with less code. I've not taken\nthese results quite as seriously since it's likely a platform that we\nwouldn't want to prefer when it comes to tuning optimisations. I was\nmostly interested in not seeing regressions.\n\nI think, if nobody else thinks differently, I'll rename\nESCAPE_JSON_MAX_LOOKHEAD to ESCAPE_JSON_FLUSH_AFTER and set it to 512.\nThe exact value does not seem to matter too much and 512 seems fine.\nIt's better for the M2 than the 7945HX, but not by much.\n\nI've also attached the script I ran to get these results and also the\nfull results.\n\nDavid", "msg_date": "Sun, 4 Aug 2024 02:11:18 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" }, { "msg_contents": "On Sun, 4 Aug 2024 at 02:11, David Rowley <[email protected]> wrote:\n> I did some more testing on this on a few different machines; apple M2\n> Ultra, AMD 7945HX and with a Raspberry Pi 4.\n\nI did some more testing on this patch today as I wanted to see what\nIntel CPUs thought about it. The only modern Intel CPU I have is a\n13th-generation laptop CPU. It's an i7-1370P. It's in a laptop with\nsolid-state cooling. At least, I've never heard a fan running on it.\nWatching the clock speed during the test had it jumping around wildly,\nso I assume it was thermally throttling.\n\nI've attached the results here anyway. They're very noisy.\n\nI also did a test where I removed all the escaping logic and had the\ncode copy the source string to the destination without checking for\nchars to escape. 
I wanted to see how much was left performance-wise.\nThere was only a further 10% increase.\n\nI tidied up the patch a bit more and pushed it.\n\nThanks for the reviews.\n\nDavid", "msg_date": "Mon, 5 Aug 2024 23:26:23 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed up JSON escape processing with SIMD plus other\n optimisations" } ]