[
{
"msg_contents": "Hi hackers,\n I encounter a problem, as shown below:\n\nquery:\n select\n ref_0.ps_suppkey as c0,\n ref_1.c_acctbal as c1,\n ref_2.o_totalprice as c2,\n ref_2.o_orderpriority as c3,\n ref_2.o_clerk as c4\nfrom\n public.partsupp as ref_0\n left join public.nation as sample_0\n inner join public.customer as sample_1\n on (false)\n on (true)\n left join public.customer as ref_1\n right join public.orders as ref_2\n on (false)\n left join public.supplier as ref_3\n on (false)\n on (sample_0.n_comment = ref_1.c_name )\nwhere (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL) THEN 4 ELSE 4 END,\nCASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95 END))\norder by c0, c1, c2, c3, c4 limit 1;\n\non pg16devel:\nc0 | c1 | c2 | c3 | c4\n----+----+----+----+----\n 1 | | | |\n(1 row)\nplan:\n QUERY PLAN\n\n---------------------------------------------------------------------------------------\n Limit\n -> Sort\n Sort Key: ref_0.ps_suppkey, c_acctbal, o_totalprice,\no_orderpriority, o_clerk\n -> Nested Loop Left Join\n -> Seq Scan on partsupp ref_0\n -> Result\n One-Time Filter: false\n(7 rows)\n\non pg15.2:\n c0 | c1 | c2 | c3 | c4\n----+----+----+----+----\n(0 rows)\nplan:\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit\n -> Sort\n Sort Key: ref_0.ps_suppkey, c_acctbal, o_totalprice,\no_orderpriority, o_clerk\n -> Hash Left Join\n Hash Cond: ((n_comment)::text = (c_name)::text)\n Filter: (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL) THEN\n4 ELSE 4 END, CASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95 END))\n -> Nested Loop Left Join\n -> Seq Scan on partsupp ref_0\n -> Result\n One-Time Filter: false\n -> Hash\n -> Result\n One-Time Filter: false\n(13 rows)\n\n\n\n regards, tender\nwang\n\nHi hackers, I encounter a problem, as shown below:query: select ref_0.ps_suppkey as c0, ref_1.c_acctbal as c1, ref_2.o_totalprice as c2, ref_2.o_orderpriority as c3, ref_2.o_clerk as c4from public.partsupp as ref_0 left join public.nation as sample_0 inner join public.customer as sample_1 on (false) on (true) left join public.customer as ref_1 right join public.orders as ref_2 on (false) left join public.supplier as ref_3 on (false) on (sample_0.n_comment = ref_1.c_name )where (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL) THEN 4 ELSE 4 END, CASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95 END))order by c0, c1, c2, c3, c4 limit 1;on pg16devel:c0 | c1 | c2 | c3 | c4 ----+----+----+----+---- 1 | | | | (1 row)plan: QUERY PLAN --------------------------------------------------------------------------------------- Limit -> Sort Sort Key: ref_0.ps_suppkey, c_acctbal, o_totalprice, o_orderpriority, o_clerk -> Nested Loop Left Join -> Seq Scan on partsupp ref_0 -> Result One-Time Filter: false(7 rows)on pg15.2: c0 | c1 | c2 | c3 | c4 ----+----+----+----+----(0 rows)plan: QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------- Limit -> Sort Sort Key: ref_0.ps_suppkey, c_acctbal, o_totalprice, o_orderpriority, o_clerk -> Hash Left Join Hash Cond: ((n_comment)::text = (c_name)::text) Filter: (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL) THEN 4 ELSE 4 END, CASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95 END)) -> Nested Loop Left Join -> Seq Scan on partsupp ref_0 -> Result One-Time Filter: false -> Hash -> Result One-Time Filter: false(13 rows) regards, 
tender wang",
"msg_date": "Tue, 4 Apr 2023 10:53:32 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "same query but different result on pg16devel and pg15.2"
},
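A note on which result is correct: the WHERE clause of the reported query does not actually depend on the table data. For the NULL-extended rows produced by the outer joins (and for any other row), both CASE expressions collapse to constants, so the predicate can be evaluated in isolation; a minimal stand-alone sketch:

    -- evaluate the predicate with NULL join columns, independent of any table
    SELECT 8 <= NULLIF(CASE WHEN (NULL::int  IS NOT NULL)   THEN 4  ELSE 4  END,
                       CASE WHEN (NULL::date >= NULL::date) THEN 95 ELSE 95 END) AS keep_row;
    -- keep_row = false: NULLIF(4, 95) is 4, and 8 <= 4 is false

Since the filter is false for every row, the query should return zero rows, matching the 15.2 output; the single row returned by pg16devel, whose plan has lost the Filter entirely, is the anomaly being reported.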
{
"msg_contents": "Attached file included table schema information, but no data.\n\ntender wang <[email protected]> 于2023年4月4日周二 10:53写道:\n\n> Hi hackers,\n> I encounter a problem, as shown below:\n>\n> query:\n> select\n> ref_0.ps_suppkey as c0,\n> ref_1.c_acctbal as c1,\n> ref_2.o_totalprice as c2,\n> ref_2.o_orderpriority as c3,\n> ref_2.o_clerk as c4\n> from\n> public.partsupp as ref_0\n> left join public.nation as sample_0\n> inner join public.customer as sample_1\n> on (false)\n> on (true)\n> left join public.customer as ref_1\n> right join public.orders as ref_2\n> on (false)\n> left join public.supplier as ref_3\n> on (false)\n> on (sample_0.n_comment = ref_1.c_name )\n> where (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL) THEN 4 ELSE 4 END,\n> CASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95 END))\n> order by c0, c1, c2, c3, c4 limit 1;\n>\n> on pg16devel:\n> c0 | c1 | c2 | c3 | c4\n> ----+----+----+----+----\n> 1 | | | |\n> (1 row)\n> plan:\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------\n> Limit\n> -> Sort\n> Sort Key: ref_0.ps_suppkey, c_acctbal, o_totalprice,\n> o_orderpriority, o_clerk\n> -> Nested Loop Left Join\n> -> Seq Scan on partsupp ref_0\n> -> Result\n> One-Time Filter: false\n> (7 rows)\n>\n> on pg15.2:\n> c0 | c1 | c2 | c3 | c4\n> ----+----+----+----+----\n> (0 rows)\n> plan:\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit\n> -> Sort\n> Sort Key: ref_0.ps_suppkey, c_acctbal, o_totalprice,\n> o_orderpriority, o_clerk\n> -> Hash Left Join\n> Hash Cond: ((n_comment)::text = (c_name)::text)\n> Filter: (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL)\n> THEN 4 ELSE 4 END, CASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95\n> END))\n> -> Nested Loop Left Join\n> -> Seq Scan on partsupp ref_0\n> -> Result\n> One-Time Filter: false\n> -> Hash\n> -> Result\n> One-Time Filter: false\n> (13 rows)\n>\n>\n>\n> regards, tender\n> wang\n>",
"msg_date": "Tue, 4 Apr 2023 13:53:52 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: same query but different result on pg16devel and pg15.2"
},
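The schema attachment itself is not part of this archive. For readers who want to reproduce the plans, a hypothetical minimal schema is sketched below; the column types are guessed from the TPC-H-style names used in the query and are not taken from the attachment:

    CREATE TABLE partsupp (ps_suppkey integer);
    CREATE TABLE nation   (n_comment  varchar(152));
    CREATE TABLE customer (c_name     varchar(25), c_acctbal numeric(15,2));
    CREATE TABLE orders   (o_orderkey integer, o_totalprice numeric(15,2),
                           o_orderpriority varchar(15), o_clerk varchar(15),
                           o_orderdate date);
    CREATE TABLE supplier (s_suppkey  integer);
    -- a single partsupp row is enough to expose the difference reported above
    INSERT INTO partsupp VALUES (1);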
{
"msg_contents": "On Tue, Apr 4, 2023 at 10:53 AM tender wang <[email protected]> wrote:\n\n> Hi hackers,\n> I encounter a problem, as shown below:\n>\n> query:\n> select\n> ref_0.ps_suppkey as c0,\n> ref_1.c_acctbal as c1,\n> ref_2.o_totalprice as c2,\n> ref_2.o_orderpriority as c3,\n> ref_2.o_clerk as c4\n> from\n> public.partsupp as ref_0\n> left join public.nation as sample_0\n> inner join public.customer as sample_1\n> on (false)\n> on (true)\n> left join public.customer as ref_1\n> right join public.orders as ref_2\n> on (false)\n> left join public.supplier as ref_3\n> on (false)\n> on (sample_0.n_comment = ref_1.c_name )\n> where (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL) THEN 4 ELSE 4 END,\n> CASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95 END))\n> order by c0, c1, c2, c3, c4 limit 1;\n>\n\nIt is the same issue as discussed in [1]. In this query the WHERE\ncondition is placed at the wrong place.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/0b819232-4b50-f245-1c7d-c8c61bf41827%40postgrespro.ru\n\nThanks\nRichard\n\nOn Tue, Apr 4, 2023 at 10:53 AM tender wang <[email protected]> wrote:Hi hackers, I encounter a problem, as shown below:query: select ref_0.ps_suppkey as c0, ref_1.c_acctbal as c1, ref_2.o_totalprice as c2, ref_2.o_orderpriority as c3, ref_2.o_clerk as c4from public.partsupp as ref_0 left join public.nation as sample_0 inner join public.customer as sample_1 on (false) on (true) left join public.customer as ref_1 right join public.orders as ref_2 on (false) left join public.supplier as ref_3 on (false) on (sample_0.n_comment = ref_1.c_name )where (8 <= NULLIF(CASE WHEN (o_orderkey IS NOT NULL) THEN 4 ELSE 4 END, CASE WHEN (o_orderdate >= o_orderdate) THEN 95 ELSE 95 END))order by c0, c1, c2, c3, c4 limit 1;It is the same issue as discussed in [1]. In this query the WHEREcondition is placed at the wrong place.[1] https://www.postgresql.org/message-id/flat/0b819232-4b50-f245-1c7d-c8c61bf41827%40postgrespro.ruThanksRichard",
"msg_date": "Tue, 4 Apr 2023 14:47:19 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same query but different result on pg16devel and pg15.2"
}
]
[
{
"msg_contents": "Dear hackers,\n(CC: Amit and Julien)\n\nThis is a fork thread of Julien's thread, which allows to upgrade subscribers\nwithout losing changes [1].\n\nI briefly implemented a prototype for allowing to upgrade publisher node.\nIIUC the key lack was that replication slots used for logical replication could\nnot be copied to new node by pg_upgrade command, so this patch allows that.\nThis feature can be used when '--include-replication-slot' is specified. Also,\nI added a small test for the typical case. It may be helpful to understand.\n\nPg_upgrade internally executes pg_dump for dumping a database object from the old.\nThis feature follows this, adds a new option '--slot-only' to pg_dump command.\nWhen specified, it extracts needed info from old node and generate an SQL file\nthat executes pg_create_logical_replication_slot().\n\nThe notable deference from pre-existing is that restoring slots are done at the\ndifferent time. Currently pg_upgrade works with following steps:\n\n...\n1. dump schema from old nodes\n2. do pg_resetwal several times to new node\n3. restore schema to new node\n4. do pg_resetwal again to new node\n...\n\nThe probem is that if we create replication slots at step 3, the restart_lsn and\nconfirmed_flush_lsn are set to current_wal_insert_lsn at that time, whereas\npg_resetwal discards the WAL file. Such slots cannot extracting changes.\nTo handle the issue the resotring is seprarated into two phases. At the first phase\nrestoring is done at step 3, excepts replicatin slots. At the second phase\nreplication slots are restored at step 5, after doing pg_resetwal.\n\nBefore upgrading a publisher node, all the changes gerenated on publisher must\nbe sent and applied on subscirber. This is because restart_lsn and confirmed_flush_lsn\nof copied replication slots is same as current_wal_insert_lsn. New node resets\nthe information which WALs are really applied on subscriber and restart.\nBasically it is not problematic because before shutting donw the publisher, its\nwalsender processes confirm all data is replicated. See WalSndDone() and related code.\n\nCurrently physical slots are ignored because this is out-of-scope for me.\nI did not any analysis about it.\n\n[1]: https://www.postgresql.org/message-id/flat/20230217075433.u5mjly4d5cr4hcfe%40jrouhaud\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Tue, 4 Apr 2023 07:00:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PoC] pg_upgrade: allow to upgrade publisher node"
},
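For context, the per-database SQL file that the patched pg_dump is described as generating would essentially contain one call to pg_create_logical_replication_slot() per slot. The exact statement emitted by the PoC is in the attached patch (not shown here); the sketch below uses the stock signature of that function, with an illustrative slot name and plugin:

    -- recreate one logical slot on the new cluster (slot name and plugin are examples)
    SELECT pg_create_logical_replication_slot('sub1_slot',  -- slot_name copied from the old cluster
                                               'pgoutput',   -- output plugin recorded for that slot
                                               false,        -- temporary
                                               false);       -- two_phase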
{
"msg_contents": "Hi Kuroda-san.\n\nThis is a WIP review. I'm yet to do more testing and more study of the\nPOC patch's design.\n\nWhile reading the code I kept a local list of my review comments.\nMeanwhile, there is a long weekend coming up here, so I thought it\nwould be better to pass these to you now rather than next week in case\nyou want to address them.\n\n======\nGeneral\n\n1.\nSince these two new options are made to work together, I think the\nnames should be more similar. e.g.\n\npg_dump: \"--slot_only\" --> \"--replication-slots-only\"\npg_upgrade: \"--include-replication-slot\" --> \"--include-replication-slots\"\n\nhelp/comments/commit-message all should change accordingly, but I did\nnot give separate review comments for each of these.\n\n~~~\n\n2.\nI felt there maybe should be some pg_dump test cases for that new\noption, rather than the current patch where it only seems to be\ntesting the new pg_dump option via the pg_upgrade TAP tests.\n\n======\nCommit message\n\n3.\nThis commit introduces a new option called \"--include-replication-slot\".\nThis allows nodes with logical replication slots to be upgraded. The commit can\nbe divided into two parts: one for pg_dump and another for pg_upgrade.\n\n~\n\n\"new option\" --> \"new pg_upgrade\" option\n\n~~~\n\n4.\nFor pg_upgrade, when '--include-replication-slot' is specified, it\nexecutes pg_dump\nwith added option and restore from the dump. Apart from restoring\nschema, pg_resetwal\nmust not be called after restoring replicaiton slots. This is because\nthe command\ndiscards WAL files and starts from a new segment, even if they are required by\nreplication slots. This leads an ERROR: \"requested WAL segment XXX has already\nbeen removed\". To avoid this, replication slots are restored at a different time\nthan other objects, after running pg_resetwal.\n\n~\n\n4a.\n\"with added option and restore from the dump\" --> \"with the new\n\"--slot-only\" option and restores from the dump\"\n\n~\n\n4b.\nTypo: /replicaiton/replication/\n\n~\n\n4c\n\"leads an ERROR\" --> \"leads to an ERROR\"\n\n======\n\ndoc/src/sgml/ref/pg_dump.sgml\n\n5.\n+ <varlistentry>\n+ <term><option>--slot-only</option></term>\n+ <listitem>\n+ <para>\n+ Dump only replication slots, neither the schema (data definitions) nor\n+ data. Mainly this is used for upgrading nodes.\n+ </para>\n+ </listitem>\n\nSUGGESTION\nDump only replication slots; not the schema (data definitions), nor\ndata. This is mainly used when upgrading nodes.\n\n======\n\ndoc/src/sgml/ref/pgupgrade.sgml\n\n6.\n+ <para>\n+ Transport replication slots. Currently this can work only for logical\n+ slots, and temporary slots are ignored. Note that pg_upgrade does not\n+ check the installation of plugins.\n+ </para>\n\nSUGGESTION\nUpgrade replication slots. Only logical replication slots are\ncurrently supported, and temporary slots are ignored. Note that...\n\n======\n\nsrc/bin/pg_dump/pg_dump.c\n\n7. main\n {\"exclude-table-data-and-children\", required_argument, NULL, 14},\n-\n+ {\"slot-only\", no_argument, NULL, 15},\n {NULL, 0, NULL, 0}\n\nThe blank line is misplaced.\n\n~~~\n\n8. main\n+ case 15: /* dump onlu replication slot(s) */\n+ dopt.slot_only = true;\n+ dopt.include_everything = false;\n+ break;\n\ntypo: /onlu/only/\n\n~~~\n\n9. 
main\n+ if (dopt.slot_only && dopt.dataOnly)\n+ pg_fatal(\"options --replicatin-slots and -a/--data-only cannot be\nused together\");\n+ if (dopt.slot_only && dopt.schemaOnly)\n+ pg_fatal(\"options --replicatin-slots and -s/--schema-only cannot be\nused together\");\n+\n\n9a.\ntypo: /replicatin/replication/\n\n~\n\n9b.\nI am wondering if these checks are enough. E.g. is \"slots-only\"\ncompatible with \"no-publications\" ?\n\n~~~\n\n10. main\n+ /*\n+ * If dumping replication slots are request, dumping them and skip others.\n+ */\n+ if (dopt.slot_only)\n+ {\n+ getRepliactionSlots(fout);\n+ goto dump;\n+ }\n\n10a.\nSUGGESTION\nIf dump replication-slots-only was requested, dump only them and skip\neverything else.\n\n~\n\n10b.\nThis code seems mutually exclusive to every other option. I'm\nwondering if this code even needs 'collectRoleNames', or should the\nslots option check be moved above that (and also above the 'Dumping\nLOs' etc...)\n\n~~~\n\n11. help\n\n+ printf(_(\" --slot-only dump only replication\nslots, no schema and data\\n\"));\n\n11a.\nSUGGESTION\n\"no schema and data\" --> \"no schema or data\"\n\n~\n\n11b.\nThis help is misplaced. It should be in alphabetical order consistent\nwith all the other help.\n\n~~~\n12. getRepliactionSlots\n\n+/*\n+ * getRepliactionSlots\n+ * get information about replication slots\n+ */\n+static void\n+getRepliactionSlots(Archive *fout)\n\nFunction name typo / getRepliactionSlots/ getReplicationSlots/\n(also in the comment)\n\n~~~\n\n13. getRepliactionSlots\n\n+ /* Check whether we should dump or not */\n+ if (fout->remoteVersion < 160000 && !dopt->slot_only)\n+ return;\n\nHmmm, is that condition correct? Shouldn't the && be || here?\n\n~~~\n\n14. dumpReplicationSlot\n\n+static void\n+dumpReplicationSlot(Archive *fout, const ReplicationSlotInfo *slotinfo)\n+{\n+ DumpOptions *dopt = fout->dopt;\n+ PQExpBuffer query;\n+ char *slotname;\n+\n+ if (!dopt->slot_only)\n+ return;\n+\n+ slotname = pg_strdup(slotinfo->dobj.name);\n+ query = createPQExpBuffer();\n+\n+ /*\n+ * XXX: For simplification, pg_create_logical_replication_slot() is used.\n+ * Is it sufficient?\n+ */\n+ appendPQExpBuffer(query, \"SELECT pg_create_logical_replication_slot('%s', \",\n+ slotname);\n+ appendStringLiteralAH(query, slotinfo->plugin, fout);\n+ appendPQExpBuffer(query, \", \");\n+ appendStringLiteralAH(query, slotinfo->twophase, fout);\n+ appendPQExpBuffer(query, \");\");\n+\n+ if (slotinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)\n+ ArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\n+ ARCHIVE_OPTS(.tag = slotname,\n+ .description = \"REPICATION SLOT\",\n+ .section = SECTION_POST_DATA,\n+ .createStmt = query->data));\n+\n+ /* XXX: do we have to dump security label? */\n+\n+ if (slotinfo->dobj.dump & DUMP_COMPONENT_COMMENT)\n+ dumpComment(fout, \"REPICATION SLOT\", slotname,\n+ NULL, NULL,\n+ slotinfo->dobj.catId, 0, slotinfo->dobj.dumpId);\n+\n+ pfree(slotname);\n+ destroyPQExpBuffer(query);\n+}\n\n14a.\nWouldn't it be better to check the \"slotinfo->dobj.dump &\nDUMP_COMPONENT_DEFINITION\" condition first, before building the query?\nFor example, see other function dumpIndexAttach().\n\n~\n\n14b.\nTypo: /REPICATION SLOT/REPLICATION SLOT/ in the ARCHIVE_OPTS description.\n\n~\n\n14c.\nTypo: /REPICATION SLOT/REPLICATION SLOT/ in the dumpComment parameter.\n\n======\n\nsrc/bin/pg_dump/pg_dump.h\n\n15. 
DumpableObjectType\n\n@@ -82,7 +82,8 @@ typedef enum\n DO_PUBLICATION,\n DO_PUBLICATION_REL,\n DO_PUBLICATION_TABLE_IN_SCHEMA,\n- DO_SUBSCRIPTION\n+ DO_SUBSCRIPTION,\n+ DO_REPICATION_SLOT\n } DumpableObjectType;\n\nTypo /DO_REPICATION_SLOT/DO_REPLICATION_SLOT/\n\n======\n\nsrc/bin/pg_upgrade/dump.c\n\n16. generate_old_dump\n\n+ /*\n+ * Dump replicaiton slots if needed.\n+ *\n+ * XXX We cannot dump replication slots at the same time as the schema\n+ * dump because we need to separate the timing of restoring replication\n+ * slots and other objects. Replication slots, in particular, should\n+ * not be restored before executing the pg_resetwal command because it\n+ * will remove WALs that are required by the slots.\n+ */\n\nTypo: /replicaiton/replication/\n\n======\n\nsrc/bin/pg_upgrade/pg_upgrade.c\n\n17. main\n\n+ /*\n+ * Create replication slots if requested.\n+ *\n+ * XXX This must be done after doing pg_resetwal command because the\n+ * command will remove required WALs.\n+ */\n+ if (user_opts.include_slots)\n+ {\n+ start_postmaster(&new_cluster, true);\n+ create_replicaiton_slots();\n+ stop_postmaster(false);\n+ }\n+\n\nI don't think that warrants a \"XXX\" style comment. It is just a \"Note:\".\n\n~~~\n\n18. create_replicaiton_slots\n+\n+/*\n+ * create_replicaiton_slots()\n+ *\n+ * Similar to create_new_objects() but only restores replication slots.\n+ */\n+static void\n+create_replicaiton_slots(void)\n\nTypo: /create_replicaiton_slots/create_replication_slots/\n\n(Function name and comment)\n\n~~~\n\n19. create_replicaiton_slots\n\n+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n+ {\n+ char slots_file_name[MAXPGPATH],\n+ log_file_name[MAXPGPATH];\n+ DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\n+ char *opts;\n+\n+ pg_log(PG_STATUS, \"%s\", old_db->db_name);\n+\n+ snprintf(slots_file_name, sizeof(slots_file_name),\n+ DB_DUMP_FILE_MASK_FOR_SLOTS, old_db->db_oid);\n+ snprintf(log_file_name, sizeof(log_file_name),\n+ DB_DUMP_LOG_FILE_MASK, old_db->db_oid);\n+\n+ opts = \"--echo-queries --set ON_ERROR_STOP=on --no-psqlrc\";\n+\n+ parallel_exec_prog(log_file_name,\n+ NULL,\n+ \"\\\"%s/psql\\\" %s %s --dbname %s -f \\\"%s/%s\\\"\",\n+ new_cluster.bindir,\n+ cluster_conn_opts(&new_cluster),\n+ opts,\n+ old_db->db_name,\n+ log_opts.dumpdir,\n+ slots_file_name);\n+ }\n\nThat 'opts' variable seems unnecessary. Why not just pass the string\nliteral directly when invoking parallel_exec_prog()?\n\nOr if not removed, then at make it const char psql_opts =\n\"--echo-queries --set ON_ERROR_STOP=on --no-psqlrc\";\n\n======\n\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n20.\n+#define DB_DUMP_FILE_MASK_FOR_SLOTS \"pg_upgrade_dump_%u_slots.custom\"\n\n20a.\nFor consistency with other mask names (e.g. 
DB_DUMP_LOG_FILE_MASK)\nprobably this should be called DB_DUMP_SLOTS_FILE_MASK.\n\n~\n\n20b.\nBecause the content of this dump/restore file is SQL (not custom\nbinary) wouldn't a filename suffix \".sql\" be better?\n\n======\n\n.../pg_upgrade/t/003_logical_replication.pl\n\n21.\nSome parts (formatting, comments, etc) in this file are inconsistent.\n\n21a\n\");\" is sometimes alone on a line, sometimes not\n\n~\n\n21b.\n\"Init\" versus \"Create\" nodes.\n\n~\n\n21c.\n# Check whether changes on new publisher are shipped to subscriber\n\nSUGGESTION\nCheck whether changes on the new publisher get replicated to the subscriber\n~\n\n21d.\n$result =\n $subscriber->safe_psql('postgres', \"SELECT count(*) FROM tbl\");\nis($result, qq(20),\n 'check changes are shipped to subscriber');\n\nFor symmetry with before/after, I think it would be better to do this\nsame command before the upgrade to confirm q(10) rows.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 6 Apr 2023 18:23:33 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi,\n\nOn Tue, Apr 04, 2023 at 07:00:01AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Dear hackers,\n> (CC: Amit and Julien)\n\n(thanks for the Cc)\n\n> This is a fork thread of Julien's thread, which allows to upgrade subscribers\n> without losing changes [1].\n>\n> I briefly implemented a prototype for allowing to upgrade publisher node.\n> IIUC the key lack was that replication slots used for logical replication could\n> not be copied to new node by pg_upgrade command, so this patch allows that.\n> This feature can be used when '--include-replication-slot' is specified. Also,\n> I added a small test for the typical case. It may be helpful to understand.\n>\n> Pg_upgrade internally executes pg_dump for dumping a database object from the old.\n> This feature follows this, adds a new option '--slot-only' to pg_dump command.\n> When specified, it extracts needed info from old node and generate an SQL file\n> that executes pg_create_logical_replication_slot().\n>\n> The notable deference from pre-existing is that restoring slots are done at the\n> different time. Currently pg_upgrade works with following steps:\n>\n> ...\n> 1. dump schema from old nodes\n> 2. do pg_resetwal several times to new node\n> 3. restore schema to new node\n> 4. do pg_resetwal again to new node\n> ...\n>\n> The probem is that if we create replication slots at step 3, the restart_lsn and\n> confirmed_flush_lsn are set to current_wal_insert_lsn at that time, whereas\n> pg_resetwal discards the WAL file. Such slots cannot extracting changes.\n> To handle the issue the resotring is seprarated into two phases. At the first phase\n> restoring is done at step 3, excepts replicatin slots. At the second phase\n> replication slots are restored at step 5, after doing pg_resetwal.\n>\n> Before upgrading a publisher node, all the changes gerenated on publisher must\n> be sent and applied on subscirber. This is because restart_lsn and confirmed_flush_lsn\n> of copied replication slots is same as current_wal_insert_lsn. New node resets\n> the information which WALs are really applied on subscriber and restart.\n> Basically it is not problematic because before shutting donw the publisher, its\n> walsender processes confirm all data is replicated. See WalSndDone() and related code.\n\nAs I mentioned in my original thread, I'm not very familiar with that code, but\nI'm a bit worried about \"all the changes generated on publisher must be send\nand applied\". Is that a hard requirement for the feature to work reliably? If\nyes, how does this work if some subscriber node isn't connected when the\npublisher node is stopped? I guess you could add a check in pg_upgrade to make\nsure that all logical slot are indeed caught up and fail if that's not the case\nrather than assuming that a clean shutdown implies it. It would be good to\ncover that in the TAP test, and also cover some corner cases, like any new row\nadded on the publisher node after the pg_upgrade but before the subscriber is\nreconnected is also replicated as expected.\n>\n> Currently physical slots are ignored because this is out-of-scope for me.\n> I did not any analysis about it.\n\nAgreed, but then shouldn't the option be named \"--logical-slots-only\" or\nsomething like that, same for all internal function names?\n\n\n",
"msg_date": "Fri, 7 Apr 2023 10:48:23 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
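A caught-up check of the kind Julien suggests could, for example, be run against the old cluster before it is stopped, comparing each persistent logical slot with the current WAL position. This is only a sketch of the idea, not code from the patch:

    -- list logical slots that have not yet confirmed all generated WAL
    SELECT slot_name, confirmed_flush_lsn, pg_current_wal_insert_lsn() AS current_lsn
    FROM pg_replication_slots
    WHERE slot_type = 'logical'
      AND NOT temporary
      AND confirmed_flush_lsn < pg_current_wal_insert_lsn();

If this returns any rows, pg_upgrade could refuse to proceed instead of assuming that a clean shutdown implies the subscribers are caught up.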
{
"msg_contents": "Dear Julien,\n\nThank you for giving comments!\n\n> As I mentioned in my original thread, I'm not very familiar with that code, but\n> I'm a bit worried about \"all the changes generated on publisher must be send\n> and applied\". Is that a hard requirement for the feature to work reliably?\n\nI think the requirement is needed because the existing WALs on old node cannot be\ntransported on new instance. The WAL hole from confirmed_flush to current position\ncould not be filled by newer instance.\n\n> If\n> yes, how does this work if some subscriber node isn't connected when the\n> publisher node is stopped? I guess you could add a check in pg_upgrade to make\n> sure that all logical slot are indeed caught up and fail if that's not the case\n> rather than assuming that a clean shutdown implies it. It would be good to\n> cover that in the TAP test, and also cover some corner cases, like any new row\n> added on the publisher node after the pg_upgrade but before the subscriber is\n> reconnected is also replicated as expected.\n\nHmm, good point. Current patch could not be handled the case because walsenders\nfor the such slots do not exist. I have tested your approach, however, I found that\nCHECKPOINT_SHUTDOWN record were generated twice when publisher was\nshutted down and started. It led that the confirmed_lsn of slots always was behind\nfrom WAL insert location and failed to upgrade every time.\nNow I do not have good idea to solve it... Do anyone have for this?\n\n> Agreed, but then shouldn't the option be named \"--logical-slots-only\" or\n> something like that, same for all internal function names?\n\nSeems right. Will be fixed in next version. Maybe \"--logical-replication-slots-only\"\nwill be used, per Peter's suggestion [1].\n\n[1]: https://www.postgresql.org/message-id/CAHut%2BPvpBsyxj9SrB1ZZ9gP7r1AA5QoTYjpzMcVSjQO2xQy7aw%40mail.gmail.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 7 Apr 2023 09:40:14 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Julien,\n\n> > Agreed, but then shouldn't the option be named \"--logical-slots-only\" or\n> > something like that, same for all internal function names?\n> \n> Seems right. Will be fixed in next version. Maybe\n> \"--logical-replication-slots-only\"\n> will be used, per Peter's suggestion [1].\n\nAfter considering more, I decided not to include the word \"logical\" in the option\nat this point. This is because we have not decided yet whether we dumps physical\nreplication slots or not. Current restriction has been occurred because of just\nlack of analysis and considerations, If we decide not to do that, then they will\nbe renamed accordingly.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 7 Apr 2023 12:51:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing briefly. PSA new version.\r\nIf you can I want to ask the opinion about the checking by pg_upgrade [1].\r\n\r\n> ======\r\n> General\r\n> \r\n> 1.\r\n> Since these two new options are made to work together, I think the\r\n> names should be more similar. e.g.\r\n> \r\n> pg_dump: \"--slot_only\" --> \"--replication-slots-only\"\r\n> pg_upgrade: \"--include-replication-slot\" --> \"--include-replication-slots\"\r\n> \r\n> help/comments/commit-message all should change accordingly, but I did\r\n> not give separate review comments for each of these.\r\n\r\nOK, I renamed. By the way, how do you think the suggestion raised by Julien?\r\nCurrently I did not address it because the restriction was caused by just lack of\r\nanalysis, and this may be not agreed in the community.\r\nOr, should we keep the name anyway?\r\n\r\n> 2.\r\n> I felt there maybe should be some pg_dump test cases for that new\r\n> option, rather than the current patch where it only seems to be\r\n> testing the new pg_dump option via the pg_upgrade TAP tests.\r\n\r\nHmm, I supposed that the option shoul be used only for upgrading, so I'm not sure\r\nit must be tested by only pg_dump.\r\n\r\n> Commit message\r\n> \r\n> 3.\r\n> This commit introduces a new option called \"--include-replication-slot\".\r\n> This allows nodes with logical replication slots to be upgraded. The commit can\r\n> be divided into two parts: one for pg_dump and another for pg_upgrade.\r\n> \r\n> ~\r\n> \r\n> \"new option\" --> \"new pg_upgrade\" option\r\n\r\nFixed.\r\n\r\n> 4.\r\n> For pg_upgrade, when '--include-replication-slot' is specified, it\r\n> executes pg_dump\r\n> with added option and restore from the dump. Apart from restoring\r\n> schema, pg_resetwal\r\n> must not be called after restoring replicaiton slots. This is because\r\n> the command\r\n> discards WAL files and starts from a new segment, even if they are required by\r\n> replication slots. This leads an ERROR: \"requested WAL segment XXX has already\r\n> been removed\". To avoid this, replication slots are restored at a different time\r\n> than other objects, after running pg_resetwal.\r\n> \r\n> ~\r\n> \r\n> 4a.\r\n> \"with added option and restore from the dump\" --> \"with the new\r\n> \"--slot-only\" option and restores from the dump\"\r\n\r\nFixed.\r\n\r\n> 4b.\r\n> Typo: /replicaiton/replication/\r\n\r\nFixed.\r\n\r\n> 4c\r\n> \"leads an ERROR\" --> \"leads to an ERROR\"\r\n\r\nFixed.\r\n\r\n> doc/src/sgml/ref/pg_dump.sgml\r\n> \r\n> 5.\r\n> + <varlistentry>\r\n> + <term><option>--slot-only</option></term>\r\n> + <listitem>\r\n> + <para>\r\n> + Dump only replication slots, neither the schema (data definitions) nor\r\n> + data. Mainly this is used for upgrading nodes.\r\n> + </para>\r\n> + </listitem>\r\n> \r\n> SUGGESTION\r\n> Dump only replication slots; not the schema (data definitions), nor\r\n> data. This is mainly used when upgrading nodes.\r\n\r\nFixed.\r\n\r\n> doc/src/sgml/ref/pgupgrade.sgml\r\n> \r\n> 6.\r\n> + <para>\r\n> + Transport replication slots. Currently this can work only for logical\r\n> + slots, and temporary slots are ignored. Note that pg_upgrade does not\r\n> + check the installation of plugins.\r\n> + </para>\r\n> \r\n> SUGGESTION\r\n> Upgrade replication slots. Only logical replication slots are\r\n> currently supported, and temporary slots are ignored. Note that...\r\n\r\nFixed.\r\n\r\n> src/bin/pg_dump/pg_dump.c\r\n> \r\n> 7. 
main\r\n> {\"exclude-table-data-and-children\", required_argument, NULL, 14},\r\n> -\r\n> + {\"slot-only\", no_argument, NULL, 15},\r\n> {NULL, 0, NULL, 0}\r\n> \r\n> The blank line is misplaced.\r\n\r\nFixed.\r\n\r\n> 8. main\r\n> + case 15: /* dump onlu replication slot(s) */\r\n> + dopt.slot_only = true;\r\n> + dopt.include_everything = false;\r\n> + break;\r\n> \r\n> typo: /onlu/only/\r\n\r\nFixed.\r\n\r\n> 9. main\r\n> + if (dopt.slot_only && dopt.dataOnly)\r\n> + pg_fatal(\"options --replicatin-slots and -a/--data-only cannot be\r\n> used together\");\r\n> + if (dopt.slot_only && dopt.schemaOnly)\r\n> + pg_fatal(\"options --replicatin-slots and -s/--schema-only cannot be\r\n> used together\");\r\n> +\r\n> \r\n> 9a.\r\n> typo: /replicatin/replication/\r\n\r\nFixed. Additionally, wrong parameter reference was also fixed.\r\n\r\n> 9b.\r\n> I am wondering if these checks are enough. E.g. is \"slots-only\"\r\n> compatible with \"no-publications\" ?\r\n\r\nI think there are something what should be checked more. But I'm not sure about\r\n\"no-publication\". There is a possibility that non-core logical replication is used,\r\nand at that time these options are not contradicted.\r\n\r\n> 10. main\r\n> + /*\r\n> + * If dumping replication slots are request, dumping them and skip others.\r\n> + */\r\n> + if (dopt.slot_only)\r\n> + {\r\n> + getRepliactionSlots(fout);\r\n> + goto dump;\r\n> + }\r\n> \r\n> 10a.\r\n> SUGGESTION\r\n> If dump replication-slots-only was requested, dump only them and skip\r\n> everything else.\r\n\r\nFixed.\r\n\r\n> 10b.\r\n> This code seems mutually exclusive to every other option. I'm\r\n> wondering if this code even needs 'collectRoleNames', or should the\r\n> slots option check be moved above that (and also above the 'Dumping\r\n> LOs' etc...)\r\n\r\nI read again, and I found that collected username are used to check the owner of\r\nobjects. IIUC replicaiton slots are not owned by database users, so it is not\r\nneeded. Also, the LOs should not dumped here. Based on them, I moved getRepliactionSlots()\r\nabove them.\r\n\r\n> 11. help\r\n> \r\n> + printf(_(\" --slot-only dump only replication\r\n> slots, no schema and data\\n\"));\r\n> \r\n> 11a.\r\n> SUGGESTION\r\n> \"no schema and data\" --> \"no schema or data\"\r\n\r\nFixed.\r\n\r\n> 11b.\r\n> This help is misplaced. It should be in alphabetical order consistent\r\n> with all the other help.\r\n> \r\n> ~~~\r\n> 12. getRepliactionSlots\r\n> \r\n> +/*\r\n> + * getRepliactionSlots\r\n> + * get information about replication slots\r\n> + */\r\n> +static void\r\n> +getRepliactionSlots(Archive *fout)\r\n> \r\n> Function name typo / getRepliactionSlots/ getReplicationSlots/\r\n> (also in the comment)\r\n\r\nFixed.\r\n\r\n> 13. getRepliactionSlots\r\n> \r\n> + /* Check whether we should dump or not */\r\n> + if (fout->remoteVersion < 160000 && !dopt->slot_only)\r\n> + return;\r\n> \r\n> Hmmm, is that condition correct? Shouldn't the && be || here?\r\n\r\nRight, fixed.\r\n\r\n> 14. 
dumpReplicationSlot\r\n> \r\n> +static void\r\n> +dumpReplicationSlot(Archive *fout, const ReplicationSlotInfo *slotinfo)\r\n> +{\r\n> + DumpOptions *dopt = fout->dopt;\r\n> + PQExpBuffer query;\r\n> + char *slotname;\r\n> +\r\n> + if (!dopt->slot_only)\r\n> + return;\r\n> +\r\n> + slotname = pg_strdup(slotinfo->dobj.name);\r\n> + query = createPQExpBuffer();\r\n> +\r\n> + /*\r\n> + * XXX: For simplification, pg_create_logical_replication_slot() is used.\r\n> + * Is it sufficient?\r\n> + */\r\n> + appendPQExpBuffer(query, \"SELECT pg_create_logical_replication_slot('%s', \",\r\n> + slotname);\r\n> + appendStringLiteralAH(query, slotinfo->plugin, fout);\r\n> + appendPQExpBuffer(query, \", \");\r\n> + appendStringLiteralAH(query, slotinfo->twophase, fout);\r\n> + appendPQExpBuffer(query, \");\");\r\n> +\r\n> + if (slotinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)\r\n> + ArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\r\n> + ARCHIVE_OPTS(.tag = slotname,\r\n> + .description = \"REPICATION SLOT\",\r\n> + .section = SECTION_POST_DATA,\r\n> + .createStmt = query->data));\r\n> +\r\n> + /* XXX: do we have to dump security label? */\r\n> +\r\n> + if (slotinfo->dobj.dump & DUMP_COMPONENT_COMMENT)\r\n> + dumpComment(fout, \"REPICATION SLOT\", slotname,\r\n> + NULL, NULL,\r\n> + slotinfo->dobj.catId, 0, slotinfo->dobj.dumpId);\r\n> +\r\n> + pfree(slotname);\r\n> + destroyPQExpBuffer(query);\r\n> +}\r\n> \r\n> 14a.\r\n> Wouldn't it be better to check the \"slotinfo->dobj.dump &\r\n> DUMP_COMPONENT_DEFINITION\" condition first, before building the query?\r\n> For example, see other function dumpIndexAttach().\r\n\r\nThe style was chosen because previously I referred dumpSubscription(). But I read\r\nPG manual and understood that COMMENT and SECURITY LABEL cannot be set to replication\r\nslots. Therefore, I removed comments and dump for DUMP_COMPONENT_COMMENT, then\r\nfollowed the style.\r\n\r\n> 14b.\r\n> Typo: /REPICATION SLOT/REPLICATION SLOT/ in the ARCHIVE_OPTS\r\n> description.\r\n> \r\n> ~\r\n> \r\n> 14c.\r\n> Typo: /REPICATION SLOT/REPLICATION SLOT/ in the dumpComment parameter.\r\n\r\nBoth of them were fixed.\r\n\r\n> src/bin/pg_dump/pg_dump.h\r\n> \r\n> 15. DumpableObjectType\r\n> \r\n> @@ -82,7 +82,8 @@ typedef enum\r\n> DO_PUBLICATION,\r\n> DO_PUBLICATION_REL,\r\n> DO_PUBLICATION_TABLE_IN_SCHEMA,\r\n> - DO_SUBSCRIPTION\r\n> + DO_SUBSCRIPTION,\r\n> + DO_REPICATION_SLOT\r\n> } DumpableObjectType;\r\n> \r\n> Typo /DO_REPICATION_SLOT/DO_REPLICATION_SLOT/\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/dump.c\r\n> \r\n> 16. generate_old_dump\r\n> \r\n> + /*\r\n> + * Dump replicaiton slots if needed.\r\n> + *\r\n> + * XXX We cannot dump replication slots at the same time as the schema\r\n> + * dump because we need to separate the timing of restoring replication\r\n> + * slots and other objects. Replication slots, in particular, should\r\n> + * not be restored before executing the pg_resetwal command because it\r\n> + * will remove WALs that are required by the slots.\r\n> + */\r\n> \r\n> Typo: /replicaiton/replication/\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.c\r\n> \r\n> 17. 
main\r\n> \r\n> + /*\r\n> + * Create replication slots if requested.\r\n> + *\r\n> + * XXX This must be done after doing pg_resetwal command because the\r\n> + * command will remove required WALs.\r\n> + */\r\n> + if (user_opts.include_slots)\r\n> + {\r\n> + start_postmaster(&new_cluster, true);\r\n> + create_replicaiton_slots();\r\n> + stop_postmaster(false);\r\n> + }\r\n> +\r\n> \r\n> I don't think that warrants a \"XXX\" style comment. It is just a \"Note:\".\r\n\r\nFixed. Could you please tell me the classification of them if you can?\r\n\r\n> 18. create_replicaiton_slots\r\n> +\r\n> +/*\r\n> + * create_replicaiton_slots()\r\n> + *\r\n> + * Similar to create_new_objects() but only restores replication slots.\r\n> + */\r\n> +static void\r\n> +create_replicaiton_slots(void)\r\n> \r\n> Typo: /create_replicaiton_slots/create_replication_slots/\r\n> \r\n> (Function name and comment)\r\n\r\nAll of them were replaced.\r\n\r\n> 19. create_replicaiton_slots\r\n> \r\n> + for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> + {\r\n> + char slots_file_name[MAXPGPATH],\r\n> + log_file_name[MAXPGPATH];\r\n> + DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\r\n> + char *opts;\r\n> +\r\n> + pg_log(PG_STATUS, \"%s\", old_db->db_name);\r\n> +\r\n> + snprintf(slots_file_name, sizeof(slots_file_name),\r\n> + DB_DUMP_FILE_MASK_FOR_SLOTS, old_db->db_oid);\r\n> + snprintf(log_file_name, sizeof(log_file_name),\r\n> + DB_DUMP_LOG_FILE_MASK, old_db->db_oid);\r\n> +\r\n> + opts = \"--echo-queries --set ON_ERROR_STOP=on --no-psqlrc\";\r\n> +\r\n> + parallel_exec_prog(log_file_name,\r\n> + NULL,\r\n> + \"\\\"%s/psql\\\" %s %s --dbname %s -f \\\"%s/%s\\\"\",\r\n> + new_cluster.bindir,\r\n> + cluster_conn_opts(&new_cluster),\r\n> + opts,\r\n> + old_db->db_name,\r\n> + log_opts.dumpdir,\r\n> + slots_file_name);\r\n> + }\r\n> \r\n> That 'opts' variable seems unnecessary. Why not just pass the string\r\n> literal directly when invoking parallel_exec_prog()?\r\n> \r\n> Or if not removed, then at make it const char psql_opts =\r\n> \"--echo-queries --set ON_ERROR_STOP=on --no-psqlrc\";\r\n\r\nI had tried to follow the prepare_new_globals() style, but\r\nI preferred your suggestion. Fixed.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 20.\r\n> +#define DB_DUMP_FILE_MASK_FOR_SLOTS\r\n> \"pg_upgrade_dump_%u_slots.custom\"\r\n> \r\n> 20a.\r\n> For consistency with other mask names (e.g. 
DB_DUMP_LOG_FILE_MASK)\r\n> probably this should be called DB_DUMP_SLOTS_FILE_MASK.\r\n\r\nFixed.\r\n\r\n> 20b.\r\n> Because the content of this dump/restore file is SQL (not custom\r\n> binary) wouldn't a filename suffix \".sql\" be better?\r\n\r\nRight, fixed.\r\n\r\n> .../pg_upgrade/t/003_logical_replication.pl\r\n> \r\n> 21.\r\n> Some parts (formatting, comments, etc) in this file are inconsistent.\r\n> \r\n> 21a\r\n> \");\" is sometimes alone on a line, sometimes not\r\n\r\nI ran pgperltidy and lonely \");\" is removed.\r\n\r\n> 21b.\r\n> \"Init\" versus \"Create\" nodes.\r\n\r\n\"Initialize\" was chosen.\r\n\r\n> 21c.\r\n> # Check whether changes on new publisher are shipped to subscriber\r\n> \r\n> SUGGESTION\r\n> Check whether changes on the new publisher get replicated to the subscriber\r\n\r\nFixed.\r\n\r\n> 21d.\r\n> $result =\r\n> $subscriber->safe_psql('postgres', \"SELECT count(*) FROM tbl\");\r\n> is($result, qq(20),\r\n> 'check changes are shipped to subscriber');\r\n> \r\n> For symmetry with before/after, I think it would be better to do this\r\n> same command before the upgrade to confirm q(10) rows.\r\n\r\nAdded.\r\n\r\n[1]: https://www.postgresql.org/message-id/20230407024823.3j2s4doslsjemvis%40jrouhaud\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 7 Apr 2023 13:59:58 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Apr 07, 2023 at 09:40:14AM +0000, Hayato Kuroda (Fujitsu) wrote:\n>\n> > As I mentioned in my original thread, I'm not very familiar with that code, but\n> > I'm a bit worried about \"all the changes generated on publisher must be send\n> > and applied\". Is that a hard requirement for the feature to work reliably?\n>\n> I think the requirement is needed because the existing WALs on old node cannot be\n> transported on new instance. The WAL hole from confirmed_flush to current position\n> could not be filled by newer instance.\n\nI see, that was also the first blocker I could think of when Amit mentioned\nthat feature weeks ago and I also don't see how that whole could be filled\neither.\n\n> > If\n> > yes, how does this work if some subscriber node isn't connected when the\n> > publisher node is stopped? I guess you could add a check in pg_upgrade to make\n> > sure that all logical slot are indeed caught up and fail if that's not the case\n> > rather than assuming that a clean shutdown implies it. It would be good to\n> > cover that in the TAP test, and also cover some corner cases, like any new row\n> > added on the publisher node after the pg_upgrade but before the subscriber is\n> > reconnected is also replicated as expected.\n>\n> Hmm, good point. Current patch could not be handled the case because walsenders\n> for the such slots do not exist. I have tested your approach, however, I found that\n> CHECKPOINT_SHUTDOWN record were generated twice when publisher was\n> shutted down and started. It led that the confirmed_lsn of slots always was behind\n> from WAL insert location and failed to upgrade every time.\n> Now I do not have good idea to solve it... Do anyone have for this?\n\nI'm wondering if we could just check that each slot's LSN is exactly\nsizeof(CHECKPOINT_SHUTDOWN) ago or something like that? That's hackish, but if\npg_upgrade can run it means it was a clean shutdown so it should be safe to\nassume that what's the last record in the WAL was. For the double\nshutdown checkpoint, I'm not sure that I get the problem. The check should\nonly be done at the very beginning of pg_upgrade, so there should have been\nonly one shutdown checkpoint done right?\n\n\n",
"msg_date": "Fri, 7 Apr 2023 23:29:44 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
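One way to approximate Julien's idea without counting the bytes of the shutdown checkpoint record is to compare each slot against the last checkpoint location recorded in pg_control; whether that is precise enough is exactly what is being debated here, so the following is only a sketch:

    -- slots whose confirmed position is behind the last (shutdown) checkpoint
    SELECT s.slot_name, s.confirmed_flush_lsn, c.checkpoint_lsn
    FROM pg_replication_slots AS s, pg_control_checkpoint() AS c
    WHERE s.slot_type = 'logical'
      AND s.confirmed_flush_lsn < c.checkpoint_lsn;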
{
"msg_contents": "On Fri, Apr 07, 2023 at 12:51:51PM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Dear Julien,\n> \n> > > Agreed, but then shouldn't the option be named \"--logical-slots-only\" or\n> > > something like that, same for all internal function names?\n> > \n> > Seems right. Will be fixed in next version. Maybe\n> > \"--logical-replication-slots-only\"\n> > will be used, per Peter's suggestion [1].\n> \n> After considering more, I decided not to include the word \"logical\" in the option\n> at this point. This is because we have not decided yet whether we dumps physical\n> replication slots or not. Current restriction has been occurred because of just\n> lack of analysis and considerations, If we decide not to do that, then they will\n> be renamed accordingly.\n\nWell, even if physical replication slots were eventually preserved during\npg_upgrade, maybe users would like to only keep one kind of the others so\nhaving both options could make sense.\n\nThat being said, I have a hard time believing that we could actually preserve\nphysical replication slots. I don't think that pg_upgrade final state is fully\nreproducible: not all object oids are preserved, and the various pg_restore\nare run in parallel so you're very likely to end up with small physical\ndifferences that would be incompatible with physical replication. Even if we\ncould make it totally reproducible, it would probably be at the cost of making\npg_upgrade orders of magnitude slower. And since many people are already\ncomplaining that it's too slow, that doesn't seem like something we would want.\n\n\n",
"msg_date": "Fri, 7 Apr 2023 23:39:02 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Julien,\n\n> Well, even if physical replication slots were eventually preserved during\n> pg_upgrade, maybe users would like to only keep one kind of the others so\n> having both options could make sense.\n\nYou meant to say that we can rename options like \"logical-*\" and later add a new\noption for physical slots if needed, right? PSA the new patch which handled the comment.\n\n> That being said, I have a hard time believing that we could actually preserve\n> physical replication slots. I don't think that pg_upgrade final state is fully\n> reproducible: not all object oids are preserved, and the various pg_restore\n> are run in parallel so you're very likely to end up with small physical\n> differences that would be incompatible with physical replication. Even if we\n> could make it totally reproducible, it would probably be at the cost of making\n> pg_upgrade orders of magnitude slower. And since many people are already\n> complaining that it's too slow, that doesn't seem like something we would want.\n\nYour point made sense to me. Thank you for giving your opinion.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 10 Apr 2023 09:16:09 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Julien,\n\nThank you for giving idea! I have analyzed about it.\n\n> > > If\n> > > yes, how does this work if some subscriber node isn't connected when the\n> > > publisher node is stopped? I guess you could add a check in pg_upgrade to\n> make\n> > > sure that all logical slot are indeed caught up and fail if that's not the case\n> > > rather than assuming that a clean shutdown implies it. It would be good to\n> > > cover that in the TAP test, and also cover some corner cases, like any new\n> row\n> > > added on the publisher node after the pg_upgrade but before the subscriber is\n> > > reconnected is also replicated as expected.\n> >\n> > Hmm, good point. Current patch could not be handled the case because\n> walsenders\n> > for the such slots do not exist. I have tested your approach, however, I found that\n> > CHECKPOINT_SHUTDOWN record were generated twice when publisher was\n> > shutted down and started. It led that the confirmed_lsn of slots always was\n> behind\n> > from WAL insert location and failed to upgrade every time.\n> > Now I do not have good idea to solve it... Do anyone have for this?\n> \n> I'm wondering if we could just check that each slot's LSN is exactly\n> sizeof(CHECKPOINT_SHUTDOWN) ago or something like that? That's hackish,\n> but if\n> pg_upgrade can run it means it was a clean shutdown so it should be safe to\n> assume that what's the last record in the WAL was. For the double\n> shutdown checkpoint, I'm not sure that I get the problem. The check should\n> only be done at the very beginning of pg_upgrade, so there should have been\n> only one shutdown checkpoint done right?\n\nI have analyzed about the point but it seemed to be difficult. This is because\nsome additional records like followings may be inserted. PSA the script which is\nused for testing. Note that \"double CHECKPOINT_SHUTDOWN\" issue might be wrong,\nso I wanted to withdraw it once. Sorry for noise.\n\n* HEAP/HEAP2 records. These records may be inserted by checkpointer.\n\nIIUC, if there are tuples which have not been flushed yet when shutdown is requested,\nthe checkpointer writes back all of them into heap file. At that time many WAL\nrecords are generated. I think we cannot predict the number of records beforehand.\n\n* INVALIDATION(S) records. These records may be inserted by VACUUM.\n\nThere is a possibility that autovacuum runs and generate WAL records. I think we\ncannot predict the number of records beforehand because it depends on the number\nof objects.\n\n* RUNNING_XACTS record\n\nIt might be a timing issue, but I found that sometimes background writer generated\na XLOG_RUNNING record. According to the function BackgroundWriterMain(), it will be\ngenerated when the process spends 15 seconds since last logging and there are\nimportant records. I think it is difficult to predict whether this will be appeared or not.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 10 Apr 2023 09:18:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
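The extra records described in the message above (HEAP/HEAP2, invalidations, RUNNING_XACTS) can be inspected directly on a running old cluster with the pg_walinspect extension (PostgreSQL 15+), which makes it easier to see what sits between a slot's confirmed_flush_lsn and the end of WAL. The slot name below is illustrative:

    CREATE EXTENSION IF NOT EXISTS pg_walinspect;
    -- show the records generated after the slot's confirmed position
    SELECT start_lsn, resource_manager, description
    FROM pg_get_wal_records_info(
             (SELECT confirmed_flush_lsn
              FROM pg_replication_slots
              WHERE slot_name = 'sub1_slot'),
             pg_current_wal_flush_lsn());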
{
"msg_contents": "Here are a few more review comments for patch v3-0001.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1.\n+ <varlistentry>\n+ <term><option>--include-logical-replication-slots</option></term>\n+ <listitem>\n+ <para>\n+ Upgrade logical replication slots. Only permanent replication slots\n+ included. Note that pg_upgrade does not check the installation of\n+ plugins.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nMissing word.\n\n\"Only permanent replication slots included.\" --> \"Only permanent\nreplication slots are included.\"\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n2. help\n\n@@ -1119,6 +1145,8 @@ help(const char *progname)\n printf(_(\" --no-unlogged-table-data do not dump unlogged table data\\n\"));\n printf(_(\" --on-conflict-do-nothing add ON CONFLICT DO NOTHING\nto INSERT commands\\n\"));\n printf(_(\" --quote-all-identifiers quote all identifiers, even\nif not key words\\n\"));\n+ printf(_(\" --logical-replication-slots-only\\n\"\n+ \" dump only logical replication slots,\nno schema or data\\n\"));\n printf(_(\" --rows-per-insert=NROWS number of rows per INSERT;\nimplies --inserts\\n\"));\nA previous review comment ([1] #11b) seems to have been missed. This\nhelp is misplaced. It should be in alphabetical order consistent with\nall the other help.\n\n======\nsrc/bin/pg_dump/pg_dump.h\n\n3. _LogicalReplicationSlotInfo\n\n+/*\n+ * The LogicalReplicationSlotInfo struct is used to represent replication\n+ * slots.\n+ * XXX: add more attrbutes if needed\n+ */\n+typedef struct _LogicalReplicationSlotInfo\n+{\n+ DumpableObject dobj;\n+ char *plugin;\n+ char *slottype;\n+ char *twophase;\n+} LogicalReplicationSlotInfo;\n+\n\n4a.\nThe indent of the 'LogicalReplicationSlotInfo' looks a bit strange,\nunlike others in this file. Is it OK?\n\n~\n\n4b.\nThere was no typedefs.list file in this patch. Maybe the above\nwhitespace problem is a result of that omission.\n\n======\n.../pg_upgrade/t/003_logical_replication.pl\n\n5.\n\n+# Run pg_upgrade. pg_upgrade_output.d is removed at the end\n+command_ok(\n+ [\n+ 'pg_upgrade', '--no-sync',\n+ '-d', $old_publisher->data_dir,\n+ '-D', $new_publisher->data_dir,\n+ '-b', $bindir,\n+ '-B', $bindir,\n+ '-s', $new_publisher->host,\n+ '-p', $old_publisher->port,\n+ '-P', $new_publisher->port,\n+ $mode, '--include-logical-replication-slot'\n+ ],\n+ 'run of pg_upgrade for new publisher');\n\n5a.\nHow can this test even be working as-expected with those options?\n\nHere it is passing option '--include-logical-replication-slot' but\nAFAIK the proper option name everywhere else in this patch is\n'--include-logical-replication-slots' (with the 's')\n\n~\n\n5b.\nI'm not sure that \"pg_upgrade for new publisher\" makes sense.\n\nIt's more like \"pg_upgrade of old publisher\", or simply \"pg_upgrade of\npublisher\"\n\n------\n[1] https://www.postgresql.org/message-id/TYCPR01MB5870E212F5012FD6272CE1E3F5969%40TYCPR01MB5870.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 11 Apr 2023 17:54:03 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 12:00 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n...\n> > 17. main\n> >\n> > + /*\n> > + * Create replication slots if requested.\n> > + *\n> > + * XXX This must be done after doing pg_resetwal command because the\n> > + * command will remove required WALs.\n> > + */\n> > + if (user_opts.include_slots)\n> > + {\n> > + start_postmaster(&new_cluster, true);\n> > + create_replicaiton_slots();\n> > + stop_postmaster(false);\n> > + }\n> > +\n> >\n> > I don't think that warrants a \"XXX\" style comment. It is just a \"Note:\".\n>\n> Fixed. Could you please tell me the classification of them if you can?\n\nHopefully, someone will correct me if this explanation is wrong, but\nmy understanding of the different prefixes is like this --\n\n\"XXX\" is used as a marker for future developers to consider maybe\nrevisiting/improving something that the comment refers to\ne.g.\n/* XXX - it would be better to code this using blah but for now we did\nnot.... */\n/* XXX - option 'foo' is not currently supported but... */\n/* XXX - it might be worth considering adding more checks or an assert\nhere because... */\n\nOTOH, \"Note\" is just for highlighting why something is the way it is,\nbut with no implication that it should be revisited/changed in the\nfuture.\ne.g.\n/* Note: We deliberately do not test the state here because... */\n/* Note: This memory must be zeroed because... */\n/* Note: This string has no '\\0' terminator so... */\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 11 Apr 2023 18:20:43 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for giving comments! PSA new version.\r\n\r\n> ======\r\n> doc/src/sgml/ref/pgupgrade.sgml\r\n> \r\n> 1.\r\n> + <varlistentry>\r\n> + <term><option>--include-logical-replication-slots</option></term>\r\n> + <listitem>\r\n> + <para>\r\n> + Upgrade logical replication slots. Only permanent replication slots\r\n> + included. Note that pg_upgrade does not check the installation of\r\n> + plugins.\r\n> + </para>\r\n> + </listitem>\r\n> + </varlistentry>\r\n> \r\n> Missing word.\r\n> \r\n> \"Only permanent replication slots included.\" --> \"Only permanent\r\n> replication slots are included.\"\r\n\r\nFixed.\r\n\r\n> ======\r\n> src/bin/pg_dump/pg_dump.c\r\n> \r\n> 2. help\r\n> \r\n> @@ -1119,6 +1145,8 @@ help(const char *progname)\r\n> printf(_(\" --no-unlogged-table-data do not dump unlogged table\r\n> data\\n\"));\r\n> printf(_(\" --on-conflict-do-nothing add ON CONFLICT DO NOTHING\r\n> to INSERT commands\\n\"));\r\n> printf(_(\" --quote-all-identifiers quote all identifiers, even\r\n> if not key words\\n\"));\r\n> + printf(_(\" --logical-replication-slots-only\\n\"\r\n> + \" dump only logical replication slots,\r\n> no schema or data\\n\"));\r\n> printf(_(\" --rows-per-insert=NROWS number of rows per INSERT;\r\n> implies --inserts\\n\"));\r\n> A previous review comment ([1] #11b) seems to have been missed. This\r\n> help is misplaced. It should be in alphabetical order consistent with\r\n> all the other help.\r\n\r\nSorry, fixed.\r\n\r\n> src/bin/pg_dump/pg_dump.h\r\n> \r\n> 3. _LogicalReplicationSlotInfo\r\n> \r\n> +/*\r\n> + * The LogicalReplicationSlotInfo struct is used to represent replication\r\n> + * slots.\r\n> + * XXX: add more attrbutes if needed\r\n> + */\r\n> +typedef struct _LogicalReplicationSlotInfo\r\n> +{\r\n> + DumpableObject dobj;\r\n> + char *plugin;\r\n> + char *slottype;\r\n> + char *twophase;\r\n> +} LogicalReplicationSlotInfo;\r\n> +\r\n> \r\n> 4a.\r\n> The indent of the 'LogicalReplicationSlotInfo' looks a bit strange,\r\n> unlike others in this file. Is it OK?\r\n\r\nI was betrayed by pgindent because of the reason you pointed out.\r\nFixed.\r\n\r\n> 4b.\r\n> There was no typedefs.list file in this patch. Maybe the above\r\n> whitespace problem is a result of that omission.\r\n\r\nYour analysis is correct. Added.\r\n\r\n> .../pg_upgrade/t/003_logical_replication.pl\r\n> \r\n> 5.\r\n> \r\n> +# Run pg_upgrade. pg_upgrade_output.d is removed at the end\r\n> +command_ok(\r\n> + [\r\n> + 'pg_upgrade', '--no-sync',\r\n> + '-d', $old_publisher->data_dir,\r\n> + '-D', $new_publisher->data_dir,\r\n> + '-b', $bindir,\r\n> + '-B', $bindir,\r\n> + '-s', $new_publisher->host,\r\n> + '-p', $old_publisher->port,\r\n> + '-P', $new_publisher->port,\r\n> + $mode, '--include-logical-replication-slot'\r\n> + ],\r\n> + 'run of pg_upgrade for new publisher');\r\n> \r\n> 5a.\r\n> How can this test even be working as-expected with those options?\r\n> \r\n> Here it is passing option '--include-logical-replication-slot' but\r\n> AFAIK the proper option name everywhere else in this patch is\r\n> '--include-logical-replication-slots' (with the 's')\r\n\r\nThis is because getopt_long implemented by GNU can accept incomplete options if\r\ncollect one can be predicted from input. E.g. pg_upgrade on linux can accept\r\n`--ve` as `--verbose`, whereas the binary built on Windows cannot.\r\n\r\nAnyway, the difference was not my expectation. 
Fixed.\r\n\r\n> 5b.\r\n> I'm not sure that \"pg_upgrade for new publisher\" makes sense.\r\n> \r\n> It's more like \"pg_upgrade of old publisher\", or simply \"pg_upgrade of\r\n> publisher\"\r\n>\r\n\r\nFixed.\r\n\r\nAdditionally, I fixed two bugs which were detected by AddressSanitizer.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 11 Apr 2023 10:27:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for giving explanation.\r\n\r\n> \r\n> Hopefully, someone will correct me if this explanation is wrong, but\r\n> my understanding of the different prefixes is like this --\r\n> \r\n> \"XXX\" is used as a marker for future developers to consider maybe\r\n> revisiting/improving something that the comment refers to\r\n> e.g.\r\n> /* XXX - it would be better to code this using blah but for now we did\r\n> not.... */\r\n> /* XXX - option 'foo' is not currently supported but... */\r\n> /* XXX - it might be worth considering adding more checks or an assert\r\n> here because... */\r\n> \r\n> OTOH, \"Note\" is just for highlighting why something is the way it is,\r\n> but with no implication that it should be revisited/changed in the\r\n> future.\r\n> e.g.\r\n> /* Note: We deliberately do not test the state here because... */\r\n> /* Note: This memory must be zeroed because... */\r\n> /* Note: This string has no '\\0' terminator so... */\r\n\r\nI confirmed that current \"XXX\" comments must be removed by improving\r\nor some decision. Therefore I kept current annotation.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 11 Apr 2023 10:30:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\n\nMy PoC does not read and copy logical mappings files to new node, but I\ndid not analyzed in detail whether it is correct. Now I have done this and\nconsidered that they do not have to be copied because transactions which executed\nat the same time as rewriting are no longer decoded. How do you think?\nFollowings my analysis.\n\n## What is logical mappings files?\n\nLogical mappings file is used to track the system catalogs while logical decoding\neven if its heap file is written. Sometimes catalog heaps files are modified, or\ncompletely replaced to new files via VACUUM FULL or CLUSTER, but reorder buffer\ncannot not track new one as-is. Mappings files allow to do them.\n\nThe file contains key-value relations for old-to-new tuples. Also, the name of\nfiles contains the LSN where the triggered event is happen.\n\nMappings files are needed when transactions which modify catalogs are decoded.\nIf the LSN of files are older than restart_lsn, they are no longer needed then\nremoved. Please see CheckPointLogicalRewriteHeap().\n\n## Is it needed?\n\nI think this is not needed.\nCurrently pg_upgrade dumps important information from old publisher and then\nexecute pg_create_logical_replication_slot() on new one. Apart from\npg_copy_logical_replication_slot(), retart_lsn and confirmed_flush_lsn for old\nslot is not taken over to the new slot. They are recalculated on new node while\ncreating. This means that transactions which have modified catalog heaps on the\nold publisher are no longer decoded on new publisher.\n\nTherefore, the mappings files on old publisher are not needed for new one.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 11 Apr 2023 10:40:54 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "FYI, here are some minor review comments for v4-0001\n\n======\nsrc/bin/pg_dump/pg_backup.h\n\n1.\n+ int logical_slot_only;\n\nThe field should be plural - \"logical_slots_only\"\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n2.\n+ appendPQExpBufferStr(query,\n+ \"SELECT r.slot_name, r.plugin, r.two_phase \"\n+ \"FROM pg_replication_slots r \"\n+ \"WHERE r.database = current_database() AND temporary = false \"\n+ \"AND wal_status IN ('reserved', 'extended');\");\n\nThe alias 'r' may not be needed at all here, but since you already\nhave it IMO it looks a bit strange that you used it for only some of\nthe columns but not others.\n\n~~~\n\n3.\n+\n+ /* FIXME: force dumping */\n+ slotinfo[i].dobj.dump = DUMP_COMPONENT_ALL;\n\nWhy the \"FIXME\" here? Are you intending to replace this code with\nsomething else?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 12 Apr 2023 15:25:31 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for giving comments. PSA new version.\r\n\r\n> src/bin/pg_dump/pg_backup.h\r\n> \r\n> 1.\r\n> + int logical_slot_only;\r\n> \r\n> The field should be plural - \"logical_slots_only\"\r\n\r\nFixed.\r\n\r\n> src/bin/pg_dump/pg_dump.c\r\n> \r\n> 2.\r\n> + appendPQExpBufferStr(query,\r\n> + \"SELECT r.slot_name, r.plugin, r.two_phase \"\r\n> + \"FROM pg_replication_slots r \"\r\n> + \"WHERE r.database = current_database() AND temporary = false \"\r\n> + \"AND wal_status IN ('reserved', 'extended');\");\r\n> \r\n> The alias 'r' may not be needed at all here, but since you already\r\n> have it IMO it looks a bit strange that you used it for only some of\r\n> the columns but not others.\r\n\r\nRight, I removed alias. Moreover, the namespace 'pg_catalog' is now specified.\r\n\r\n> 3.\r\n> +\r\n> + /* FIXME: force dumping */\r\n> + slotinfo[i].dobj.dump = DUMP_COMPONENT_ALL;\r\n> \r\n> Why the \"FIXME\" here? Are you intending to replace this code with\r\n> something else?\r\n\r\nI was added FIXME because I was not sure whether we must add selectDumpable...()\r\nfunction was needed or not. Now I have been thinking that such a functions are not\r\nneeded, so replaced comments. More detail, please see following:\r\n\r\nReplication slots cannot be a member of extension because pg_create_logical_replication_slot()\r\ncannot be called within the install script. This means that checkExtensionMembership()\r\nis not needed. Moreover, we do not have have any options to include/exclude slots\r\nin dumping, so checking their name like selectDumpableExtension() is not needed.\r\nBased on them, I think the function is not needed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 12 Apr 2023 07:55:28 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san.\n\nI do not have any more review comments for the v5 patch, but here are\na few remaining nitpick items.\n\n======\nGeneral\n\n1.\nThere were a couple of comments that I thought would appear less\nsquished (aka more readable) if there was a blank line preceding the\nXXX.\n\n1a. This one is in getLogicalReplicationSlots\n\n+ /*\n+ * Get replication slots.\n+ *\n+ * XXX: Which information must be extracted from old node? Currently three\n+ * attributes are extracted because they are used by\n+ * pg_create_logical_replication_slot().\n+ * XXX: Do we have to support physical slots?\n+ */\n\n~\n\n1b. This one is for the LogicalReplicationSlotInfo typedef\n\n+/*\n+ * The LogicalReplicationSlotInfo struct is used to represent replication\n+ * slots.\n+ * XXX: add more attrbutes if needed\n+ */\n\nBTW -- I just noticed there is a typo in that comment. /attrbutes/attributes/\n\n======\nsrc/bin/pg_dump/pg_dump_sort.c\n\n2. describeDumpableObject\n\n+ case DO_LOGICAL_REPLICATION_SLOT:\n+ snprintf(buf, bufsize,\n+ \"REPLICATION SLOT (ID %d NAME %s)\",\n+ obj->dumpId, obj->name);\n+ return;\n\nSince everything else was changed to say logical replication slot,\nshould this string be changed to \"LOGICAL REPLICATION SLOT (ID %d NAME\n%s)\"?\n\n======\n.../pg_upgrade/t/003_logical_replication.pl\n\n3.\nShould the name of this TAP test file really be 03_logical_replication_slots.pl?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 13 Apr 2023 19:52:33 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for checking. Then we can wait comments from others.\r\nPSA modified version.\r\n\r\n> 1.\r\n> There were a couple of comments that I thought would appear less\r\n> squished (aka more readable) if there was a blank line preceding the\r\n> XXX.\r\n> \r\n> 1a. This one is in getLogicalReplicationSlots\r\n> \r\n> + /*\r\n> + * Get replication slots.\r\n> + *\r\n> + * XXX: Which information must be extracted from old node? Currently three\r\n> + * attributes are extracted because they are used by\r\n> + * pg_create_logical_replication_slot().\r\n> + * XXX: Do we have to support physical slots?\r\n> + */\r\n\r\nAdded.\r\n\r\n> 1b. This one is for the LogicalReplicationSlotInfo typedef\r\n> \r\n> +/*\r\n> + * The LogicalReplicationSlotInfo struct is used to represent replication\r\n> + * slots.\r\n> + * XXX: add more attrbutes if needed\r\n> + */\r\n\r\nAdded.\r\n\r\n> BTW -- I just noticed there is a typo in that comment. /attrbutes/attributes/\r\n\r\nGood finding, replaced.\r\n\r\n> src/bin/pg_dump/pg_dump_sort.c\r\n> \r\n> 2. describeDumpableObject\r\n> \r\n> + case DO_LOGICAL_REPLICATION_SLOT:\r\n> + snprintf(buf, bufsize,\r\n> + \"REPLICATION SLOT (ID %d NAME %s)\",\r\n> + obj->dumpId, obj->name);\r\n> + return;\r\n> \r\n> Since everything else was changed to say logical replication slot,\r\n> should this string be changed to \"LOGICAL REPLICATION SLOT (ID %d NAME\r\n> %s)\"?\r\n\r\nI missed to replace, changed.\r\n\r\n> .../pg_upgrade/t/003_logical_replication.pl\r\n> \r\n> 3.\r\n> Should the name of this TAP test file really be 03_logical_replication_slots.pl?\r\n>\r\n\r\nHmm, not sure. Currently I renamed once according to your advice, but personally\r\nanother feature which allows to upgrade subscriber[1] should be tested in the same\r\nperl file. That's why I named as \"003_logical_replication.pl\"\r\n\r\n[1]: https://www.postgresql.org/message-id/20230217075433.u5mjly4d5cr4hcfe%40jrouhaud\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 14 Apr 2023 05:53:37 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi,\n\nSorry for the delay, I didn't had time to come back to it until this afternoon.\n\nOn Mon, Apr 10, 2023 at 09:18:46AM +0000, Hayato Kuroda (Fujitsu) wrote:\n>\n> I have analyzed about the point but it seemed to be difficult. This is because\n> some additional records like followings may be inserted. PSA the script which is\n> used for testing. Note that \"double CHECKPOINT_SHUTDOWN\" issue might be wrong,\n> so I wanted to withdraw it once. Sorry for noise.\n>\n> * HEAP/HEAP2 records. These records may be inserted by checkpointer.\n>\n> IIUC, if there are tuples which have not been flushed yet when shutdown is requested,\n> the checkpointer writes back all of them into heap file. At that time many WAL\n> records are generated. I think we cannot predict the number of records beforehand.\n>\n> * INVALIDATION(S) records. These records may be inserted by VACUUM.\n>\n> There is a possibility that autovacuum runs and generate WAL records. I think we\n> cannot predict the number of records beforehand because it depends on the number\n> of objects.\n>\n> * RUNNING_XACTS record\n>\n> It might be a timing issue, but I found that sometimes background writer generated\n> a XLOG_RUNNING record. According to the function BackgroundWriterMain(), it will be\n> generated when the process spends 15 seconds since last logging and there are\n> important records. I think it is difficult to predict whether this will be appeared or not.\n\nI don't think that your analysis is correct. Slots are guaranteed to be\nstopped after all the normal backends have been stopped, exactly to avoid such\nextraneous records.\n\nWhat is happening here is that the slot's confirmed_flush_lsn is properly\nupdated in memory and ends up being the same as the current LSN before the\nshutdown. But as it's a logical slot and those records aren't decoded, the\nslot isn't marked as dirty and therefore isn't saved to disk. You don't see\nthat behavior when doing a manual checkpoint before (per your script comment),\nas in that case the checkpoint also tries to save the slot to disk but then\nfinds a slot that was marked as dirty and therefore saves it.\n\nIn your script's scenario, when you restart the server the previous slot data\nis restored and the confirmed_flush_lsn goes backward, which explains those\nextraneous records.\n\nIt's probably totally harmless to throw away that value for now (and probably\nalso doesn't lead to crazy amount of work after restart, I really don't know\nmuch about the logical slot code), but clearly becomes problematic with your\nusecase. One easy way to fix this is to teach the checkpoint code to force\nsaving the logical slots to disk even if they're not marked as dirty during a\nshutdown checkpoint, as done in the attached v1 patch (renamed as .txt to not\ninterfere with the cfbot). With this patch applied I reliably only see a final\nshutdown checkpoint record with your scenario.\n\nNow such a change will make shutdown a bit more expensive when using logical\nreplication, even if in 99% of cases you will not need to save the\nconfirmed_flush_lsn value, so I don't know if that's acceptable or not.",
"msg_date": "Fri, 14 Apr 2023 14:12:48 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
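To make the approach above concrete, here is a minimal sketch of the kind of change being discussed; it is not the attached patch itself, and the is_shutdown parameter is an assumed addition (the current function takes no argument). The idea is that a shutdown checkpoint marks every logical slot dirty so that its in-memory confirmed_flush_lsn is flushed to disk and does not go backwards after a restart:

void
CheckPointReplicationSlots(bool is_shutdown)
{
	/* Prevent slots from being created or dropped while we iterate. */
	LWLockAcquire(ReplicationSlotAllocationLock, LW_SHARED);

	for (int i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];
		char		path[MAXPGPATH];

		if (!s->in_use)
			continue;

		if (is_shutdown && SlotIsLogical(s))
		{
			/* Mark dirty so SaveSlotToPath() does not skip the write. */
			SpinLockAcquire(&s->mutex);
			s->just_dirtied = true;
			s->dirty = true;
			SpinLockRelease(&s->mutex);
		}

		/* Save the slot to disk; locking is handled in SaveSlotToPath(). */
		sprintf(path, "pg_replslot/%s", NameStr(s->data.name));
		SaveSlotToPath(s, path, LOG);
	}

	LWLockRelease(ReplicationSlotAllocationLock);
}

The trade-off mentioned above is visible here: every clean shutdown rewrites each logical slot's state file once, even when nothing has changed.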
{
"msg_contents": "Dear Julien,\n\n> Sorry for the delay, I didn't had time to come back to it until this afternoon.\n\nNo issues, everyone is busy:-).\n\n> I don't think that your analysis is correct. Slots are guaranteed to be\n> stopped after all the normal backends have been stopped, exactly to avoid such\n> extraneous records.\n> \n> What is happening here is that the slot's confirmed_flush_lsn is properly\n> updated in memory and ends up being the same as the current LSN before the\n> shutdown. But as it's a logical slot and those records aren't decoded, the\n> slot isn't marked as dirty and therefore isn't saved to disk. You don't see\n> that behavior when doing a manual checkpoint before (per your script comment),\n> as in that case the checkpoint also tries to save the slot to disk but then\n> finds a slot that was marked as dirty and therefore saves it.\n> \n> In your script's scenario, when you restart the server the previous slot data\n> is restored and the confirmed_flush_lsn goes backward, which explains those\n> extraneous records.\n\nSo you meant to say that the key point was that some records which are not sent\nto subscriber do not mark slots as dirty, hence the updated confirmed_flush was\nnot written into slot file. Is it right? LogicalConfirmReceivedLocation() is called\nby walsender when the process gets reply from worker process, so your analysis\nseems correct.\n\n> It's probably totally harmless to throw away that value for now (and probably\n> also doesn't lead to crazy amount of work after restart, I really don't know\n> much about the logical slot code), but clearly becomes problematic with your\n> usecase. One easy way to fix this is to teach the checkpoint code to force\n> saving the logical slots to disk even if they're not marked as dirty during a\n> shutdown checkpoint, as done in the attached v1 patch (renamed as .txt to not\n> interfere with the cfbot). With this patch applied I reliably only see a final\n> shutdown checkpoint record with your scenario.\n> \n> Now such a change will make shutdown a bit more expensive when using logical\n> replication, even if in 99% of cases you will not need to save the\n> confirmed_flush_lsn value, so I don't know if that's acceptable or not.\n\nIn any case we these records must be advanced. IIUC, currently such records are\nread after rebooting but ingored, and this patch just skips them. I have not measured,\nbut there is a possibility that is not additional overhead, just a trade-off.\n\nCurrently I did not come up with another solution, so I have included your patch.\nPlease see 0002.\n\nAdditionally, I added a checking functions in 0003.\nAccording to pg_resetwal and other functions, the length of CHECKPOINT_SHUTDOWN\nrecord seems (SizeOfXLogRecord + SizeOfXLogRecordDataHeaderShort + sizeof(CheckPoint)).\nTherefore, the function ensures that the difference between current insert position\nand confirmed_flush_lsn is less than (above + page header).\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Fri, 14 Apr 2023 10:30:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
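As a sketch of the size check described above (the helper name and exact placement are assumptions, not part of the posted 0003 patch), the tolerated gap between a slot's confirmed_flush_lsn and the current WAL insert position can be derived from the size of a shutdown checkpoint record plus a long page header, since that record may start a new segment:

#include "access/xlog_internal.h"	/* SizeOfXLogLongPHD */
#include "access/xlogrecord.h"		/* SizeOfXLogRecord, SizeOfXLogRecordDataHeaderShort */
#include "catalog/pg_control.h"		/* CheckPoint */

/* Hypothetical helper: has the slot decoded everything except the shutdown checkpoint? */
static bool
slot_is_caught_up(XLogRecPtr confirmed_flush, XLogRecPtr current_insert)
{
	uint64		allowed = SizeOfXLogRecord +
		SizeOfXLogRecordDataHeaderShort +
		sizeof(CheckPoint) +
		SizeOfXLogLongPHD;

	return (current_insert - confirmed_flush) <= allowed;
}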
{
"msg_contents": "On Fri, 14 Apr 2023 at 16:00, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Julien,\n>\n> > Sorry for the delay, I didn't had time to come back to it until this afternoon.\n>\n> No issues, everyone is busy:-).\n>\n> > I don't think that your analysis is correct. Slots are guaranteed to be\n> > stopped after all the normal backends have been stopped, exactly to avoid such\n> > extraneous records.\n> >\n> > What is happening here is that the slot's confirmed_flush_lsn is properly\n> > updated in memory and ends up being the same as the current LSN before the\n> > shutdown. But as it's a logical slot and those records aren't decoded, the\n> > slot isn't marked as dirty and therefore isn't saved to disk. You don't see\n> > that behavior when doing a manual checkpoint before (per your script comment),\n> > as in that case the checkpoint also tries to save the slot to disk but then\n> > finds a slot that was marked as dirty and therefore saves it.\n> >\n> > In your script's scenario, when you restart the server the previous slot data\n> > is restored and the confirmed_flush_lsn goes backward, which explains those\n> > extraneous records.\n>\n> So you meant to say that the key point was that some records which are not sent\n> to subscriber do not mark slots as dirty, hence the updated confirmed_flush was\n> not written into slot file. Is it right? LogicalConfirmReceivedLocation() is called\n> by walsender when the process gets reply from worker process, so your analysis\n> seems correct.\n>\n> > It's probably totally harmless to throw away that value for now (and probably\n> > also doesn't lead to crazy amount of work after restart, I really don't know\n> > much about the logical slot code), but clearly becomes problematic with your\n> > usecase. One easy way to fix this is to teach the checkpoint code to force\n> > saving the logical slots to disk even if they're not marked as dirty during a\n> > shutdown checkpoint, as done in the attached v1 patch (renamed as .txt to not\n> > interfere with the cfbot). With this patch applied I reliably only see a final\n> > shutdown checkpoint record with your scenario.\n> >\n> > Now such a change will make shutdown a bit more expensive when using logical\n> > replication, even if in 99% of cases you will not need to save the\n> > confirmed_flush_lsn value, so I don't know if that's acceptable or not.\n>\n> In any case we these records must be advanced. IIUC, currently such records are\n> read after rebooting but ingored, and this patch just skips them. 
I have not measured,\n> but there is a possibility that is not additional overhead, just a trade-off.\n>\n> Currently I did not come up with another solution, so I have included your patch.\n> Please see 0002.\n>\n> Additionally, I added a checking functions in 0003.\n> According to pg_resetwal and other functions, the length of CHECKPOINT_SHUTDOWN\n> record seems (SizeOfXLogRecord + SizeOfXLogRecordDataHeaderShort + sizeof(CheckPoint)).\n> Therefore, the function ensures that the difference between current insert position\n> and confirmed_flush_lsn is less than (above + page header).\n\nThanks for the patches.\nCurrently the two_phase enabled slots are not getting restored from\nthe dumped contents, this is because we are passing the twophase value\nas the second parameter which indicates if it is temporary or not to\nthe pg_create_logical_replication_slot function as in [1], while\nrestoring it is internally creating the slot as a temporary slot in\nthis case:\n+ appendPQExpBuffer(query, \"SELECT\npg_catalog.pg_create_logical_replication_slot('%s', \",\n+ slotname);\n+ appendStringLiteralAH(query, slotinfo->plugin, fout);\n+ appendPQExpBuffer(query, \", \");\n+ appendStringLiteralAH(query, slotinfo->twophase, fout);\n+ appendPQExpBuffer(query, \");\");\n+\n+ ArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\n+ ARCHIVE_OPTS(.tag = slotname,\n+\n.description = \"REPLICATION SLOT\",\n+\n.section = SECTION_POST_DATA,\n+\n.createStmt = query->data));\n+\n+ pfree(slotname);\n+ destroyPQExpBuffer(query);\n+ }\n+}\n\nSince we are dumping only the permanent slots, we could update\ntemporary parameter as false:\n+ appendPQExpBuffer(query, \"SELECT\npg_catalog.pg_create_logical_replication_slot('%s', \",\n+ slotname);\n+ appendStringLiteralAH(query, slotinfo->plugin, fout);\n+ appendPQExpBuffer(query, \", f, \");\n+ appendStringLiteralAH(query, slotinfo->twophase, fout);\n+ appendPQExpBuffer(query, \");\");\n\n[1] - https://www.postgresql.org/docs/devel/functions-admin.html\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 19 Apr 2023 15:11:08 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
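For reference, the signature is pg_create_logical_replication_slot(slot_name, plugin, temporary DEFAULT false, twophase DEFAULT false), so the dumped command needs an explicit third argument before the two_phase value. A corrected construction might look like the following sketch (variable names follow the fragment quoted above):

appendPQExpBuffer(query,
				  "SELECT pg_catalog.pg_create_logical_replication_slot('%s', ",
				  slotname);
appendStringLiteralAH(query, slotinfo->plugin, fout);
/* temporary = false: only permanent slots are dumped for pg_upgrade */
appendPQExpBufferStr(query, ", false, ");
appendStringLiteralAH(query, slotinfo->twophase, fout);
appendPQExpBufferStr(query, ");");

which produces, with hypothetical slot and plugin names, something like:
SELECT pg_catalog.pg_create_logical_replication_slot('sub_slot', 'pgoutput', false, 'f');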
{
"msg_contents": "On Fri, 14 Apr 2023 at 16:00, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Julien,\n>\n> > Sorry for the delay, I didn't had time to come back to it until this afternoon.\n>\n> No issues, everyone is busy:-).\n>\n> > I don't think that your analysis is correct. Slots are guaranteed to be\n> > stopped after all the normal backends have been stopped, exactly to avoid such\n> > extraneous records.\n> >\n> > What is happening here is that the slot's confirmed_flush_lsn is properly\n> > updated in memory and ends up being the same as the current LSN before the\n> > shutdown. But as it's a logical slot and those records aren't decoded, the\n> > slot isn't marked as dirty and therefore isn't saved to disk. You don't see\n> > that behavior when doing a manual checkpoint before (per your script comment),\n> > as in that case the checkpoint also tries to save the slot to disk but then\n> > finds a slot that was marked as dirty and therefore saves it.\n> >\n> > In your script's scenario, when you restart the server the previous slot data\n> > is restored and the confirmed_flush_lsn goes backward, which explains those\n> > extraneous records.\n>\n> So you meant to say that the key point was that some records which are not sent\n> to subscriber do not mark slots as dirty, hence the updated confirmed_flush was\n> not written into slot file. Is it right? LogicalConfirmReceivedLocation() is called\n> by walsender when the process gets reply from worker process, so your analysis\n> seems correct.\n>\n> > It's probably totally harmless to throw away that value for now (and probably\n> > also doesn't lead to crazy amount of work after restart, I really don't know\n> > much about the logical slot code), but clearly becomes problematic with your\n> > usecase. One easy way to fix this is to teach the checkpoint code to force\n> > saving the logical slots to disk even if they're not marked as dirty during a\n> > shutdown checkpoint, as done in the attached v1 patch (renamed as .txt to not\n> > interfere with the cfbot). With this patch applied I reliably only see a final\n> > shutdown checkpoint record with your scenario.\n> >\n> > Now such a change will make shutdown a bit more expensive when using logical\n> > replication, even if in 99% of cases you will not need to save the\n> > confirmed_flush_lsn value, so I don't know if that's acceptable or not.\n>\n> In any case we these records must be advanced. IIUC, currently such records are\n> read after rebooting but ingored, and this patch just skips them. I have not measured,\n> but there is a possibility that is not additional overhead, just a trade-off.\n>\n> Currently I did not come up with another solution, so I have included your patch.\n> Please see 0002.\n>\n> Additionally, I added a checking functions in 0003\n> According to pg_resetwal and other functions, the length of CHECKPOINT_SHUTDOWN\n> record seems (SizeOfXLogRecord + SizeOfXLogRecordDataHeaderShort + sizeof(CheckPoint)).\n> Therefore, the function ensures that the difference between current insert position\n> and confirmed_flush_lsn is less than (above + page header).\n\nLogical replication slots can be created only if wal_level >= logical,\ncurrently we do not have any check to see if wal_level >= logical if\n\"--include-logical-replication-slots\" option is specified. 
If\ninclude-logical-replication-slots is specified with pg_upgrade, we\nwill be creating replication slots after a lot of steps like\nperforming prechecks, analyzing, freezing, deleting, restoring,\ncopying, setting related objects and then create replication slot,\nwhere we will be erroring out after a lot of time(Many cases\npg_upgrade takes a lot of hours to perform these operations). I feel\nit would be better to add a check in the beginning itself somewhere in\ncheck_new_cluster to see if wal_level is set appropriately in case of\ninclude-logical-replication-slot option to detect and throw an error\nearly itself.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 19 Apr 2023 16:35:18 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
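A sketch of such an early check, reusing pg_upgrade's existing connection helpers (the function name, messages, and call site are assumptions, not part of the posted patch):

static void
check_new_cluster_logical_replication_slots(void)
{
	PGconn	   *conn = connectToServer(&new_cluster, "template1");
	PGresult   *res;

	prep_status("Checking settings for logical replication slots");

	res = executeQueryOrDie(conn, "SHOW wal_level");
	if (strcmp(PQgetvalue(res, 0, 0), "logical") != 0)
		pg_fatal("wal_level must be \"logical\" on the new cluster to restore logical replication slots");
	PQclear(res);

	res = executeQueryOrDie(conn, "SHOW max_replication_slots");
	if (atoi(PQgetvalue(res, 0, 0)) == 0)
		pg_fatal("max_replication_slots must be greater than 0 on the new cluster");
	PQclear(res);

	PQfinish(conn);
	check_ok();
}

Calling this from check_new_cluster() only when --include-logical-replication-slots is given keeps the failure early and cheap, before any schema or data has been transferred.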
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for giving a comment! New patch will be available soon.\r\n\r\n> Thanks for the patches.\r\n> Currently the two_phase enabled slots are not getting restored from\r\n> the dumped contents, this is because we are passing the twophase value\r\n> as the second parameter which indicates if it is temporary or not to\r\n> the pg_create_logical_replication_slot function as in [1], while\r\n> restoring it is internally creating the slot as a temporary slot in\r\n> this case:\r\n> + appendPQExpBuffer(query, \"SELECT\r\n> pg_catalog.pg_create_logical_replication_slot('%s', \",\r\n> + slotname);\r\n> + appendStringLiteralAH(query, slotinfo->plugin, fout);\r\n> + appendPQExpBuffer(query, \", \");\r\n> + appendStringLiteralAH(query, slotinfo->twophase, fout);\r\n> + appendPQExpBuffer(query, \");\");\r\n> +\r\n> + ArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\r\n> + ARCHIVE_OPTS(.tag = slotname,\r\n> +\r\n> .description = \"REPLICATION SLOT\",\r\n> +\r\n> .section = SECTION_POST_DATA,\r\n> +\r\n> .createStmt = query->data));\r\n> +\r\n> + pfree(slotname);\r\n> + destroyPQExpBuffer(query);\r\n> + }\r\n> +}\r\n> \r\n> Since we are dumping only the permanent slots, we could update\r\n> temporary parameter as false:\r\n> + appendPQExpBuffer(query, \"SELECT\r\n> pg_catalog.pg_create_logical_replication_slot('%s', \",\r\n> + slotname);\r\n> + appendStringLiteralAH(query, slotinfo->plugin, fout);\r\n> + appendPQExpBuffer(query, \", f, \");\r\n> + appendStringLiteralAH(query, slotinfo->twophase, fout);\r\n> + appendPQExpBuffer(query, \");\");\r\n> \r\n> [1] - https://www.postgresql.org/docs/devel/functions-admin.html\r\n\r\nYeah, you are right. I misread the interface of the function.\r\nFixed and added new test.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 20 Apr 2023 05:28:42 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for reviewing! PSA new patchset.\r\n\r\n> > Additionally, I added a checking functions in 0003\r\n> > According to pg_resetwal and other functions, the length of\r\n> CHECKPOINT_SHUTDOWN\r\n> > record seems (SizeOfXLogRecord + SizeOfXLogRecordDataHeaderShort +\r\n> sizeof(CheckPoint)).\r\n> > Therefore, the function ensures that the difference between current insert\r\n> position\r\n> > and confirmed_flush_lsn is less than (above + page header).\r\n> \r\n> Logical replication slots can be created only if wal_level >= logical,\r\n> currently we do not have any check to see if wal_level >= logical if\r\n> \"--include-logical-replication-slots\" option is specified. If\r\n> include-logical-replication-slots is specified with pg_upgrade, we\r\n> will be creating replication slots after a lot of steps like\r\n> performing prechecks, analyzing, freezing, deleting, restoring,\r\n> copying, setting related objects and then create replication slot,\r\n> where we will be erroring out after a lot of time(Many cases\r\n> pg_upgrade takes a lot of hours to perform these operations). I feel\r\n> it would be better to add a check in the beginning itself somewhere in\r\n> check_new_cluster to see if wal_level is set appropriately in case of\r\n> include-logical-replication-slot option to detect and throw an error\r\n> early itself.\r\n\r\nI see your point. Moreover, I think max_replication_slots != 0 must be also checked.\r\nI added a checking function and related test in 0001.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 20 Apr 2023 05:31:16 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 20, 2023 at 05:31:16AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Dear Vignesh,\n>\n> Thank you for reviewing! PSA new patchset.\n>\n> > > Additionally, I added a checking functions in 0003\n> > > According to pg_resetwal and other functions, the length of\n> > CHECKPOINT_SHUTDOWN\n> > > record seems (SizeOfXLogRecord + SizeOfXLogRecordDataHeaderShort +\n> > sizeof(CheckPoint)).\n> > > Therefore, the function ensures that the difference between current insert\n> > position\n> > > and confirmed_flush_lsn is less than (above + page header).\n\nI think that this test should be different when just checking for the\nprerequirements (live_check / --check) compared to actually doing the upgrade,\nas it's almost guaranteed that the slots won't have sent everything when the\nsource server is up and running.\n\nMaybe simply check that all logical slots are currently active when running the\nlive check, so at least there's a good chance that they will still be at\nshutdown, and will therefore send all the data to the subscribers? Having a\nregression tests for that scenario would also be a good idea. Having an\nuncommitted write transaction should be enough to cover it.\n\n\n",
"msg_date": "Thu, 20 Apr 2023 14:53:59 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, 20 Apr 2023 at 11:01, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thank you for reviewing! PSA new patchset.\n>\n> > > Additionally, I added a checking functions in 0003\n> > > According to pg_resetwal and other functions, the length of\n> > CHECKPOINT_SHUTDOWN\n> > > record seems (SizeOfXLogRecord + SizeOfXLogRecordDataHeaderShort +\n> > sizeof(CheckPoint)).\n> > > Therefore, the function ensures that the difference between current insert\n> > position\n> > > and confirmed_flush_lsn is less than (above + page header).\n> >\n> > Logical replication slots can be created only if wal_level >= logical,\n> > currently we do not have any check to see if wal_level >= logical if\n> > \"--include-logical-replication-slots\" option is specified. If\n> > include-logical-replication-slots is specified with pg_upgrade, we\n> > will be creating replication slots after a lot of steps like\n> > performing prechecks, analyzing, freezing, deleting, restoring,\n> > copying, setting related objects and then create replication slot,\n> > where we will be erroring out after a lot of time(Many cases\n> > pg_upgrade takes a lot of hours to perform these operations). I feel\n> > it would be better to add a check in the beginning itself somewhere in\n> > check_new_cluster to see if wal_level is set appropriately in case of\n> > include-logical-replication-slot option to detect and throw an error\n> > early itself.\n>\n> I see your point. Moreover, I think max_replication_slots != 0 must be also checked.\n> I added a checking function and related test in 0001.\n\nThanks for the updated patch.\nA Few comments:\n1) if the verbose option is enabled, we should print the new slot\ninformation, we could add a function print_slot_infos similar to\nprint_rel_infos which could print slot name and two_phase is enabled\nor not.\n+ end_progress_output();\n+ check_ok();\n+\n+ /* update new_cluster info now that we have objects in the databases */\n+ get_db_and_rel_infos(&new_cluster);\n+}\n\n2) Since we will be using this option with pg_upgrade, should we use\nthis along with the --binary-upgrade option only?\n+ if (dopt.logical_slots_only && dopt.dataOnly)\n+ pg_fatal(\"options --logical-replication-slots-only and\n-a/--data-only cannot be used together\");\n+ if (dopt.logical_slots_only && dopt.schemaOnly)\n+ pg_fatal(\"options --logical-replication-slots-only and\n-s/--schema-only cannot be used together\");\n\n3) Since it two_phase is boolean, can we use bool data type instead of string:\n+ slotinfo[i].dobj.objType = DO_LOGICAL_REPLICATION_SLOT;\n+\n+ slotinfo[i].dobj.catId.tableoid = InvalidOid;\n+ slotinfo[i].dobj.catId.oid = InvalidOid;\n+ AssignDumpId(&slotinfo[i].dobj);\n+\n+ slotinfo[i].dobj.name = pg_strdup(PQgetvalue(res, i,\ni_slotname));\n+\n+ slotinfo[i].plugin = pg_strdup(PQgetvalue(res, i, i_plugin));\n+ slotinfo[i].twophase = pg_strdup(PQgetvalue(res, i,\ni_twophase));\n\nWe can change it to something like:\nif (strcmp(PQgetvalue(res, i, i_twophase), \"t\") == 0)\nslotinfo[i].twophase = true;\nelse\nslotinfo[i].twophase = false;\n\n4) The comments are inconsistent, some have termination characters and\nsome don't. 
We can keep it consistent:\n+# Can be changed to test the other modes.\n+my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n+\n+# Initialize old publisher node\n+my $old_publisher = PostgreSQL::Test::Cluster->new('old_publisher');\n+$old_publisher->init(allows_streaming => 'logical');\n+$old_publisher->start;\n+\n+# Initialize subscriber node\n+my $subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n+$subscriber->init(allows_streaming => 'logical');\n+$subscriber->start;\n+\n+# Schema setup\n+$old_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tbl AS SELECT generate_series(1,10) AS a\");\n+$subscriber->safe_psql('postgres', \"CREATE TABLE tbl (a int)\");\n+\n+# Initialize new publisher node\n\n5) should we use free instead of pfree as used in other function like\ndumpForeignServer:\n+ appendPQExpBuffer(query, \");\");\n+\n+ ArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\n+ ARCHIVE_OPTS(.tag = slotname,\n+\n.description = \"REPLICATION SLOT\",\n+\n.section = SECTION_POST_DATA,\n+\n.createStmt = query->data));\n+\n+ pfree(slotname);\n+ destroyPQExpBuffer(query);\n+ }\n+}\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 20 Apr 2023 17:11:17 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Julien,\n\nThank you for giving comments! PSA new version.\n\n> I think that this test should be different when just checking for the\n> prerequirements (live_check / --check) compared to actually doing the upgrade,\n> as it's almost guaranteed that the slots won't have sent everything when the\n> source server is up and running.\n\nHmm, you assumed that the user application is still running and data is coming\ncontinuously when doing --check, right? Personally I have thought that the\n--check operation is executed just before the actual upgrading, therefore I'm not\nsure your assumption is real problem. And I could not find any checks which their\ncontents are changed based on the --check option.\n\nAnyway, I included your opinion in 0004 patch. We can ask some other reviewers\nabout the necessity.\n\n> Maybe simply check that all logical slots are currently active when running the\n> live check,\n\nYeah, if we support the case checking pg_replication_slots.active may be sufficient.\nActually this cannot handle the case that pg_create_logical_replication_slot()\nis executed just before upgrading, but I'm not sure it should be.\n\n> so at least there's a good chance that they will still be at\n> shutdown, and will therefore send all the data to the subscribers? Having a\n> regression tests for that scenario would also be a good idea. Having an\n> uncommitted write transaction should be enough to cover it.\n\nI think background_psql() can be used for the purpose. Before doing pg_upgrade\n--check, a transaction is opened and kept. It means that the confirmed_flush has\nbeen not reached to the current WAL position yet, but the checking says OK\nbecause all slots are active.\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 24 Apr 2023 12:03:05 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for giving comments. New patchset can be available in [1].\r\n\r\n> Thanks for the updated patch.\r\n> A Few comments:\r\n> 1) if the verbose option is enabled, we should print the new slot\r\n> information, we could add a function print_slot_infos similar to\r\n> print_rel_infos which could print slot name and two_phase is enabled\r\n> or not.\r\n> + end_progress_output();\r\n> + check_ok();\r\n> +\r\n> + /* update new_cluster info now that we have objects in the databases */\r\n> + get_db_and_rel_infos(&new_cluster);\r\n> +}\r\n\r\nI was not sure we should add the print because any other objects like publication\r\nand subscription seem not to be printed, but added.\r\nWhile implementing it, I thought that calling get_db_and_rel_infos() again\r\nwas not efficient because free_db_and_rel_infos() will be called at that time. So I added\r\nget_logical_slot_infos() instead.\r\nAdditionally, I added a check for check_new_cluster_is_empty() for making ensure that\r\nthere are no logical slots on new node.\r\n\r\n> 2) Since we will be using this option with pg_upgrade, should we use\r\n> this along with the --binary-upgrade option only?\r\n> + if (dopt.logical_slots_only && dopt.dataOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -a/--data-only cannot be used together\");\r\n> + if (dopt.logical_slots_only && dopt.schemaOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -s/--schema-only cannot be used together\");\r\n\r\nRight, I added the check.\r\n\r\n> 3) Since it two_phase is boolean, can we use bool data type instead of string:\r\n> + slotinfo[i].dobj.objType = DO_LOGICAL_REPLICATION_SLOT;\r\n> +\r\n> + slotinfo[i].dobj.catId.tableoid = InvalidOid;\r\n> + slotinfo[i].dobj.catId.oid = InvalidOid;\r\n> + AssignDumpId(&slotinfo[i].dobj);\r\n> +\r\n> + slotinfo[i].dobj.name = pg_strdup(PQgetvalue(res, i,\r\n> i_slotname));\r\n> +\r\n> + slotinfo[i].plugin = pg_strdup(PQgetvalue(res, i, i_plugin));\r\n> + slotinfo[i].twophase = pg_strdup(PQgetvalue(res, i,\r\n> i_twophase));\r\n> \r\n> We can change it to something like:\r\n> if (strcmp(PQgetvalue(res, i, i_twophase), \"t\") == 0)\r\n> slotinfo[i].twophase = true;\r\n> else\r\n> slotinfo[i].twophase = false;\r\n\r\nSeems right, fixed.\r\n\r\n> 4) The comments are inconsistent, some have termination characters and\r\n> some don't. 
We can keep it consistent:\r\n> +# Can be changed to test the other modes.\r\n> +my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\r\n> +\r\n> +# Initialize old publisher node\r\n> +my $old_publisher = PostgreSQL::Test::Cluster->new('old_publisher');\r\n> +$old_publisher->init(allows_streaming => 'logical');\r\n> +$old_publisher->start;\r\n> +\r\n> +# Initialize subscriber node\r\n> +my $subscriber = PostgreSQL::Test::Cluster->new('subscriber');\r\n> +$subscriber->init(allows_streaming => 'logical');\r\n> +$subscriber->start;\r\n> +\r\n> +# Schema setup\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"CREATE TABLE tbl AS SELECT generate_series(1,10) AS a\");\r\n> +$subscriber->safe_psql('postgres', \"CREATE TABLE tbl (a int)\");\r\n> +\r\n> +# Initialize new publisher node\r\n\r\nRemoved all termination.\r\n\r\n> 5) should we use free instead of pfree as used in other function like\r\n> dumpForeignServer:\r\n> + appendPQExpBuffer(query, \");\");\r\n> +\r\n> + ArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\r\n> + ARCHIVE_OPTS(.tag = slotname,\r\n> +\r\n> .description = \"REPLICATION SLOT\",\r\n> +\r\n> .section = SECTION_POST_DATA,\r\n> +\r\n> .createStmt = query->data));\r\n> +\r\n> + pfree(slotname);\r\n> + destroyPQExpBuffer(query);\r\n> + }\r\n> +}\r\n\r\nActually it works because for the client, the pfree() is just a wrapper of pg_free(),\r\nbut I agreed that it should be fixed. So did that.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58669413A5A2E3E50BD0B7E7F5679%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 24 Apr 2023 12:04:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi,\n\nOn Mon, Apr 24, 2023 at 12:03:05PM +0000, Hayato Kuroda (Fujitsu) wrote:\n>\n> > I think that this test should be different when just checking for the\n> > prerequirements (live_check / --check) compared to actually doing the upgrade,\n> > as it's almost guaranteed that the slots won't have sent everything when the\n> > source server is up and running.\n>\n> Hmm, you assumed that the user application is still running and data is coming\n> continuously when doing --check, right? Personally I have thought that the\n> --check operation is executed just before the actual upgrading, therefore I'm not\n> sure your assumption is real problem.\n\nThe checks are always executed before doing the upgrade, to prevent it if\nsomething isn't right. But you can also just do those check on a live\ninstance, so you can get a somewhat strong guarantee that the upgrade operation\nwill succeed before needing to stop all services and shut down postgres. It's\nbasically free to run those checks and can avoid an unnecessary service\ninterruption so I'm pretty sure people use it quite often.\n\n> And I could not find any checks which their\n> contents are changed based on the --check option.\n\nYes, because other checks are things that you can actually fix when the\ninstance is running, like getting rid of tables with oids. The only semi\nexception if for 2pc which can be continuously prepared and committed, but if\nyou hit that problem at least you know you have to stop cleanly your XA-like\napplication and make sure there are no 2pc left.\n\n> Yeah, if we support the case checking pg_replication_slots.active may be sufficient.\n> Actually this cannot handle the case that pg_create_logical_replication_slot()\n> is executed just before upgrading, but I'm not sure it should be.\n\nIt shouldn't, same for any of the other checks. The live check can't predict\nthe future, it just tells you if there's anything that would prevent the\nupgrade *at the moment it's executed*.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 22:22:56 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hackers,\n\n> Thank you for giving comments! PSA new version.\n\nNote that due to the current version could not work well on FreeBSD, maybe\nbecause of the timing issue[1]. I'm now analyzing the reason and will post\nthe fixed version.\n\n[1]: https://cirrus-ci.com/build/4676441267240960\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 25 Apr 2023 13:06:05 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On 24.04.23 14:03, Hayato Kuroda (Fujitsu) wrote:\n>> so at least there's a good chance that they will still be at\n>> shutdown, and will therefore send all the data to the subscribers? Having a\n>> regression tests for that scenario would also be a good idea. Having an\n>> uncommitted write transaction should be enough to cover it.\n> \n> I think background_psql() can be used for the purpose. Before doing pg_upgrade\n> --check, a transaction is opened and kept. It means that the confirmed_flush has\n> been not reached to the current WAL position yet, but the checking says OK\n> because all slots are active.\n\nA suggestion: You could write some/most tests against test_decoding \nrather than the publication/subscription system. That way, you can \navoid many timing issues in the tests and you can check more exactly \nthat the slots produce the output you want. This would also help ensure \nthat this new facility works for other logical decoding output plugins \nbesides the built-in one.\n\n\n\n",
"msg_date": "Wed, 26 Apr 2023 09:18:33 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> A suggestion: You could write some/most tests against test_decoding\r\n> rather than the publication/subscription system. That way, you can\r\n> avoid many timing issues in the tests and you can check more exactly\r\n> that the slots produce the output you want. This would also help ensure\r\n> that this new facility works for other logical decoding output plugins\r\n> besides the built-in one.\r\n\r\nGood point. I think almost tests except --check part can be rewritten.\r\nPSA new patchset.\r\n\r\nAdditionally, I fixed followings:\r\n\r\n- Added initialization for slot_arr.*. This is needed to check whether \r\n the entry has already been allocated, in get_logical_slot_infos().\r\n Previously double-free was occurred in some platform.\r\n- fixed condition in get_logical_slot_infos()\r\n- Changed the expected size of page header to longer one(SizeOfXLogLongPHD).\r\n If the WAL page is the first one in the WAL segment file, the long header seems\r\n to be used.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 26 Apr 2023 12:00:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On 2023-Apr-07, Julien Rouhaud wrote:\n\n> That being said, I have a hard time believing that we could actually preserve\n> physical replication slots. I don't think that pg_upgrade final state is fully\n> reproducible: not all object oids are preserved, and the various pg_restore\n> are run in parallel so you're very likely to end up with small physical\n> differences that would be incompatible with physical replication. Even if we\n> could make it totally reproducible, it would probably be at the cost of making\n> pg_upgrade orders of magnitude slower. And since many people are already\n> complaining that it's too slow, that doesn't seem like something we would want.\n\nA point on preserving physical replication slots: because we change WAL\nformat from one major version to the next (adding new messages or\nchanging format for other messages), we can't currently rely on physical\nslots working across different major versions.\n\nSo IMO, for now don't bother with physical replication slot\npreservation, but do keep the option name as specific to logical slots.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 2 May 2023 12:55:18 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi,\n\nOn Tue, May 02, 2023 at 12:55:18PM +0200, Alvaro Herrera wrote:\n> On 2023-Apr-07, Julien Rouhaud wrote:\n>\n> > That being said, I have a hard time believing that we could actually preserve\n> > physical replication slots. I don't think that pg_upgrade final state is fully\n> > reproducible: not all object oids are preserved, and the various pg_restore\n> > are run in parallel so you're very likely to end up with small physical\n> > differences that would be incompatible with physical replication. Even if we\n> > could make it totally reproducible, it would probably be at the cost of making\n> > pg_upgrade orders of magnitude slower. And since many people are already\n> > complaining that it's too slow, that doesn't seem like something we would want.\n>\n> A point on preserving physical replication slots: because we change WAL\n> format from one major version to the next (adding new messages or\n> changing format for other messages), we can't currently rely on physical\n> slots working across different major versions.\n\nI don't think anyone suggested to do physical replication over different major\nversions. My understanding was that it would be used to pg_upgrade a\n\"physical cluster\" (e.g. a primary and physical standby server) at the same\ntime, and then simply starting them up again would lead to a working physical\nreplication on the new version.\n\nI guess one could try to keep using the slots for other needs (PITR backup with\npg_receivewal or something similar), and then you would indeed have to be aware\nthat you won't be able to do anything with the new WAL records until you do a\nfresh base backup, but that's a problem that you can already face after a\nnormal pg_upgrade (although in most cases it's probably quite obvious for now\nas the timeline isn't preserved).\n\n\n",
"msg_date": "Tue, 2 May 2023 19:43:53 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, 2 May 2023, 19:43 Julien Rouhaud, <[email protected]> wrote:\n\n> Hi,\n>\n> On Tue, May 02, 2023 at 12:55:18PM +0200, Alvaro Herrera wrote:\n> > On 2023-Apr-07, Julien Rouhaud wrote:\n> >\n> > > That being said, I have a hard time believing that we could actually\n> preserve\n> > > physical replication slots. I don't think that pg_upgrade final state\n> is fully\n> > > reproducible: not all object oids are preserved, and the various\n> pg_restore\n> > > are run in parallel so you're very likely to end up with small physical\n> > > differences that would be incompatible with physical replication.\n> Even if we\n> > > could make it totally reproducible, it would probably be at the cost\n> of making\n> > > pg_upgrade orders of magnitude slower. And since many people are\n> already\n> > > complaining that it's too slow, that doesn't seem like something we\n> would want.\n> >\n> > A point on preserving physical replication slots: because we change WAL\n> > format from one major version to the next (adding new messages or\n> > changing format for other messages), we can't currently rely on physical\n> > slots working across different major versions.\n>\n> I don't think anyone suggested to do physical replication over different\n> major\n> versions. My understanding was that it would be used to pg_upgrade a\n> \"physical cluster\" (e.g. a primary and physical standby server) at the same\n> time, and then simply starting them up again would lead to a working\n> physical\n> replication on the new version.\n>\n> I guess one could try to keep using the slots for other needs (PITR backup\n> with\n> pg_receivewal or something similar), and then you would indeed have to be\n> aware\n> that you won't be able to do anything with the new WAL records until you\n> do a\n> fresh base backup, but that's a problem that you can already face after a\n> normal pg_upgrade (although in most cases it's probably quite obvious for\n> now\n> as the timeline isn't preserved).\n>\n\nif what you meant is that the slot may have to send a record generated by\nan older major version, then unless I'm missing something the same\nrestriction could be added to such a feature as what's being discussed in\nthis thread for the logical replication slots. so only a final shutdown\ncheckpoint record would be present after the flushed WAL position. it may\nbe possible to work around that, if there weren't all the other problems I\nmentioned.\n\n>\n\nOn Tue, 2 May 2023, 19:43 Julien Rouhaud, <[email protected]> wrote:Hi,\n\nOn Tue, May 02, 2023 at 12:55:18PM +0200, Alvaro Herrera wrote:\n> On 2023-Apr-07, Julien Rouhaud wrote:\n>\n> > That being said, I have a hard time believing that we could actually preserve\n> > physical replication slots. I don't think that pg_upgrade final state is fully\n> > reproducible: not all object oids are preserved, and the various pg_restore\n> > are run in parallel so you're very likely to end up with small physical\n> > differences that would be incompatible with physical replication. Even if we\n> > could make it totally reproducible, it would probably be at the cost of making\n> > pg_upgrade orders of magnitude slower. 
And since many people are already\n> > complaining that it's too slow, that doesn't seem like something we would want.\n>\n> A point on preserving physical replication slots: because we change WAL\n> format from one major version to the next (adding new messages or\n> changing format for other messages), we can't currently rely on physical\n> slots working across different major versions.\n\nI don't think anyone suggested to do physical replication over different major\nversions. My understanding was that it would be used to pg_upgrade a\n\"physical cluster\" (e.g. a primary and physical standby server) at the same\ntime, and then simply starting them up again would lead to a working physical\nreplication on the new version.\n\nI guess one could try to keep using the slots for other needs (PITR backup with\npg_receivewal or something similar), and then you would indeed have to be aware\nthat you won't be able to do anything with the new WAL records until you do a\nfresh base backup, but that's a problem that you can already face after a\nnormal pg_upgrade (although in most cases it's probably quite obvious for now\nas the timeline isn't preserved).if what you meant is that the slot may have to send a record generated by an older major version, then unless I'm missing something the same restriction could be added to such a feature as what's being discussed in this thread for the logical replication slots. so only a final shutdown checkpoint record would be present after the flushed WAL position. it may be possible to work around that, if there weren't all the other problems I mentioned.",
"msg_date": "Tue, 2 May 2023 20:02:49 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On 2023-May-02, Julien Rouhaud wrote:\n\n> On Tue, May 02, 2023 at 12:55:18PM +0200, Alvaro Herrera wrote:\n\n> > A point on preserving physical replication slots: because we change WAL\n> > format from one major version to the next (adding new messages or\n> > changing format for other messages), we can't currently rely on physical\n> > slots working across different major versions.\n> \n> I don't think anyone suggested to do physical replication over different major\n> versions.\n\nThey didn't, but a man can dream. (Anyway, we agree on it not working\nfor various reasons.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No es bueno caminar con un hombre muerto\"\n\n\n",
"msg_date": "Wed, 3 May 2023 12:40:34 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Alvaro,\r\n\r\nThanks for giving suggestion!\r\n\r\n> A point on preserving physical replication slots: because we change WAL\r\n> format from one major version to the next (adding new messages or\r\n> changing format for other messages), we can't currently rely on physical\r\n> slots working across different major versions.\r\n> \r\n> So IMO, for now don't bother with physical replication slot\r\n> preservation, but do keep the option name as specific to logical slots.\r\n\r\nBased on the Julien's advice, We have already decided not to include physical\r\nslots in this patch and the option name has been changed.\r\nI think you said explicitly that we are going correct way. Thanks!\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 4 May 2023 04:03:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san. Here are some review comments for the v10-0001 patch.\n\n======\n\nGeneral.\n\n1. pg_dump option is documented to the user.\n\nI'm not sure about exposing the new pg_dump\n--logical-replication-slots-only option to the user.\n\nI thought this pg_dump option was intended only to be called\n*internally* by the pg_upgrade.\nBut, this patch is also documenting the new option for the user (in\ncase they want to call it independently?)\n\nMaybe exposing it is OK, but if you do that then I thought perhaps\nthere should also be some additional pg_dump tests just for this\noption (i.e. tested independently of the pg_upgrade)\n\n======\nCommit message\n\n2.\nFor pg_upgrade, when '--include-logical-replication-slots' is\nspecified, it executes\npg_dump with the new \"--logical-replication-slots-only\" option and\nrestores from the\ndump. Apart from restoring schema, pg_resetwal must not be called\nafter restoring\nreplication slots. This is because the command discards WAL files and\nstarts from a\nnew segment, even if they are required by replication slots. This\nleads to an ERROR:\n\"requested WAL segment XXX has already been removed\". To avoid this,\nreplication slots\nare restored at a different time than other objects, after running pg_resetwal.\n\n~~\n\nThe \"Apart from\" sentence maybe could do with some rewording. I\nnoticed there is a code comment (below fragment) that says the same as\nthis, but more clearly. Maybe it is better to use that code-comment\nwording in the comment message.\n\n+ * XXX We cannot dump replication slots at the same time as the schema\n+ * dump because we need to separate the timing of restoring\n+ * replication slots and other objects. Replication slots, in\n+ * particular, should not be restored before executing the pg_resetwal\n+ * command because it will remove WALs that are required by the slots.\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n3. main\n\n+ if (dopt.logical_slots_only && !dopt.binary_upgrade)\n+ pg_fatal(\"options --logical-replication-slots-only requires option\n--binary-upgrade\");\n+\n+ if (dopt.logical_slots_only && dopt.dataOnly)\n+ pg_fatal(\"options --logical-replication-slots-only and\n-a/--data-only cannot be used together\");\n+ if (dopt.logical_slots_only && dopt.schemaOnly)\n+ pg_fatal(\"options --logical-replication-slots-only and\n-s/--schema-only cannot be used together\");\n+\n\nConsider if it might be simpler to combine together all those\ndopt.logical_slots_only checks.\n\nSUGGESTION\n\nif (dopt.logical_slots_only)\n{\n if (!dopt.binary_upgrade)\n pg_fatal(\"options --logical-replication-slots-only requires\noption --binary-upgrade\");\n\n if (dopt.dataOnly)\n pg_fatal(\"options --logical-replication-slots-only and\n-a/--data-only cannot be used together\");\n if (dopt.schemaOnly)\n pg_fatal(\"options --logical-replication-slots-only and\n-s/--schema-only cannot be used together\");\n}\n\n~~~\n\n4. getLogicalReplicationSlots\n\n+ /* Check whether we should dump or not */\n+ if (fout->remoteVersion < 160000 || !dopt->logical_slots_only)\n+ return;\n\nI'm not sure if this check is necessary. Given the way this function\nis called, is it possible for this check to fail? Maybe that quick\nexit would be better code as an Assert?\n\n~~~\n\n5. dumpLogicalReplicationSlot\n\n+dumpLogicalReplicationSlot(Archive *fout,\n+ const LogicalReplicationSlotInfo *slotinfo)\n+{\n+ DumpOptions *dopt = fout->dopt;\n+\n+ if (!dopt->logical_slots_only)\n+ return;\n\n(Similar to the previous comment). 
Is it even possible to arrive here\nwhen dopt->logical_slots_only is false. Maybe that quick exit would be\nbetter coded as an Assert?\n\n~\n\n6.\n+ PQExpBuffer query = createPQExpBuffer();\n+ char *slotname = pg_strdup(slotinfo->dobj.name);\n\nI wondered if it was really necessary to strdup/free this slotname.\ne.g. And if it is, then why don't you do this for the slotinfo->plugin\nfield?\n\n======\nsrc/bin/pg_upgrade/check.c\n\n7. check_and_dump_old_cluster\n\n /* Extract a list of databases and tables from the old cluster */\n get_db_and_rel_infos(&old_cluster);\n+ get_logical_slot_infos(&old_cluster);\n\nIs it correct to associate this new call with that existing comment\nabout \"databases and tables\"?\n\n~~~\n\n8. check_new_cluster\n\n@@ -188,6 +190,7 @@ void\n check_new_cluster(void)\n {\n get_db_and_rel_infos(&new_cluster);\n+ get_logical_slot_infos(&new_cluster);\n\n check_new_cluster_is_empty();\n\n@@ -210,6 +213,9 @@ check_new_cluster(void)\n check_for_prepared_transactions(&new_cluster);\n\n check_for_new_tablespace_dir(&new_cluster);\n+\n+ if (user_opts.include_logical_slots)\n+ check_for_parameter_settings(&new_cluster);\n\nCan the get_logical_slot_infos() be done later, guarded by that the\nsame condition if (user_opts.include_logical_slots)?\n\n~~~\n\n9. check_new_cluster_is_empty\n\n+ * If --include-logical-replication-slots is required, check the\n+ * existing of slots\n+ */\n\nDid you mean to say \"check the existence of slots\"?\n\n~~~\n\n10. check_for_parameter_settings\n\n+ if (strcmp(wal_level, \"logical\") != 0)\n+ pg_fatal(\"wal_level must be \\\"logical\\\", but set to \\\"%s\\\"\",\n+ wal_level);\n\n/but set to/but is set to/\n\n\n======\nsrc/bin/pg_upgrade/info.c\n\n11. get_db_and_rel_infos\n\n+ {\n get_rel_infos(cluster, &cluster->dbarr.dbs[dbnum]);\n\n+ /*\n+ * Additionally, slot_arr must be initialized because they will be\n+ * checked later.\n+ */\n+ cluster->dbarr.dbs[dbnum].slot_arr.nslots = 0;\n+ cluster->dbarr.dbs[dbnum].slot_arr.slots = NULL;\n+ }\n\n11a.\nI think probably it would have been easier to just use 'pg_malloc0'\ninstead of 'pg_malloc' in the get_db_infos, then this code would not\nbe necessary.\n\n~\n\n11b.\nBTW, shouldn't this function also be calling free_logical_slot_infos()\ntoo? That will also have the same effect (initializing the slot_arr)\nbut without having to change anything else.\n\n~~~\n\n12. get_logical_slot_infos\n+/*\n+ * Higher level routine to generate LogicalSlotInfoArr for all databases.\n+ */\n+void\n+get_logical_slot_infos(ClusterInfo *cluster)\n\nTo be consistent with the other nearby function headers it should have\nanother line saying just get_logical_slot_infos().\n\n~~~\n\n13. get_logical_slot_infos\n\n+void\n+get_logical_slot_infos(ClusterInfo *cluster)\n+{\n+ int dbnum;\n+\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ {\n+ if (cluster->dbarr.dbs[dbnum].slot_arr.slots)\n+ free_logical_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\n+\n+ get_logical_slot_infos_per_db(cluster, &cluster->dbarr.dbs[dbnum]);\n+ }\n+\n+ if (cluster == &old_cluster)\n+ pg_log(PG_VERBOSE, \"\\nsource databases:\");\n+ else\n+ pg_log(PG_VERBOSE, \"\\ntarget databases:\");\n+\n+ if (log_opts.verbose)\n+ {\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ {\n+ pg_log(PG_VERBOSE, \"Database: %s\", cluster->dbarr.dbs[dbnum].db_name);\n+ print_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\n+ }\n+ }\n+}\n\nI didn't see why there are 2 loops exactly the same. 
I think with some\nminor refactoring these can both be done in the same loop can't they?\n\nSUGGESTION 1:\n\nif (cluster == &old_cluster)\n pg_log(PG_VERBOSE, \"\\nsource databases:\");\nelse\n pg_log(PG_VERBOSE, \"\\ntarget databases:\");\n\nfor (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n{\n if (cluster->dbarr.dbs[dbnum].slot_arr.slots)\n free_logical_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\n\n get_logical_slot_infos_per_db(cluster, &cluster->dbarr.dbs[dbnum]);\n\n if (log_opts.verbose)\n {\n pg_log(PG_VERBOSE, \"Database: %s\", cluster->dbarr.dbs[dbnum].db_name);\n print_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\n }\n}\n\n~\n\nI expected it could be simplified further still by using some variables\n\nSUGGESTION 2:\n\nif (cluster == &old_cluster)\n pg_log(PG_VERBOSE, \"\\nsource databases:\");\nelse\n pg_log(PG_VERBOSE, \"\\ntarget databases:\");\n\nfor (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n{\nDbInfo *pDbInfo = &cluster->dbarr.dbs[dbnum];\n if (pDbInfo->slot_arr.slots)\n free_logical_slot_infos(&pDbInfo->slot_arr);\n\n get_logical_slot_infos_per_db(cluster, pDbInfo);\n\n if (log_opts.verbose)\n {\n pg_log(PG_VERBOSE, \"Database: %s\", pDbInfo->db_name);\n print_slot_infos(&pDbInfo->slot_arr);\n }\n}\n\n~~~\n\n14. get_logical_slot_infos_per_db\n\n+ char query[QUERY_ALLOC];\n+\n+ query[0] = '\\0'; /* initialize query string to empty */\n+\n+ snprintf(query + strlen(query), sizeof(query) - strlen(query),\n+ \"SELECT slot_name, plugin, two_phase \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE database = current_database() AND temporary = false \"\n+ \"AND wal_status IN ('reserved', 'extended');\");\n\nI didn't understand the purpose of those calls to 'strlen(query)'\nsince the string was initialised to empty-string immediately above.\n\n~~~\n\n15.\n+static void\n+print_slot_infos(LogicalSlotInfoArr *slot_arr)\n+{\n+ int slotnum;\n+\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ pg_log(PG_VERBOSE, \"slotname: %s: plugin: %s: two_phase %d\",\n+ slot_arr->slots[slotnum].slotname,\n+ slot_arr->slots[slotnum].plugin,\n+ slot_arr->slots[slotnum].two_phase);\n+}\n\nIMO those colons don't make sense.\n\nBEFORE\n\"slotname: %s: plugin: %s: two_phase %d\"\n\nSUGGESTION\n\"slotname: %s, plugin: %s, two_phase: %d\"\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n16. LogicalSlotInfo\n\n+typedef struct\n+{\n+ char *slotname; /* slot name */\n+ char *plugin; /* plugin */\n+ bool two_phase; /* Can the slot decode 2PC? */\n+} LogicalSlotInfo;\n\nThe RelInfo had a comment for the typedef struct, so I think the\nLogicalSlotInfo struct also should have a comment.\n\n~~~\n\n17. DbInfo\n\n RelInfoArr rel_arr; /* array of all user relinfos */\n+ LogicalSlotInfoArr slot_arr; /* array of all logicalslotinfos */\n } DbInfo;\n\nShould the comment say \"LogicalSlotInfo\" instead of \"logicalslotinfos\"?\n\n======\n.../t/003_logical_replication_slots.pl\n\n18. RESULTS\n\nI run this by 'make check' in the src/bin/pg_upgrade folder.\n\nFor some reason, the test does not work for me. The results I get are:\n\n# +++ tap check in src/bin/pg_upgrade +++\nt/001_basic.pl ...................... ok\nt/002_pg_upgrade.pl ................. ok\nt/003_logical_replication_slots.pl .. 3/? # Tests were run but no plan\nwas declared and done_testing() was not seen.\nt/003_logical_replication_slots.pl .. 
Dubious, test returned 29 (wstat\n7424, 0x1d00)\nAll 4 subtests passed\n\nTest Summary Report\n-------------------\nt/003_logical_replication_slots.pl (Wstat: 7424 Tests: 4 Failed: 0)\n Non-zero exit status: 29\n Parse errors: No plan found in TAP output\nFiles=3, Tests=27, 128 wallclock secs ( 0.04 usr 0.01 sys + 18.02\ncusr 6.06 csys = 24.13 CPU)\nResult: FAIL\nmake: *** [check] Error 1\n\n~\n\nAnd the log file\n(tmp_check/log/003_logical_replication_slots_old_node.log) shows the\nfollowing ERROR:\n\n2023-05-09 12:19:25.330 AEST [32572] 003_logical_replication_slots.pl\nLOG: statement: SELECT\npg_create_logical_replication_slot('test_slot', 'test_decoding',\nfalse, true);\n2023-05-09 12:19:25.331 AEST [32572] 003_logical_replication_slots.pl\nERROR: could not access file \"test_decoding\": No such file or\ndirectory\n2023-05-09 12:19:25.331 AEST [32572] 003_logical_replication_slots.pl\nSTATEMENT: SELECT pg_create_logical_replication_slot('test_slot',\n'test_decoding', false, true);\n2023-05-09 12:19:25.335 AEST [32564] LOG: received immediate shutdown request\n2023-05-09 12:19:25.337 AEST [32564] LOG: database system is shut down\n\n~\n\nIs it a bug? Or, if I am doing something wrong please let me know how\nto run the test.\n\n~~~\n\n19.\n+# Clean up\n+rmtree($new_node->data_dir . \"/pg_upgrade_output.d\");\n+$new_node->append_conf('postgresql.conf', \"wal_level = 'logical'\");\n+$new_node->append_conf('postgresql.conf', \"max_replication_slots = 0\");\n\nI think the last 2 lines are not \"clean up\". They are preparations for\nthe subsequent test, so maybe they should be commented as such.\n\n~~~\n\n20.\n+# Clean up\n+rmtree($new_node->data_dir . \"/pg_upgrade_output.d\");\n+$new_node->append_conf('postgresql.conf', \"max_replication_slots = 10\");\n\nI think the last line is not \"clean up\". It is preparation for the\nsubsequent test, so maybe it should be commented as such.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 9 May 2023 13:09:49 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
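For reference, the slot-selection query quoted in review comment #14 can also be run standalone against the old cluster to preview which slots the patch would dump. This is the query as it appears in the v10 patch, reproduced here only for readability:

    SELECT slot_name, plugin, two_phase
    FROM pg_catalog.pg_replication_slots
    WHERE database = current_database() AND temporary = false
      AND wal_status IN ('reserved', 'extended');

Note that it is scoped to current_database(), which is why get_logical_slot_infos() has to iterate over every database in the cluster.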
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> \r\n> General.\r\n> \r\n> 1. pg_dump option is documented to the user.\r\n> \r\n> I'm not sure about exposing the new pg_dump\r\n> --logical-replication-slots-only option to the user.\r\n> \r\n> I thought this pg_dump option was intended only to be called\r\n> *internally* by the pg_upgrade.\r\n> But, this patch is also documenting the new option for the user (in\r\n> case they want to call it independently?)\r\n> \r\n> Maybe exposing it is OK, but if you do that then I thought perhaps\r\n> there should also be some additional pg_dump tests just for this\r\n> option (i.e. tested independently of the pg_upgrade)\r\n\r\nRight, I have written the document for the moment, but it should not\r\nIf it is not exposed. Removed from the doc.\r\n\r\n> Commit message\r\n> \r\n> 2.\r\n> For pg_upgrade, when '--include-logical-replication-slots' is\r\n> specified, it executes\r\n> pg_dump with the new \"--logical-replication-slots-only\" option and\r\n> restores from the\r\n> dump. Apart from restoring schema, pg_resetwal must not be called\r\n> after restoring\r\n> replication slots. This is because the command discards WAL files and\r\n> starts from a\r\n> new segment, even if they are required by replication slots. This\r\n> leads to an ERROR:\r\n> \"requested WAL segment XXX has already been removed\". To avoid this,\r\n> replication slots\r\n> are restored at a different time than other objects, after running pg_resetwal.\r\n> \r\n> ~~\r\n> \r\n> The \"Apart from\" sentence maybe could do with some rewording. I\r\n> noticed there is a code comment (below fragment) that says the same as\r\n> this, but more clearly. Maybe it is better to use that code-comment\r\n> wording in the comment message.\r\n> \r\n> + * XXX We cannot dump replication slots at the same time as the schema\r\n> + * dump because we need to separate the timing of restoring\r\n> + * replication slots and other objects. Replication slots, in\r\n> + * particular, should not be restored before executing the pg_resetwal\r\n> + * command because it will remove WALs that are required by the slots.\r\n\r\nChanged.\r\n\r\n> src/bin/pg_dump/pg_dump.c\r\n> \r\n> 3. main\r\n> \r\n> + if (dopt.logical_slots_only && !dopt.binary_upgrade)\r\n> + pg_fatal(\"options --logical-replication-slots-only requires option\r\n> --binary-upgrade\");\r\n> +\r\n> + if (dopt.logical_slots_only && dopt.dataOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -a/--data-only cannot be used together\");\r\n> + if (dopt.logical_slots_only && dopt.schemaOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -s/--schema-only cannot be used together\");\r\n> +\r\n> \r\n> Consider if it might be simpler to combine together all those\r\n> dopt.logical_slots_only checks.\r\n> \r\n> SUGGESTION\r\n> \r\n> if (dopt.logical_slots_only)\r\n> {\r\n> if (!dopt.binary_upgrade)\r\n> pg_fatal(\"options --logical-replication-slots-only requires\r\n> option --binary-upgrade\");\r\n> \r\n> if (dopt.dataOnly)\r\n> pg_fatal(\"options --logical-replication-slots-only and\r\n> -a/--data-only cannot be used together\");\r\n> if (dopt.schemaOnly)\r\n> pg_fatal(\"options --logical-replication-slots-only and\r\n> -s/--schema-only cannot be used together\");\r\n> }\r\n\r\nRight, fixed.\r\n\r\n> 4. 
getLogicalReplicationSlots\r\n> \r\n> + /* Check whether we should dump or not */\r\n> + if (fout->remoteVersion < 160000 || !dopt->logical_slots_only)\r\n> + return;\r\n> \r\n> I'm not sure if this check is necessary. Given the way this function\r\n> is called, is it possible for this check to fail? Maybe that quick\r\n> exit would be better code as an Assert?\r\n\r\nI think the version check must be needed because it is not done yet.\r\n(Actually I'm not sure the restriction is needed, but now I will keep)\r\nAbout dopt->logical_slots_only, I agreed to remove that. \r\n\r\n> 5. dumpLogicalReplicationSlot\r\n> \r\n> +dumpLogicalReplicationSlot(Archive *fout,\r\n> + const LogicalReplicationSlotInfo *slotinfo)\r\n> +{\r\n> + DumpOptions *dopt = fout->dopt;\r\n> +\r\n> + if (!dopt->logical_slots_only)\r\n> + return;\r\n> \r\n> (Similar to the previous comment). Is it even possible to arrive here\r\n> when dopt->logical_slots_only is false. Maybe that quick exit would be\r\n> better coded as an Assert?\r\n\r\nI think it is not possible, so changed to Assert().\r\n\r\n> 6.\r\n> + PQExpBuffer query = createPQExpBuffer();\r\n> + char *slotname = pg_strdup(slotinfo->dobj.name);\r\n> \r\n> I wondered if it was really necessary to strdup/free this slotname.\r\n> e.g. And if it is, then why don't you do this for the slotinfo->plugin\r\n> field?\r\n\r\nThis was a debris for my testing. Removed.\r\n\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 7. check_and_dump_old_cluster\r\n> \r\n> /* Extract a list of databases and tables from the old cluster */\r\n> get_db_and_rel_infos(&old_cluster);\r\n> + get_logical_slot_infos(&old_cluster);\r\n> \r\n> Is it correct to associate this new call with that existing comment\r\n> about \"databases and tables\"?\r\n\r\nAdded a comment.\r\n\r\n> 8. check_new_cluster\r\n> \r\n> @@ -188,6 +190,7 @@ void\r\n> check_new_cluster(void)\r\n> {\r\n> get_db_and_rel_infos(&new_cluster);\r\n> + get_logical_slot_infos(&new_cluster);\r\n> \r\n> check_new_cluster_is_empty();\r\n> \r\n> @@ -210,6 +213,9 @@ check_new_cluster(void)\r\n> check_for_prepared_transactions(&new_cluster);\r\n> \r\n> check_for_new_tablespace_dir(&new_cluster);\r\n> +\r\n> + if (user_opts.include_logical_slots)\r\n> + check_for_parameter_settings(&new_cluster);\r\n> \r\n> Can the get_logical_slot_infos() be done later, guarded by that the\r\n> same condition if (user_opts.include_logical_slots)?\r\n\r\nAdded.\r\n\r\n> 9. check_new_cluster_is_empty\r\n> \r\n> + * If --include-logical-replication-slots is required, check the\r\n> + * existing of slots\r\n> + */\r\n> \r\n> Did you mean to say \"check the existence of slots\"?\r\n\r\nYes, it is my typo. Fixed.\r\n\r\n> 10. check_for_parameter_settings\r\n> \r\n> + if (strcmp(wal_level, \"logical\") != 0)\r\n> + pg_fatal(\"wal_level must be \\\"logical\\\", but set to \\\"%s\\\"\",\r\n> + wal_level);\r\n> \r\n> /but set to/but is set to/\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 11. 
get_db_and_rel_infos\r\n> \r\n> + {\r\n> get_rel_infos(cluster, &cluster->dbarr.dbs[dbnum]);\r\n> \r\n> + /*\r\n> + * Additionally, slot_arr must be initialized because they will be\r\n> + * checked later.\r\n> + */\r\n> + cluster->dbarr.dbs[dbnum].slot_arr.nslots = 0;\r\n> + cluster->dbarr.dbs[dbnum].slot_arr.slots = NULL;\r\n> + }\r\n> \r\n> 11a.\r\n> I think probably it would have been easier to just use 'pg_malloc0'\r\n> instead of 'pg_malloc' in the get_db_infos, then this code would not\r\n> be necessary.\r\n\r\nI was not sure whether it is OK to change like that because of the\r\nperformance efficiency. But OK, fixed.\r\n\r\n> 11b.\r\n> BTW, shouldn't this function also be calling free_logical_slot_infos()\r\n> too? That will also have the same effect (initializing the slot_arr)\r\n> but without having to change anything else.\r\n> \r\n> ~~~\r\n> \r\n> 12. get_logical_slot_infos\r\n> +/*\r\n> + * Higher level routine to generate LogicalSlotInfoArr for all databases.\r\n> + */\r\n> +void\r\n> +get_logical_slot_infos(ClusterInfo *cluster)\r\n> \r\n> To be consistent with the other nearby function headers it should have\r\n> another line saying just get_logical_slot_infos().\r\n\r\nAdded.\r\n\r\n> 13. get_logical_slot_infos\r\n> \r\n> +void\r\n> +get_logical_slot_infos(ClusterInfo *cluster)\r\n> +{\r\n> + int dbnum;\r\n> +\r\n> + for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n> + {\r\n> + if (cluster->dbarr.dbs[dbnum].slot_arr.slots)\r\n> + free_logical_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\r\n> +\r\n> + get_logical_slot_infos_per_db(cluster, &cluster->dbarr.dbs[dbnum]);\r\n> + }\r\n> +\r\n> + if (cluster == &old_cluster)\r\n> + pg_log(PG_VERBOSE, \"\\nsource databases:\");\r\n> + else\r\n> + pg_log(PG_VERBOSE, \"\\ntarget databases:\");\r\n> +\r\n> + if (log_opts.verbose)\r\n> + {\r\n> + for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n> + {\r\n> + pg_log(PG_VERBOSE, \"Database: %s\", cluster->dbarr.dbs[dbnum].db_name);\r\n> + print_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\r\n> + }\r\n> + }\r\n> +}\r\n> \r\n> I didn't see why there are 2 loops exactly the same. I think with some\r\n> minor refactoring these can both be done in the same loop can't they?\r\n\r\nThe style follows get_db_and_rel_infos(), but... 
\r\n\r\n> SUGGESTION 1:\r\n> \r\n> if (cluster == &old_cluster)\r\n> pg_log(PG_VERBOSE, \"\\nsource databases:\");\r\n> else\r\n> pg_log(PG_VERBOSE, \"\\ntarget databases:\");\r\n> \r\n> for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n> {\r\n> if (cluster->dbarr.dbs[dbnum].slot_arr.slots)\r\n> free_logical_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\r\n> \r\n> get_logical_slot_infos_per_db(cluster, &cluster->dbarr.dbs[dbnum]);\r\n> \r\n> if (log_opts.verbose)\r\n> {\r\n> pg_log(PG_VERBOSE, \"Database: %s\",\r\n> cluster->dbarr.dbs[dbnum].db_name);\r\n> print_slot_infos(&cluster->dbarr.dbs[dbnum].slot_arr);\r\n> }\r\n> }\r\n> \r\n> ~\r\n> \r\n> I expected it could be simplified further still by using some variables\r\n> \r\n> SUGGESTION 2:\r\n> \r\n> if (cluster == &old_cluster)\r\n> pg_log(PG_VERBOSE, \"\\nsource databases:\");\r\n> else\r\n> pg_log(PG_VERBOSE, \"\\ntarget databases:\");\r\n> \r\n> for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n> {\r\n> DbInfo *pDbInfo = &cluster->dbarr.dbs[dbnum];\r\n> if (pDbInfo->slot_arr.slots)\r\n> free_logical_slot_infos(&pDbInfo->slot_arr);\r\n> \r\n> get_logical_slot_infos_per_db(cluster, pDbInfo);\r\n> \r\n> if (log_opts.verbose)\r\n> {\r\n> pg_log(PG_VERBOSE, \"Database: %s\", pDbInfo->db_name);\r\n> print_slot_infos(&pDbInfo->slot_arr);\r\n> }\r\n> }\r\n\r\nI chose SUGGESTION 2.\r\n\r\n> 14. get_logical_slot_infos_per_db\r\n> \r\n> + char query[QUERY_ALLOC];\r\n> +\r\n> + query[0] = '\\0'; /* initialize query string to empty */\r\n> +\r\n> + snprintf(query + strlen(query), sizeof(query) - strlen(query),\r\n> + \"SELECT slot_name, plugin, two_phase \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE database = current_database() AND temporary = false \"\r\n> + \"AND wal_status IN ('reserved', 'extended');\");\r\n> \r\n> I didn't understand the purpose of those calls to 'strlen(query)'\r\n> since the string was initialised to empty-string immediately above.\r\n\r\nRemoved.\r\n\r\n> 15.\r\n> +static void\r\n> +print_slot_infos(LogicalSlotInfoArr *slot_arr)\r\n> +{\r\n> + int slotnum;\r\n> +\r\n> + for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\r\n> + pg_log(PG_VERBOSE, \"slotname: %s: plugin: %s: two_phase %d\",\r\n> + slot_arr->slots[slotnum].slotname,\r\n> + slot_arr->slots[slotnum].plugin,\r\n> + slot_arr->slots[slotnum].two_phase);\r\n> +}\r\n> \r\n> IMO those colons don't make sense.\r\n> \r\n> BEFORE\r\n> \"slotname: %s: plugin: %s: two_phase %d\"\r\n> \r\n> SUGGESTION\r\n> \"slotname: %s, plugin: %s, two_phase: %d\"\r\n\r\nFixed. I followed print_rel_infos() style, but I prefer yours.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 16. LogicalSlotInfo\r\n> \r\n> +typedef struct\r\n> +{\r\n> + char *slotname; /* slot name */\r\n> + char *plugin; /* plugin */\r\n> + bool two_phase; /* Can the slot decode 2PC? */\r\n> +} LogicalSlotInfo;\r\n> \r\n> The RelInfo had a comment for the typedef struct, so I think the\r\n> LogicalSlotInfo struct also should have a comment.\r\n\r\nAdded.\r\n\r\n> 17. DbInfo\r\n> \r\n> RelInfoArr rel_arr; /* array of all user relinfos */\r\n> + LogicalSlotInfoArr slot_arr; /* array of all logicalslotinfos */\r\n> } DbInfo;\r\n> \r\n> Should the comment say \"LogicalSlotInfo\" instead of \"logicalslotinfos\"?\r\n\r\nRight, fixed.\r\n\r\n> .../t/003_logical_replication_slots.pl\r\n> \r\n> 18. RESULTS\r\n> \r\n> I run this by 'make check' in the src/bin/pg_upgrade folder.\r\n> \r\n> For some reason, the test does not work for me. 
The results I get are:\r\n> \r\n> # +++ tap check in src/bin/pg_upgrade +++\r\n> t/001_basic.pl ...................... ok\r\n> t/002_pg_upgrade.pl ................. ok\r\n> t/003_logical_replication_slots.pl .. 3/? # Tests were run but no plan\r\n> was declared and done_testing() was not seen.\r\n> t/003_logical_replication_slots.pl .. Dubious, test returned 29 (wstat\r\n> 7424, 0x1d00)\r\n> All 4 subtests passed\r\n> \r\n> Test Summary Report\r\n> -------------------\r\n> t/003_logical_replication_slots.pl (Wstat: 7424 Tests: 4 Failed: 0)\r\n> Non-zero exit status: 29\r\n> Parse errors: No plan found in TAP output\r\n> Files=3, Tests=27, 128 wallclock secs ( 0.04 usr 0.01 sys + 18.02\r\n> cusr 6.06 csys = 24.13 CPU)\r\n> Result: FAIL\r\n> make: *** [check] Error 1\r\n> \r\n> ~\r\n> \r\n> And the log file\r\n> (tmp_check/log/003_logical_replication_slots_old_node.log) shows the\r\n> following ERROR:\r\n> \r\n> 2023-05-09 12:19:25.330 AEST [32572] 003_logical_replication_slots.pl\r\n> LOG: statement: SELECT\r\n> pg_create_logical_replication_slot('test_slot', 'test_decoding',\r\n> false, true);\r\n> 2023-05-09 12:19:25.331 AEST [32572] 003_logical_replication_slots.pl\r\n> ERROR: could not access file \"test_decoding\": No such file or\r\n> directory\r\n> 2023-05-09 12:19:25.331 AEST [32572] 003_logical_replication_slots.pl\r\n> STATEMENT: SELECT pg_create_logical_replication_slot('test_slot',\r\n> 'test_decoding', false, true);\r\n> 2023-05-09 12:19:25.335 AEST [32564] LOG: received immediate shutdown\r\n> request\r\n> 2023-05-09 12:19:25.337 AEST [32564] LOG: database system is shut down\r\n> \r\n> ~\r\n> \r\n> Is it a bug? Or, if I am doing something wrong please let me know how\r\n> to run the test.\r\n\r\nGood point. I could not find the problem because I used meson build system.\r\nWhen I used the traditional make, the ERROR could be reproduced. \r\nIIUC the problem was occurred the dependency between pg_upgrade and test_decoding\r\nwas not set in the Makefile. Hence, I added a variable EXTRA_INSTALL to Makefile in\r\norder to clarify the dependency. This followed other directories like pg_basebackup.\r\n\r\n> 19.\r\n> +# Clean up\r\n> +rmtree($new_node->data_dir . \"/pg_upgrade_output.d\");\r\n> +$new_node->append_conf('postgresql.conf', \"wal_level = 'logical'\");\r\n> +$new_node->append_conf('postgresql.conf', \"max_replication_slots = 0\");\r\n> \r\n> I think the last 2 lines are not \"clean up\". They are preparations for\r\n> the subsequent test, so maybe they should be commented as such.\r\n\r\nRight, it is a preparation for the next. Added a comment.\r\n\r\n> 20.\r\n> +# Clean up\r\n> +rmtree($new_node->data_dir . \"/pg_upgrade_output.d\");\r\n> +$new_node->append_conf('postgresql.conf', \"max_replication_slots = 10\");\r\n> \r\n> I think the last line is not \"clean up\". It is preparation for the\r\n> subsequent test, so maybe it should be commented as such.\r\n\r\nAdded a comment.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 9 May 2023 09:43:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
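The new-cluster settings that check_for_parameter_settings() validates can also be checked by hand before attempting the upgrade. A minimal sketch, assuming only the two GUCs discussed so far (wal_level and max_replication_slots) matter:

    SHOW wal_level;               -- must report 'logical'
    SHOW max_replication_slots;   -- must be large enough for the slots being migrated

or, equivalently:

    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('wal_level', 'max_replication_slots');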
{
"msg_contents": "Hi Kuroda-san. I checked again the v11-0001.\n\nHere are a few more review comments.\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n1. help\n\n printf(_(\" --inserts dump data as INSERT\ncommands, rather than COPY\\n\"));\n printf(_(\" --load-via-partition-root load partitions via the\nroot table\\n\"));\n+ printf(_(\" --logical-replication-slots-only\\n\"\n+ \" dump only logical replication slots,\nno schema or data\\n\"));\n printf(_(\" --no-comments do not dump comments\\n\"));\n\nNow you removed the PG Docs for the internal pg_dump option based on\nmy previous review comment (see [2]#1). So does it mean this \"help\"\nalso be removed so this option will be completely invisible to the\nuser? I am not sure, but if you do choose to remove this help then\nprobably a comment should be added here to explain why it is\ndeliberately not listed.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n2. check_new_cluster\n\nAlthough you wrote \"Added\", I don't think my previous comment ([1]#8)\nwas yet addressed.\n\nWhat I mean to say ask was: can that call to get_logical_slot_infos()\nbe done later, only when you know that option was specified?\n\ne.g\n\nBEFORE\nget_logical_slot_infos(&new_cluster);\n...\nif (user_opts.include_logical_slots)\n check_for_parameter_settings(&new_cluster);\n\nSUGGESTION\nif (user_opts.include_logical_slots)\n{\n get_logical_slot_infos(&new_cluster);\n check_for_parameter_settings(&new_cluster);\n}\n\n======\nsrc/bin/pg_upgrade/info.c\n\n3. get_db_and_rel_infos\n\n> src/bin/pg_upgrade/info.c\n>\n> 11. get_db_and_rel_infos\n>\n> + {\n> get_rel_infos(cluster, &cluster->dbarr.dbs[dbnum]);\n>\n> + /*\n> + * Additionally, slot_arr must be initialized because they will be\n> + * checked later.\n> + */\n> + cluster->dbarr.dbs[dbnum].slot_arr.nslots = 0;\n> + cluster->dbarr.dbs[dbnum].slot_arr.slots = NULL;\n> + }\n>\n> 11a.\n> I think probably it would have been easier to just use 'pg_malloc0'\n> instead of 'pg_malloc' in the get_db_infos, then this code would not\n> be necessary.\nI was not sure whether it is OK to change like that because of the\nperformance efficiency. But OK, fixed.\n> 11b.\n> BTW, shouldn't this function also be calling free_logical_slot_infos()\n> too? That will also have the same effect (initializing the slot_arr)\n> but without having to change anything else.\n\n~\n\nAbove is your reply ([2]11a). If you were not sure about the malloc0\nthen I think the suggestion ([1]#12b) achieves the same thing and\ninitializes those fields. You did not reply to 12b, so I wondered if\nyou accidentally missed that point.\n\n~~~\n\n4. get_logical_slot_infos\n\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ {\n+ DbInfo *pDbInfo = &cluster->dbarr.dbs[dbnum];\n+\n+ if (pDbInfo->slot_arr.slots)\n+ free_logical_slot_infos(&pDbInfo->slot_arr);\n\nMaybe it is ok, but it seems unusual that this\nget_logical_slot_infos() is also doing a free. I didn't notice this\nsame pattern with the other get_XXX functions. Why is it needed? Even\nif pDbInfo->slot_arr.slots was not NULL, is the information stale or\nwill you just end up re-fetching the same info?\n\n======\n.../pg_upgrade/t/003_logical_replication_slots.pl\n\n5.\n+# Preparations for the subsequent test. 
The case max_replication_slots is set\n+# to 0 is prohibit.\n\n/prohibit/prohibited/\n\n------\n[1] My v10 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPtpQaKVfqr-8KUtGZqei1C9gWF0%2BY8n1UafvAQeS4G_hg%40mail.gmail.com\n[2] Kuroda-san's reply to my v10 review -\nhttps://www.postgresql.org/message-id/TYAPR01MB5866A537AC9AD46E49345A78F5769%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 11 May 2023 12:12:10 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, May 11, 2023 at 10:12 AM Peter Smith <[email protected]> wrote:\r\n> Hi Kuroda-san. I checked again the v11-0001.\r\n> \r\n> Here are a few more review comments.\r\n> \r\n> ======\r\n> src/bin/pg_dump/pg_dump.c\r\n> \r\n> 1. help\r\n> \r\n> printf(_(\" --inserts dump data as INSERT\r\n> commands, rather than COPY\\n\"));\r\n> printf(_(\" --load-via-partition-root load partitions via the\r\n> root table\\n\"));\r\n> + printf(_(\" --logical-replication-slots-only\\n\"\r\n> + \" dump only logical replication slots,\r\n> no schema or data\\n\"));\r\n> printf(_(\" --no-comments do not dump comments\\n\"));\r\n> \r\n> Now you removed the PG Docs for the internal pg_dump option based on\r\n> my previous review comment (see [2]#1). So does it mean this \"help\"\r\n> also be removed so this option will be completely invisible to the\r\n> user? I am not sure, but if you do choose to remove this help then\r\n> probably a comment should be added here to explain why it is\r\n> deliberately not listed.\r\n\r\nI'm not sure if there is any reason to not expose this new option? Do we have\r\nconcerns that users who use this new option by mistake may cause data\r\ninconsistencies?\r\n\r\nBTW, I think that all options of pg_dump (please see the array of long_options\r\nin the main function of the pg_dump.c file) are currently exposed to the user.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Thu, 11 May 2023 03:17:25 +0000",
"msg_from": "\"Wei Wang (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, May 9, 2023 at 17:44 PM Hayato Kuroda (Fujitsu) <[email protected]> wrote:\r\n> Thank you for reviewing! PSA new version.\r\n\r\nThanks for your patches.\r\nHere are some comments for 0001 patch:\r\n\r\n1. In the function getLogicalReplicationSlots\r\n```\r\n+\t\t/*\r\n+\t\t * Note: Currently we do not have any options to include/exclude slots\r\n+\t\t * in dumping, so all the slots must be selected.\r\n+\t\t */\r\n+\t\tslotinfo[i].dobj.dump = DUMP_COMPONENT_ALL;\r\n```\r\nI think currently we are only dumping the definition of logical replication\r\nslots. It seems better to set it as DUMP_COMPONENT_DEFINITION here.\r\n\r\n2. In the function dumpLogicalReplicationSlot\r\n```\r\n+\t\tArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\r\n+\t\t\t\t\t ARCHIVE_OPTS(.tag = slotname,\r\n+\t\t\t\t\t\t\t\t .description = \"REPLICATION SLOT\",\r\n+\t\t\t\t\t\t\t\t .section = SECTION_POST_DATA,\r\n+\t\t\t\t\t\t\t\t .createStmt = query->data));\r\n```\r\nI think if we do not set the member dropStmt in macro ARCHIVE_OPTS here, when we\r\nspecifying the option \"--logical-replication-slots-only\" and option \"-c/--clean\"\r\ntogether, the \"-c/--clean\" will not work.\r\n\r\nI think that we could use the function pg_drop_replication_slot to set this\r\nmember. Then, in the main function in the pg_dump.c file, we should add a check\r\nto prevent specifying option \"--logical-replication-slots-only\" and\r\noption \"--if-exists\" together.\r\nOr, we could simply add a check to prevent specifying option\r\n\"--logical-replication-slots-only\" and option \"-c/--clean\" together.\r\nWhat do you think?\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Thu, 11 May 2023 03:18:21 +0000",
"msg_from": "\"Wei Wang (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Wang,\r\n\r\n> I'm not sure if there is any reason to not expose this new option? Do we have\r\n> concerns that users who use this new option by mistake may cause data\r\n> inconsistencies?\r\n>\r\n> BTW, I think that all options of pg_dump (please see the array of long_options\r\n> in the main function of the pg_dump.c file) are currently exposed to the user.\r\n\r\nApart from another database object, --logical-replication-slot-only does not provide\r\nthe \"perfect\" copy. As you might know, some attributes like xmin and restart_lsn\r\nare not copied, it just creates similar replication slots which have same name,\r\nplugin, and options. I think these things may be confused for users.\r\n\r\nMoreover, I cannot come up with use-case which DBAs use the option alone.\r\nIf there is a good one, I can decide to remove the limitation.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 11 May 2023 06:35:21 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> 1. In the function getLogicalReplicationSlots\r\n> ```\r\n> +\t\t/*\r\n> +\t\t * Note: Currently we do not have any options to include/exclude\r\n> slots\r\n> +\t\t * in dumping, so all the slots must be selected.\r\n> +\t\t */\r\n> +\t\tslotinfo[i].dobj.dump = DUMP_COMPONENT_ALL;\r\n> ```\r\n> I think currently we are only dumping the definition of logical replication\r\n> slots. It seems better to set it as DUMP_COMPONENT_DEFINITION here.\r\n\r\nRight. Actually it was harmless because another flags like DUMP_COMPONENT_DEFINITION\r\nare not checked in dumpLogicalReplicationSlot(), but changed.\r\n\r\n> 2. In the function dumpLogicalReplicationSlot\r\n> ```\r\n> +\t\tArchiveEntry(fout, slotinfo->dobj.catId, slotinfo->dobj.dumpId,\r\n> +\t\t\t\t\t ARCHIVE_OPTS(.tag = slotname,\r\n> +\r\n> \t .description = \"REPLICATION SLOT\",\r\n> +\t\t\t\t\t\t\t\t .section =\r\n> SECTION_POST_DATA,\r\n> +\r\n> \t .createStmt = query->data));\r\n> ```\r\n> I think if we do not set the member dropStmt in macro ARCHIVE_OPTS here, when\r\n> we\r\n> specifying the option \"--logical-replication-slots-only\" and option \"-c/--clean\"\r\n> together, the \"-c/--clean\" will not work.\r\n> \r\n> I think that we could use the function pg_drop_replication_slot to set this\r\n> member. Then, in the main function in the pg_dump.c file, we should add a check\r\n> to prevent specifying option \"--logical-replication-slots-only\" and\r\n> option \"--if-exists\" together.\r\n> Or, we could simply add a check to prevent specifying option\r\n> \"--logical-replication-slots-only\" and option \"-c/--clean\" together.\r\n> What do you think?\r\n\r\nI chose not to allow to combine with -c. Assuming that this option is used only\r\nby the pg_upgrade, it is ensured that new node does not have any logical replication\r\nslots. So the remove function is not needed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 11 May 2023 08:55:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
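To make the createStmt/dropStmt discussion concrete: what the generated dump ultimately replays on the new cluster is essentially a call to the existing SQL-level slot functions. The statements below are illustrative only — the exact text emitted by the patch may differ, and the slot name and plugin are placeholders taken from the TAP test:

    -- roughly what a createStmt would issue; the third argument marks the slot
    -- as non-temporary and the fourth enables two_phase decoding
    SELECT pg_catalog.pg_create_logical_replication_slot('test_slot', 'test_decoding', false, true);

    -- roughly what a dropStmt would issue if -c/--clean were supported
    SELECT pg_catalog.pg_drop_replication_slot('test_slot');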
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! New patch can be available at [1].\r\n\r\n> 1. help\r\n> \r\n> printf(_(\" --inserts dump data as INSERT\r\n> commands, rather than COPY\\n\"));\r\n> printf(_(\" --load-via-partition-root load partitions via the\r\n> root table\\n\"));\r\n> + printf(_(\" --logical-replication-slots-only\\n\"\r\n> + \" dump only logical replication slots,\r\n> no schema or data\\n\"));\r\n> printf(_(\" --no-comments do not dump comments\\n\"));\r\n> \r\n> Now you removed the PG Docs for the internal pg_dump option based on\r\n> my previous review comment (see [2]#1). So does it mean this \"help\"\r\n> also be removed so this option will be completely invisible to the\r\n> user? I am not sure, but if you do choose to remove this help then\r\n> probably a comment should be added here to explain why it is\r\n> deliberately not listed.\r\n\r\nRemoved from help and comments were added instead.\r\n\r\n> 2. check_new_cluster\r\n> \r\n> Although you wrote \"Added\", I don't think my previous comment ([1]#8)\r\n> was yet addressed.\r\n> \r\n> What I mean to say ask was: can that call to get_logical_slot_infos()\r\n> be done later, only when you know that option was specified?\r\n> \r\n> e.g\r\n> \r\n> BEFORE\r\n> get_logical_slot_infos(&new_cluster);\r\n> ...\r\n> if (user_opts.include_logical_slots)\r\n> check_for_parameter_settings(&new_cluster);\r\n> \r\n> SUGGESTION\r\n> if (user_opts.include_logical_slots)\r\n> {\r\n> get_logical_slot_infos(&new_cluster);\r\n> check_for_parameter_settings(&new_cluster);\r\n> }\r\n\r\nSorry for missing your comments. But I think get_logical_slot_infos() cannot be\r\nexecuted later. In check_new_cluster_is_empty(), we must check not to exist any\r\nreplication slots on the new node because all of WALs will be truncated. Infos\r\nrelated with slots are stored in get_logical_slot_infos(), so it must be executed\r\nbefore check_new_cluster_is_empty(). Another possibility is to execute\r\ncheck_for_parameter_settings() earlier, and I tried to do. The style seems little\r\nbit strange, but it worked well. How do you think?\r\n\r\n> 3. get_db_and_rel_infos\r\n> \r\n> > src/bin/pg_upgrade/info.c\r\n> >\r\n> > 11. get_db_and_rel_infos\r\n> >\r\n> > + {\r\n> > get_rel_infos(cluster, &cluster->dbarr.dbs[dbnum]);\r\n> >\r\n> > + /*\r\n> > + * Additionally, slot_arr must be initialized because they will be\r\n> > + * checked later.\r\n> > + */\r\n> > + cluster->dbarr.dbs[dbnum].slot_arr.nslots = 0;\r\n> > + cluster->dbarr.dbs[dbnum].slot_arr.slots = NULL;\r\n> > + }\r\n> >\r\n> > 11a.\r\n> > I think probably it would have been easier to just use 'pg_malloc0'\r\n> > instead of 'pg_malloc' in the get_db_infos, then this code would not\r\n> > be necessary.\r\n> I was not sure whether it is OK to change like that because of the\r\n> performance efficiency. But OK, fixed.\r\n> > 11b.\r\n> > BTW, shouldn't this function also be calling free_logical_slot_infos()\r\n> > too? That will also have the same effect (initializing the slot_arr)\r\n> > but without having to change anything else.\r\n> \r\n> ~\r\n> \r\n> Above is your reply ([2]11a). If you were not sure about the malloc0\r\n> then I think the suggestion ([1]#12b) achieves the same thing and\r\n> initializes those fields. You did not reply to 12b, so I wondered if\r\n> you accidentally missed that point.\r\n\r\nSorry, this part is no longer needed. Please see below.\r\n\r\n> 4. 
get_logical_slot_infos\r\n> \r\n> + for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n> + {\r\n> + DbInfo *pDbInfo = &cluster->dbarr.dbs[dbnum];\r\n> +\r\n> + if (pDbInfo->slot_arr.slots)\r\n> + free_logical_slot_infos(&pDbInfo->slot_arr);\r\n> \r\n> Maybe it is ok, but it seems unusual that this\r\n> get_logical_slot_infos() is also doing a free. I didn't notice this\r\n> same pattern with the other get_XXX functions. Why is it needed? Even\r\n> if pDbInfo->slot_arr.slots was not NULL, is the information stale or\r\n> will you just end up re-fetching the same info?\r\n\r\nAfter considering more, I decided to remove the free function.\r\n\r\nThe reason why I did is that get_logical_slot_infos() for the new cluster is\r\ncalled twice, one is for checking purpose in check_new_cluster() and\r\nanother is for updating the cluster info in create_logical_replication_slots().\r\nAt the first calling, we assume that logical slots do not exist on new node, but\r\neven if the case a small memory are is allocated by pg_malloc(0).\r\n(If there are some slots, it is not called twice.)\r\nBut I noticed that it can be avoided by adding the if-statement, so I did.\r\n\r\nAdditionally, the pg_malloc0() in get_db_and_rel_infos() is no more needed\r\nbecause we do not have to check the un-initialized area.\r\n\r\n> .../pg_upgrade/t/003_logical_replication_slots.pl\r\n> \r\n> 5.\r\n> +# Preparations for the subsequent test. The case max_replication_slots is set\r\n> +# to 0 is prohibit.\r\n> \r\n> /prohibit/prohibited/\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866A3B91F56056A803B94DAF5749%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 11 May 2023 08:56:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san. Here are some comments for patch v12-0001.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_new_cluster\n\n+ if (user_opts.include_logical_slots)\n+ {\n+ get_logical_slot_infos(&new_cluster);\n+ check_for_parameter_settings(&new_cluster);\n+ }\n+\n check_new_cluster_is_empty();\n~\n\nThe code is OK, but maybe your reply/explanation (see [2] #2) saying\nget_logical_slot_infos() needs to be called before\ncheck_new_cluster_is_empty() would be good to have in a comment here?\n\n======\nsrc/bin/pg_upgrade/info.c\n\n2. get_logical_slot_infos\n\n+ if (ntups)\n+ slotinfos = (LogicalSlotInfo *) pg_malloc(sizeof(LogicalSlotInfo) * ntups);\n+ else\n+ {\n+ slotinfos = NULL;\n+ goto cleanup;\n+ }\n+\n+ i_slotname = PQfnumber(res, \"slot_name\");\n+ i_plugin = PQfnumber(res, \"plugin\");\n+ i_twophase = PQfnumber(res, \"two_phase\");\n+\n+ for (slotnum = 0; slotnum < ntups; slotnum++)\n+ {\n+ LogicalSlotInfo *curr = &slotinfos[num_slots++];\n+\n+ curr->slotname = pg_strdup(PQgetvalue(res, slotnum, i_slotname));\n+ curr->plugin = pg_strdup(PQgetvalue(res, slotnum, i_plugin));\n+ curr->two_phase = (strcmp(PQgetvalue(res, slotnum, i_twophase), \"t\") == 0);\n+ }\n+\n+cleanup:\n+ PQfinish(conn);\n\nIMO the goto/label coding is not warranted here - a simple if/else can\ndo the same thing.\n\n~~~\n\n3. free_db_and_rel_infos, free_logical_slot_infos\n\nstatic void\nfree_db_and_rel_infos(DbInfoArr *db_arr)\n{\nint dbnum;\n\nfor (dbnum = 0; dbnum < db_arr->ndbs; dbnum++)\n{\nfree_rel_infos(&db_arr->dbs[dbnum].rel_arr);\npg_free(db_arr->dbs[dbnum].db_name);\n}\npg_free(db_arr->dbs);\ndb_arr->dbs = NULL;\ndb_arr->ndbs = 0;\n}\n\n~\n\nIn v12 now you removed the free_logical_slot_infos(). But isn't it\nbetter to still call free_logical_slot_infos() from the above\nfree_db_and_rel_infos() still so the slot memory\n(dbinfo->slot_arr.slots) won't stay lying around?\n\n~~~\n\n4. get_logical_slot_infos, print_slot_infos\n\nIn another thread [1] I am posting some minor patch changes to the\nVERBOSE logging (changes to double-quotes and commas etc.). Please\nkeep a watch on that thread because if gets pushed then this one will\nbe impacted. e.g. your logging here ought also to include the same\nsuggested double quotes.\n\n------\n[1] pg_upgrade logs -\nhttps://www.postgresql.org/message-id/flat/CAHut%2BPuOB4bUwkYAjA_NkTrYaocKy6W3ZYK5Pin305R7mNSLgA%40mail.gmail.com\n[2] Kuroda-san reply to my v11 review -\nhttps://www.postgresql.org/message-id/TYAPR01MB5866BD618DEE62AF1836E612F5749%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 12 May 2023 10:56:50 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> 1. check_new_cluster\r\n> \r\n> + if (user_opts.include_logical_slots)\r\n> + {\r\n> + get_logical_slot_infos(&new_cluster);\r\n> + check_for_parameter_settings(&new_cluster);\r\n> + }\r\n> +\r\n> check_new_cluster_is_empty();\r\n> ~\r\n> \r\n> The code is OK, but maybe your reply/explanation (see [2] #2) saying\r\n> get_logical_slot_infos() needs to be called before\r\n> check_new_cluster_is_empty() would be good to have in a comment here?\r\n\r\nIndeed, added.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 2. get_logical_slot_infos\r\n> \r\n> + if (ntups)\r\n> + slotinfos = (LogicalSlotInfo *) pg_malloc(sizeof(LogicalSlotInfo) * ntups);\r\n> + else\r\n> + {\r\n> + slotinfos = NULL;\r\n> + goto cleanup;\r\n> + }\r\n> +\r\n> + i_slotname = PQfnumber(res, \"slot_name\");\r\n> + i_plugin = PQfnumber(res, \"plugin\");\r\n> + i_twophase = PQfnumber(res, \"two_phase\");\r\n> +\r\n> + for (slotnum = 0; slotnum < ntups; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *curr = &slotinfos[num_slots++];\r\n> +\r\n> + curr->slotname = pg_strdup(PQgetvalue(res, slotnum, i_slotname));\r\n> + curr->plugin = pg_strdup(PQgetvalue(res, slotnum, i_plugin));\r\n> + curr->two_phase = (strcmp(PQgetvalue(res, slotnum, i_twophase), \"t\") == 0);\r\n> + }\r\n> +\r\n> +cleanup:\r\n> + PQfinish(conn);\r\n> \r\n> IMO the goto/label coding is not warranted here - a simple if/else can\r\n> do the same thing.\r\n\r\nYeah, I could simplify by if-statement. Additionally, some definitions of variables\r\nare moved to the code block.\r\n\r\n> 3. free_db_and_rel_infos, free_logical_slot_infos\r\n> \r\n> static void\r\n> free_db_and_rel_infos(DbInfoArr *db_arr)\r\n> {\r\n> int dbnum;\r\n> \r\n> for (dbnum = 0; dbnum < db_arr->ndbs; dbnum++)\r\n> {\r\n> free_rel_infos(&db_arr->dbs[dbnum].rel_arr);\r\n> pg_free(db_arr->dbs[dbnum].db_name);\r\n> }\r\n> pg_free(db_arr->dbs);\r\n> db_arr->dbs = NULL;\r\n> db_arr->ndbs = 0;\r\n> }\r\n> \r\n> ~\r\n> \r\n> In v12 now you removed the free_logical_slot_infos(). But isn't it\r\n> better to still call free_logical_slot_infos() from the above\r\n> free_db_and_rel_infos() still so the slot memory\r\n> (dbinfo->slot_arr.slots) won't stay lying around?\r\n\r\nThe free_db_and_rel_infos() is called at restore phase, and slot_arr has malloc'd\r\nmembers only when logical slots are defined on new_cluster. In this case the FATAL\r\nerror is occured in the checking phase, so there is no possibility to reach restore\r\nphase.\r\n\r\n> 4. get_logical_slot_infos, print_slot_infos\r\n> \r\n> In another thread [1] I am posting some minor patch changes to the\r\n> VERBOSE logging (changes to double-quotes and commas etc.). Please\r\n> keep a watch on that thread because if gets pushed then this one will\r\n> be impacted. e.g. your logging here ought also to include the same\r\n> suggested double quotes.\r\n\r\nI thought it would be pushed soon, so the suggestion was included.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 15 May 2023 06:29:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san.\n\nI looked at the latest patch v13-0001. Here are some minor comments.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n1. get_logical_slot_infos_per_db\n\nI noticed that the way this is coded, 'ntups' and 'num_slots' seems to\nhave exactly the same meaning. IMO you can simplify this by removing\n'ntups'.\n\nBEFORE\n+ int ntups;\n+ int num_slots = 0;\n\nSUGGESTION\n+ int num_slots;\n\n~\n\nBEFORE\n+ ntups = PQntuples(res);\n+\n+ if (ntups)\n+ {\n\nSUGGESTION\n+ num_slots = PQntuples(res);\n+\n+ if (num_slots)\n+ {\n\n~\n\nBEFORE\n+ slotinfos = (LogicalSlotInfo *) pg_malloc(sizeof(LogicalSlotInfo) * ntups);\n\nSUGGESTION\n+ slotinfos = (LogicalSlotInfo *) pg_malloc(sizeof(LogicalSlotInfo) *\nnum_slots);\n\n~\n\nBEFORE\n+ for (slotnum = 0; slotnum < ntups; slotnum++)\n+ {\n+ LogicalSlotInfo *curr = &slotinfos[num_slots++];\n\nSUGGESTION\n+ for (slotnum = 0; slotnum < ntups; slotnum++)\n+ {\n+ LogicalSlotInfo *curr = &slotinfos[slotnum];\n\n======\n\n2. get_logical_slot_infos, print_slot_infos\n\n> >\n> > In another thread [1] I am posting some minor patch changes to the\n> > VERBOSE logging (changes to double-quotes and commas etc.). Please\n> > keep a watch on that thread because if gets pushed then this one will\n> > be impacted. e.g. your logging here ought also to include the same\n> > suggested double quotes.\n>\n> I thought it would be pushed soon, so the suggestion was included.\n\nOK, but I think you have accidentally missed adding similar new double\nquotes to all other VERBOSE logging in your patch.\n\ne.g. see get_logical_slot_infos:\npg_log(PG_VERBOSE, \"Database: %s\", pDbInfo->db_name);\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 15 May 2023 17:39:57 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! PSA new version patchset.\r\n\r\n> 1. get_logical_slot_infos_per_db\r\n> \r\n> I noticed that the way this is coded, 'ntups' and 'num_slots' seems to\r\n> have exactly the same meaning. IMO you can simplify this by removing\r\n> 'ntups'.\r\n> \r\n> BEFORE\r\n> + int ntups;\r\n> + int num_slots = 0;\r\n> \r\n> SUGGESTION\r\n> + int num_slots;\r\n> \r\n> ~\r\n> \r\n> BEFORE\r\n> + ntups = PQntuples(res);\r\n> +\r\n> + if (ntups)\r\n> + {\r\n> \r\n> SUGGESTION\r\n> + num_slots = PQntuples(res);\r\n> +\r\n> + if (num_slots)\r\n> + {\r\n> \r\n> ~\r\n> \r\n> BEFORE\r\n> + slotinfos = (LogicalSlotInfo *) pg_malloc(sizeof(LogicalSlotInfo) * ntups);\r\n> \r\n> SUGGESTION\r\n> + slotinfos = (LogicalSlotInfo *) pg_malloc(sizeof(LogicalSlotInfo) *\r\n> num_slots);\r\n> \r\n> ~\r\n> \r\n> BEFORE\r\n> + for (slotnum = 0; slotnum < ntups; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *curr = &slotinfos[num_slots++];\r\n> \r\n> SUGGESTION\r\n> + for (slotnum = 0; slotnum < ntups; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *curr = &slotinfos[slotnum];\r\n\r\nRight, fixed.\r\n\r\n> 2. get_logical_slot_infos, print_slot_infos\r\n> \r\n> > >\r\n> > > In another thread [1] I am posting some minor patch changes to the\r\n> > > VERBOSE logging (changes to double-quotes and commas etc.). Please\r\n> > > keep a watch on that thread because if gets pushed then this one will\r\n> > > be impacted. e.g. your logging here ought also to include the same\r\n> > > suggested double quotes.\r\n> >\r\n> > I thought it would be pushed soon, so the suggestion was included.\r\n> \r\n> OK, but I think you have accidentally missed adding similar new double\r\n> quotes to all other VERBOSE logging in your patch.\r\n> \r\n> e.g. see get_logical_slot_infos:\r\n> pg_log(PG_VER\r\nBOSE, \"Database: %s\", pDbInfo->db_name);\r\n> \r\n\r\nOh, I missed it. Fixed. I grepped patches and could not find other lines\r\nwhich should be double-quoted.\r\n\r\nIn addition, I ran pgindent again for 0001.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 16 May 2023 06:15:00 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san,\n\nI confirmed the patch changes from v13-0001 to v14-0001 have addressed\nthe comments from my previous post, and the cfbot is passing OK, so I\ndon't have any more review comments at this time.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 17 May 2023 09:47:33 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tues, May 16, 2023 at 14:15 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> Dear Peter,\r\n> \r\n> Thanks for reviewing! PSA new version patchset.\r\n\r\nThanks for updating the patch set.\r\n\r\nHere are some comments:\r\n===\r\nFor patches 0001\r\n\r\n1. The latest patch set fails to apply because the new commit (0245f8d) in HEAD.\r\n\r\n~~~\r\n\r\n2. In file pg_dump.h.\r\n```\r\n+/*\r\n+ * The LogicalReplicationSlotInfo struct is used to represent replication\r\n+ * slots.\r\n+ *\r\n+ * XXX: add more attributes if needed\r\n+ */\r\n+typedef struct _LogicalReplicationSlotInfo\r\n+{\r\n+\tDumpableObject dobj;\r\n+\tchar\t *plugin;\r\n+\tchar\t *slottype;\r\n+\tbool\t\ttwophase;\r\n+} LogicalReplicationSlotInfo;\r\n```\r\n\r\nDo we need the structure member \"slottype\"? It seems we do not use \"slottype\"\r\nbecause we only dump logical replication slot.\r\n\r\n===\r\nFor patch 0002\r\n\r\n3. In the function SaveSlotToPath\r\n```\r\n-\t/* and don't do anything if there's nothing to write */\r\n-\tif (!was_dirty)\r\n+\t/*\r\n+\t * and don't do anything if there's nothing to write, unless it's this is\r\n+\t * called for a logical slot during a shutdown checkpoint, as we want to\r\n+\t * persist the confirmed_flush_lsn in that case, even if that's the only\r\n+\t * modification.\r\n+\t */\r\n+\tif (!was_dirty && !is_shutdown && !SlotIsLogical(slot))\r\n```\r\nIt seems that the code isn't consistent with our expectation.\r\nIf this is called for a physical slot during a shutdown checkpoint and there's\r\nnothing to write, I think it will also persist physical slots to disk.\r\n\r\n===\r\nFor patch 0003\r\n\r\n4. In the function check_for_parameter_settings\r\n```\r\n+\t/* --include-logical-replication-slots can be used since PG16. */\r\n+\tif (GET_MAJOR_VERSION(new_cluster->major_version < 1600))\r\n+\t\treturn;\r\n```\r\nIt seems that there is a slight mistake (the input of GET_MAJOR_VERSION) in the\r\nif-condition:\r\nGET_MAJOR_VERSION(new_cluster->major_version < 1600)\r\n->\r\nGET_MAJOR_VERSION(new_cluster->major_version) <= 1500\r\n\r\nPlease also check the similar if-conditions in the below two functions\r\ncheck_for_confirmed_flush_lsn (in 0003 patch)\r\ncheck_are_logical_slots_active (in 0004 patch)\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Mon, 22 May 2023 03:21:44 +0000",
"msg_from": "\"Wei Wang (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> For patches 0001\r\n> \r\n> 1. The latest patch set fails to apply because the new commit (0245f8d) in HEAD.\r\n\r\nI didn't notice that. Thanks, fixed.\r\n\r\n> 2. In file pg_dump.h.\r\n> ```\r\n> +/*\r\n> + * The LogicalReplicationSlotInfo struct is used to represent replication\r\n> + * slots.\r\n> + *\r\n> + * XXX: add more attributes if needed\r\n> + */\r\n> +typedef struct _LogicalReplicationSlotInfo\r\n> +{\r\n> +\tDumpableObject dobj;\r\n> +\tchar\t *plugin;\r\n> +\tchar\t *slottype;\r\n> +\tbool\t\ttwophase;\r\n> +} LogicalReplicationSlotInfo;\r\n> ```\r\n> \r\n> Do we need the structure member \"slottype\"? It seems we do not use \"slottype\"\r\n> because we only dump logical replication slot.\r\n\r\nAs you said, this attribute is not needed. This is a garbage of previous efforts.\r\nRemoved.\r\n\r\n> For patch 0002\r\n> \r\n> 3. In the function SaveSlotToPath\r\n> ```\r\n> -\t/* and don't do anything if there's nothing to write */\r\n> -\tif (!was_dirty)\r\n> +\t/*\r\n> +\t * and don't do anything if there's nothing to write, unless it's this is\r\n> +\t * called for a logical slot during a shutdown checkpoint, as we want to\r\n> +\t * persist the confirmed_flush_lsn in that case, even if that's the only\r\n> +\t * modification.\r\n> +\t */\r\n> +\tif (!was_dirty && !is_shutdown && !SlotIsLogical(slot))\r\n> ```\r\n> It seems that the code isn't consistent with our expectation.\r\n> If this is called for a physical slot during a shutdown checkpoint and there's\r\n> nothing to write, I think it will also persist physical slots to disk.\r\n\r\nYou meant to say that we should not change handlings for physical case, right?\r\n\r\n> For patch 0003\r\n> \r\n> 4. In the function check_for_parameter_settings\r\n> ```\r\n> +\t/* --include-logical-replication-slots can be used since PG\t16. */\r\n> +\tif (GET_MAJOR_VERSION(new_cluster->major_version < 1600))\r\n> +\t\treturn;\r\n> ```\r\n> It seems that there is a slight mistake (the input of GET_MAJOR_VERSION) in the\r\n> if-condition:\r\n> GET_MAJOR_VERSION(new_cluster->major_version < 1600)\r\n> ->\r\n> GET_MAJOR_VERSION(new_cluster->major_version) <= 1500\r\n> \r\n> Please also check the similar if-conditions in the below two functions\r\n> check_for_confirmed_flush_lsn (in 0003 patch)\r\n> check_are_logical_slots_active (in 0004 patch)\r\n\r\nDone. I grepped with GET_MAJOR_VERSION, and confirmed they were fixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 22 May 2023 10:20:31 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, 22 May 2023 at 15:50, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Wang,\n>\n> Thank you for reviewing! PSA new version.\n\nThanks for the updated patch, Few comments:\nFew comments\n1) check_for_parameter_settings, check_for_confirmed_flush_lsn and\ncheck_are_logical_slots_active functions all have the same messages,\nwe can keep it unique so that it is easy for user to interpret:\n+check_for_parameter_settings(ClusterInfo *new_cluster)\n+{\n+ PGresult *res;\n+ PGconn *conn = connectToServer(new_cluster, \"template1\");\n+ int max_replication_slots;\n+ char *wal_level;\n+\n+ prep_status(\"Checking for logical replication slots\");\n+\n+ res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\n\n+check_for_confirmed_flush_lsn(ClusterInfo *cluster)\n+{\n+ int i,\n+ ntups,\n+ i_slotname;\n+ bool is_error = false;\n+ PGresult *res;\n+ DbInfo *active_db = &cluster->dbarr.dbs[0];\n+ PGconn *conn = connectToServer(cluster, active_db->db_name);\n+\n+ Assert(user_opts.include_logical_slots);\n+\n+ /* --include-logical-replication-slots can be used since PG16. */\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1500)\n+ return;\n+\n+ prep_status(\"Checking for logical replication slots\");\n\n+check_are_logical_slots_active(ClusterInfo *cluster)\n+{\n+ int i,\n+ ntups,\n+ i_slotname;\n+ bool is_error = false;\n+ PGresult *res;\n+ DbInfo *active_db = &cluster->dbarr.dbs[0];\n+ PGconn *conn = connectToServer(cluster, active_db->db_name);\n+\n+ Assert(user_opts.include_logical_slots);\n+\n+ /* --include-logical-replication-slots can be used since PG16. */\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1500)\n+ return;\n+\n+ prep_status(\"Checking for logical replication slots\");\n\n2) This function can be placed above get_logical_slot_infos and the\nprototype from this file can be removed:\n+/*\n+ * get_logical_slot_infos_per_db()\n+ *\n+ * gets the LogicalSlotInfos for all the logical replication slots of\nthe database\n+ * referred to by \"dbinfo\".\n+ */\n+static void\n+get_logical_slot_infos_per_db(ClusterInfo *cluster, DbInfo *dbinfo)\n+{\n+ PGconn *conn = connectToServer(cluster,\n+\n dbinfo->db_name);\n\n3) LogicalReplicationSlotInfo should be placed after LogicalRepWorker\nto keep the order consistent:\n LogicalRepCommitPreparedTxnData\n LogicalRepCtxStruct\n+LogicalReplicationSlotInfo\n LogicalRepMsgType\n\n4) \"existence of slots\" be changed to \"existence of slots.\"\n+ /*\n+ * If --include-logical-replication-slots is required, check the\n+ * existence of slots\n+ */\n\n5) This comment can be removed:\n+ *\n+ * XXX: add more attributes if needed\n+ */\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 30 May 2023 20:28:28 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, 22 May 2023 at 15:50, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Wang,\n>\n> Thank you for reviewing! PSA new version.\n>\n> > For patches 0001\n> >\n> > 1. The latest patch set fails to apply because the new commit (0245f8d) in HEAD.\n>\n> I didn't notice that. Thanks, fixed.\n>\n> > 2. In file pg_dump.h.\n> > ```\n> > +/*\n> > + * The LogicalReplicationSlotInfo struct is used to represent replication\n> > + * slots.\n> > + *\n> > + * XXX: add more attributes if needed\n> > + */\n> > +typedef struct _LogicalReplicationSlotInfo\n> > +{\n> > + DumpableObject dobj;\n> > + char *plugin;\n> > + char *slottype;\n> > + bool twophase;\n> > +} LogicalReplicationSlotInfo;\n> > ```\n> >\n> > Do we need the structure member \"slottype\"? It seems we do not use \"slottype\"\n> > because we only dump logical replication slot.\n>\n> As you said, this attribute is not needed. This is a garbage of previous efforts.\n> Removed.\n>\n> > For patch 0002\n> >\n> > 3. In the function SaveSlotToPath\n> > ```\n> > - /* and don't do anything if there's nothing to write */\n> > - if (!was_dirty)\n> > + /*\n> > + * and don't do anything if there's nothing to write, unless it's this is\n> > + * called for a logical slot during a shutdown checkpoint, as we want to\n> > + * persist the confirmed_flush_lsn in that case, even if that's the only\n> > + * modification.\n> > + */\n> > + if (!was_dirty && !is_shutdown && !SlotIsLogical(slot))\n> > ```\n> > It seems that the code isn't consistent with our expectation.\n> > If this is called for a physical slot during a shutdown checkpoint and there's\n> > nothing to write, I think it will also persist physical slots to disk.\n>\n> You meant to say that we should not change handlings for physical case, right?\n>\n> > For patch 0003\n> >\n> > 4. In the function check_for_parameter_settings\n> > ```\n> > + /* --include-logical-replication-slots can be used since PG 16. */\n> > + if (GET_MAJOR_VERSION(new_cluster->major_version < 1600))\n> > + return;\n> > ```\n> > It seems that there is a slight mistake (the input of GET_MAJOR_VERSION) in the\n> > if-condition:\n> > GET_MAJOR_VERSION(new_cluster->major_version < 1600)\n> > ->\n> > GET_MAJOR_VERSION(new_cluster->major_version) <= 1500\n> >\n> > Please also check the similar if-conditions in the below two functions\n> > check_for_confirmed_flush_lsn (in 0003 patch)\n> > check_are_logical_slots_active (in 0004 patch)\n>\n> Done. 
I grepped with GET_MAJOR_VERSION, and confirmed they were fixed.\n\nFew minor comments:\n1) we could remove the variable slotname from the below code by using\nPQgetvalue directly in pg_log:\n+ for (i = 0; i < ntups; i++)\n+ {\n+ char *slotname;\n+\n+ is_error = true;\n+\n+ slotname = PQgetvalue(res, i, i_slotname);\n+\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: logical replication slot \\\"%s\\\"\nis not active\",\n+ slotname);\n+ }\n\n2) This include \"catalog/pg_control.h\" should be after inclusion pg_collation.h\n #include \"catalog/pg_authid_d.h\"\n+#include \"catalog/pg_control.h\"\n #include \"catalog/pg_collation.h\"\n\n3) This spurious addition line change might not be required in this patch:\n --- a/src/bin/pg_upgrade/t/003_logical_replication_slots.pl\n+++ b/src/bin/pg_upgrade/t/003_logical_replication_slots.pl\n@@ -85,11 +85,39 @@ $old_node->safe_psql(\n ]);\n\n my $result = $old_node->safe_psql('postgres',\n- \"SELECT count (*) FROM\npg_logical_slot_get_changes('test_slot', NULL, NULL)\"\n+ \"SELECT count(*) FROM\npg_logical_slot_peek_changes('test_slot', NULL, NULL)\"\n );\n+\n is($result, qq(12), 'ensure WALs are not consumed yet');\n $old_node->stop;\n\n4) This inclusion \"#include \"access/xlogrecord.h\" is not required:\n #include \"postgres_fe.h\"\n\n+#include \"access/xlogrecord.h\"\n+#include \"access/xlog_internal.h\"\n #include \"catalog/pg_authid_d.h\"\n\n5)\"thepublisher's\" should be \"the publisher's\"\n When a live check is requested, there is a possibility of additional changes\noccurring, which may cause the current WAL position to exceed the\nconfirmed_flush_lsn\nof the slot. As a result, we check the confirmed_flush_lsn of each logical slot\ninstead. This is sufficient as all the WAL records will be sent during\nthepublisher's\nshutdown.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 5 Jun 2023 14:48:04 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for reviewing! New version will be attached the next post.\r\n\r\n> Few comments\r\n> 1) check_for_parameter_settings, check_for_confirmed_flush_lsn and\r\n> check_are_logical_slots_active functions all have the same messages,\r\n> we can keep it unique so that it is easy for user to interpret:\r\n> +check_for_parameter_settings(ClusterInfo *new_cluster)\r\n> +{\r\n> + PGresult *res;\r\n> + PGconn *conn = connectToServer(new_cluster, \"template1\");\r\n> + int max_replication_slots;\r\n> + char *wal_level;\r\n> +\r\n> + prep_status(\"Checking for logical replication slots\");\r\n> +\r\n> + res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\r\n> \r\n> +check_for_confirmed_flush_lsn(ClusterInfo *cluster)\r\n> +{\r\n> + int i,\r\n> + ntups,\r\n> + i_slotname;\r\n> + bool is_error = false;\r\n> + PGresult *res;\r\n> + DbInfo *active_db = &cluster->dbarr.dbs[0];\r\n> + PGconn *conn = connectToServer(cluster, active_db->db_name);\r\n> +\r\n> + Assert(user_opts.include_logical_slots);\r\n> +\r\n> + /* --include-logical-replication-slots can be used since PG16. */\r\n> + if (GET_MAJOR_VERSION(cluster->major_version) <= 1500)\r\n> + return;\r\n> +\r\n> + prep_status(\"Checking for logical replication slots\");\r\n> \r\n> +check_are_logical_slots_active(ClusterInfo *cluster)\r\n> +{\r\n> + int i,\r\n> + ntups,\r\n> + i_slotname;\r\n> + bool is_error = false;\r\n> + PGresult *res;\r\n> + DbInfo *active_db = &cluster->dbarr.dbs[0];\r\n> + PGconn *conn = connectToServer(cluster, active_db->db_name);\r\n> +\r\n> + Assert(user_opts.include_logical_slots);\r\n> +\r\n> + /* --include-logical-replication-slots can be used since PG16. */\r\n> + if (GET_MAJOR_VERSION(cluster->major_version) <= 1500)\r\n> + return;\r\n> +\r\n> + prep_status(\"Checking for logical replication slots\");\r\n\r\nChanged. How do you think?\r\n\r\n> 2) This function can be placed above get_logical_slot_infos and the\r\n> prototype from this file can be removed:\r\n> +/*\r\n> + * get_logical_slot_infos_per_db()\r\n> + *\r\n> + * gets the LogicalSlotInfos for all the logical replication slots of\r\n> the database\r\n> + * referred to by \"dbinfo\".\r\n> + */\r\n> +static void\r\n> +get_logical_slot_infos_per_db(ClusterInfo *cluster, DbInfo *dbinfo)\r\n> +{\r\n> + PGconn *conn = connectToServer(cluster,\r\n> +\r\n> dbinfo->db_name);\r\n\r\nRemoved.\r\n\r\n> 3) LogicalReplicationSlotInfo should be placed after LogicalRepWorker\r\n> to keep the order consistent:\r\n> LogicalRepCommitPreparedTxnData\r\n> LogicalRepCtxStruct\r\n> +LogicalReplicationSlotInfo\r\n> LogicalRepMsgType\r\n\r\nIndeed, fixed.\r\n\r\n> 4) \"existence of slots\" be changed to \"existence of slots.\"\r\n> + /*\r\n> + * If --include-logical-replication-slots is required, check the\r\n> + * existence of slots\r\n> + */\r\n\r\nThe comma was added.\r\n\r\n> 5) This comment can be removed:\r\n> + *\r\n> + * XXX: add more attributes if needed\r\n> + */\r\n\r\nRemoved. Additionally, another XXX which mentioned about physical slots was also removed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 8 Jun 2023 03:03:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for reviewing! PSA new version patch set.\r\n\r\n> Few minor comments:\r\n> 1) we could remove the variable slotname from the below code by using\r\n> PQgetvalue directly in pg_log:\r\n> + for (i = 0; i < ntups; i++)\r\n> + {\r\n> + char *slotname;\r\n> +\r\n> + is_error = true;\r\n> +\r\n> + slotname = PQgetvalue(res, i, i_slotname);\r\n> +\r\n> + pg_log(PG_WARNING,\r\n> + \"\\nWARNING: logical replication slot \\\"%s\\\"\r\n> is not active\",\r\n> + slotname);\r\n> + }\r\n\r\nRemoved. Such codes were in two functions, and both of them were fixed.\r\n\r\n> 2) This include \"catalog/pg_control.h\" should be after inclusion pg_collation.h\r\n> #include \"catalog/pg_authid_d.h\"\r\n> +#include \"catalog/pg_control.h\"\r\n> #include \"catalog/pg_collation.h\"\r\n\r\nMoved.\r\n\r\n> 3) This spurious addition line change might not be required in this patch:\r\n> --- a/src/bin/pg_upgrade/t/003_logical_replication_slots.pl\r\n> +++ b/src/bin/pg_upgrade/t/003_logical_replication_slots.pl\r\n> @@ -85,11 +85,39 @@ $old_node->safe_psql(\r\n> ]);\r\n> \r\n> my $result = $old_node->safe_psql('postgres',\r\n> - \"SELECT count (*) FROM\r\n> pg_logical_slot_get_changes('test_slot', NULL, NULL)\"\r\n> + \"SELECT count(*) FROM\r\n> pg_logical_slot_peek_changes('test_slot', NULL, NULL)\"\r\n> );\r\n> +\r\n> is($result, qq(12), 'ensure WALs are not consumed yet');\r\n> $old_node->stop;\r\n\r\nI removed the line.\r\nIn the first place, what I wanted to check here was that pg_upgrade failed because\r\nWALs were not consumed. So if pg_logical_slot_get_changes() was called here, all\r\nof WALs were consumed here and the subsequent command was sucseeded. This was not\r\nhappy for us and that's why changed to pg_logical_slot_peek_changes().\r\nBut after considering more, I thought that calling the function was not the mandatory\r\nbecause no one needed the output.So removed.\r\n\r\n> 4) This inclusion \"#include \"access/xlogrecord.h\" is not required:\r\n> #include \"postgres_fe.h\"\r\n> \r\n> +#include \"access/xlogrecord.h\"\r\n> +#include \"access/xlog_internal.h\"\r\n> #include \"catalog/pg_authid_d.h\"\r\n\r\nRemoved.\r\n\r\n> 5)\"thepublisher's\" should be \"the publisher's\"\r\n> When a live check is requested, there is a possibility of additional changes\r\n> occurring, which may cause the current WAL position to exceed the\r\n> confirmed_flush_lsn\r\n> of the slot. As a result, we check the confirmed_flush_lsn of each logical slot\r\n> instead. This is sufficient as all the WAL records will be sent during\r\n> thepublisher's\r\n> shutdown.\r\n\r\nFixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 8 Jun 2023 03:54:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 4:00 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > Sorry for the delay, I didn't had time to come back to it until this afternoon.\n>\n> No issues, everyone is busy:-).\n>\n> > I don't think that your analysis is correct. Slots are guaranteed to be\n> > stopped after all the normal backends have been stopped, exactly to avoid such\n> > extraneous records.\n> >\n> > What is happening here is that the slot's confirmed_flush_lsn is properly\n> > updated in memory and ends up being the same as the current LSN before the\n> > shutdown. But as it's a logical slot and those records aren't decoded, the\n> > slot isn't marked as dirty and therefore isn't saved to disk. You don't see\n> > that behavior when doing a manual checkpoint before (per your script comment),\n> > as in that case the checkpoint also tries to save the slot to disk but then\n> > finds a slot that was marked as dirty and therefore saves it.\n> >\n\nHere, why the behavior is different for manual and non-manual checkpoint?\n\n> > In your script's scenario, when you restart the server the previous slot data\n> > is restored and the confirmed_flush_lsn goes backward, which explains those\n> > extraneous records.\n>\n> So you meant to say that the key point was that some records which are not sent\n> to subscriber do not mark slots as dirty, hence the updated confirmed_flush was\n> not written into slot file. Is it right? LogicalConfirmReceivedLocation() is called\n> by walsender when the process gets reply from worker process, so your analysis\n> seems correct.\n>\n\nCan you please explain what led to updating the confirmed_flush in\nmemory but not in the disk? BTW, have we ensured that discarding the\nadditional records are already sent to the subscriber, if so, why for\nthose records confirmed_flush LSN is not progressed?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 28 Jun 2023 16:23:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for giving comments!\r\n\r\n> > > Sorry for the delay, I didn't had time to come back to it until this afternoon.\r\n> >\r\n> > No issues, everyone is busy:-).\r\n> >\r\n> > > I don't think that your analysis is correct. Slots are guaranteed to be\r\n> > > stopped after all the normal backends have been stopped, exactly to avoid\r\n> such\r\n> > > extraneous records.\r\n> > >\r\n> > > What is happening here is that the slot's confirmed_flush_lsn is properly\r\n> > > updated in memory and ends up being the same as the current LSN before the\r\n> > > shutdown. But as it's a logical slot and those records aren't decoded, the\r\n> > > slot isn't marked as dirty and therefore isn't saved to disk. You don't see\r\n> > > that behavior when doing a manual checkpoint before (per your script\r\n> comment),\r\n> > > as in that case the checkpoint also tries to save the slot to disk but then\r\n> > > finds a slot that was marked as dirty and therefore saves it.\r\n> > >\r\n> \r\n> Here, why the behavior is different for manual and non-manual checkpoint?\r\n\r\nI have analyzed more, and concluded that there are no difference between manual\r\nand shutdown checkpoint.\r\n\r\nThe difference was whether the CHECKPOINT record has been decoded or not.\r\nThe overall workflow of this test was:\r\n\r\n1. do INSERT\r\n(2. do CHECKPOINT)\r\n(3. decode CHECKPOINT record)\r\n4. receive feedback message from standby\r\n5. do shutdown CHECKPOINT\r\n\r\nAt step 3, the walsender decoded that WAL and set candidate_xmin_lsn. The stucktrace was:\r\nstandby_decode()->SnapBuildProcessRunningXacts()->LogicalIncreaseXminForSlot().\r\n\r\nAt step 4, the confirmed_flush of the slot was updated, but ReplicationSlotSave()\r\nwas executed only when the slot->candidate_xmin_lsn had valid lsn. If step 2 and\r\n3 are misssed, the dirty flag is not set and the change is still on the memory.\r\n\r\nFInally, the CHECKPOINT was executed at step 5. If step 2 and 3 are misssed and\r\nthe patch from Julien is not applied, the updated value will be discarded. This\r\nis what I observed. The patch forces to save the logical slot at the shutdown\r\ncheckpoint, so the confirmed_lsn is save to disk at step 5.\r\n\r\n> Can you please explain what led to updating the confirmed_flush in\r\n> memory but not in the disk?\r\n\r\nThe code-level workflow was said above. The slot info is updated only after\r\ndecoding CHECKPOINT. I'm not sure the initial motivation, but I suspect we wanted\r\nto reduce the number of writing to disk.\r\n\r\n> BTW, have we ensured that discarding the\r\n> additional records are already sent to the subscriber, if so, why for\r\n> those records confirmed_flush LSN is not progressed?\r\n\r\nIn this case, the apply worker request the LSN which is greater than confirmed_lsn\r\nvia START_REPLICATION. Therefore, according to CreateDecodingContext(), the\r\nwalsender sends from the appropriate records, doesn't it? I think discarding is\r\nnot happened on subscriber.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 30 Jun 2023 13:58:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 9:24 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n\nFew comments/questions\n====================\n1.\n+check_for_parameter_settings(ClusterInfo *new_cluster)\n{\n...\n+\n+ res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\n+ max_replication_slots = atoi(PQgetvalue(res, 0, 0));\n+\n+ if (max_replication_slots == 0)\n+ pg_fatal(\"max_replication_slots must be greater than 0\");\n...\n}\n\nWon't it be better to verify that the value of \"max_replication_slots\"\nis greater than the number of logical slots we are planning to copy\nfrom old on the new cluster? Similar to this, I thought whether we\nneed to check the value of max_wal_senders? But, I guess one can\nsimply decode from slots by using APIs, so not sure about that. What\ndo you think?\n\n2.\n+ /*\n+ * Dump logical replication slots if needed.\n+ *\n+ * XXX We cannot dump replication slots at the same time as the schema\n+ * dump because we need to separate the timing of restoring\n+ * replication slots and other objects. Replication slots, in\n+ * particular, should not be restored before executing the pg_resetwal\n+ * command because it will remove WALs that are required by the slots.\n+ */\n+ if (user_opts.include_logical_slots)\n\nCan you explain this point a bit more with some example scenarios?\nBasically, if we had sent all the WAL before the upgrade then why do\nwe need to worry about the timing of pg_resetwal?\n\n3. I see that you are trying to ensure that all the WAL has been\nconsumed for a slot except for shutdown_checkpoint in patch 0003 but\ndo we need to think of any interaction with restart_lsn\n(MyReplicationSlot->data.restart_lsn) which is the start point to read\nWAL for decoding by walsender?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Jul 2023 11:11:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san, I haven't looked at this thread for a very long time so\nto re-familiarize myself with it I read all the latest v16-0001 patch.\n\nHere are a number of minor review comments I noted in passing:\n\n======\nCommit message\n\n1.\nFor pg_dump this commit includes a new option called\n\"--logical-replication-slots-only\".\nThis option can be used to dump logical replication slots. When this option is\nspecified, the slot_name, plugin, and two_phase parameters are extracted from\npg_replication_slots. An SQL file is then generated which executes\npg_create_logical_replication_slot() with the extracted parameters.\n\n~\n\nThis part doesn't do the actual execution, so maybe slightly reword this.\n\nBEFORE\nAn SQL file is then generated which executes\npg_create_logical_replication_slot() with the extracted parameters.\n\nSUGGESTION\nAn SQL file that executes pg_create_logical_replication_slot() with\nthe extracted parameters is generated.\n\n~~~\n\n2.\nFor pg_upgrade, when '--include-logical-replication-slots' is\nspecified, it executes\npg_dump with the new \"--logical-replication-slots-only\" option and\nrestores from the\ndump. Note that we cannot dump replication slots at the same time as the schema\ndump because we need to separate the timing of restoring replication slots and\nother objects. Replication slots, in particular, should not be restored before\nexecuting the pg_resetwal command because it will remove WALs that are required\nby the slots.\n\n~~~\n\nMaybe \"restores from the dump\" can be described more?\n\nBEFORE\n...and restores from the dump.\n\nSUGGESTION\n...and restores the slots using the\npg_create_logical_replication_slots() statements that the dump\ngenerated (see above).\n\n======\nsrc/bin/pg_dump/pg_dump.c\n\n3. help\n\n+\n+ /*\n+ * The option --logical-replication-slots-only is used only by pg_upgrade\n+ * and should not be called by users, which is why it is not listed.\n+ */\n printf(_(\" --no-comments do not dump comments\\n\"));\n~\n\n/not listed./not exposed by the help./\n\n~~~\n\n4. getLogicalReplicationSlots\n\n+ /* Check whether we should dump or not */\n+ if (fout->remoteVersion < 160000)\n+ return;\n\nPG16 is already in beta. I think this should now be changed to 170000, right?\n\n======\nsrc/bin/pg_upgrade/check.c\n\n5. check_new_cluster\n\n+ /*\n+ * Do additional works if --include-logical-replication-slots is required.\n+ * These must be done before check_new_cluster_is_empty() because the\n+ * slot_arr attribute of the new_cluster will be checked in the function.\n+ */\n\nSUGGESTION (minor rewording/grammar)\nDo additional work if --include-logical-replication-slots was\nspecified. This must be done before check_new_cluster_is_empty()\nbecause the slot_arr attribute of the new_cluster will be checked in\nthat function.\n\n~~~\n\n6. 
check_new_cluster_is_empty\n\n+ /*\n+ * If --include-logical-replication-slots is required, check the\n+ * existence of slots.\n+ */\n+ if (user_opts.include_logical_slots)\n+ {\n+ LogicalSlotInfoArr *slot_arr = &new_cluster.dbarr.dbs[dbnum].slot_arr;\n+\n+ /* if nslots > 0, report just first entry and exit */\n+ if (slot_arr->nslots)\n+ pg_fatal(\"New cluster database \\\"%s\\\" is not empty: found logical\nreplication slot \\\"%s\\\"\",\n+ new_cluster.dbarr.dbs[dbnum].db_name,\n+ slot_arr->slots[0].slotname);\n+ }\n+\n\n6a.\nThere are a number of places in this function using\n\"new_cluster.dbarr.dbs[dbnum].XXX\"\n\nIt is OK but maybe it would be tidier to up-front assign a local\nvariable for this?\n\nDbInfo *pDbInfo = &new_cluster.dbarr.dbs[dbnum];\n\n~\n\n6b.\nThe above code adds an unnecessary blank line in the loop that was not\nthere previously.\n\n~~~\n\n7. check_for_parameter_settings\n\n+/*\n+ * Verify parameter settings for creating logical replication slots\n+ */\n+static void\n+check_for_parameter_settings(ClusterInfo *new_cluster)\n\n7a.\nI felt this might have some missing words so it was meant to say:\n\nSUGGESTION\nVerify the parameter settings necessary for creating logical replication slots.\n\n~\n\n7b.\nMaybe you can give this function a better name because there is no\nhint in this generic name that it has anything to do with replication\nslots.\n\n~~~\n\n8.\n+ /* --include-logical-replication-slots can be used since PG16. */\n+ if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1500)\n+ return;\n\nPG16 is already in beta, so the version number (1500) and the comment\nmentioning PG16 are outdated aren't they?\n\n======\nsrc/bin/pg_upgrade/info.c\n\n9.\n static void print_rel_infos(RelInfoArr *rel_arr);\n-\n+static void print_slot_infos(LogicalSlotInfoArr *slot_arr);\n\nThe removal of the existing blank line seems not a necessary part of this patch.\n\n~~~\n\n10. get_logical_slot_infos_per_db\n\n+ char query[QUERY_ALLOC];\n+\n+ query[0] = '\\0'; /* initialize query string to empty */\n+\n+ snprintf(query, sizeof(query),\n+ \"SELECT slot_name, plugin, two_phase \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE database = current_database() AND temporary = false \"\n+ \"AND wal_status IN ('reserved', 'extended');\");\n\nDoes the initial assignment query[0] = '\\0'; acheive anything? IIUC,\nthe next statement is simply going to overwrite that anyway.\n\n~~~\n\n11. free_db_and_rel_infos\n\n+\n+ /*\n+ * db_arr has an additional attribute, LogicalSlotInfoArr slot_arr,\n+ * but there is no need to free it. It has a valid member only when\n+ * the cluster had logical replication slots in the previous call.\n+ * However, in this case, a FATAL error is thrown, and we cannot reach\n+ * this point.\n+ */\n\nMaybe this comment can be reworded? For example, the meaning of \"in\nthe previous call\" is not very clear. What previous call?\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.c\n\n12. main\n\n+ /*\n+ * Create logical replication slots if requested.\n+ *\n+ * Note: This must be done after doing pg_resetwal command because the\n+ * command will remove required WALs.\n+ */\n+ if (user_opts.include_logical_slots)\n+ {\n+ start_postmaster(&new_cluster, true);\n+ create_logical_replication_slots();\n+ stop_postmaster(false);\n+ }\n\nIMO \"the command\" is a bit vague. It might be better to be explicit\nand say \"... 
because pg_resetwal would remove XXXXX...\"\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n13.\n+typedef struct\n+{\n+ LogicalSlotInfo *slots;\n+ int nslots;\n+} LogicalSlotInfoArr;\n+\n\nI assume you mimicked the RelInfoArr struct, but IMO it makes more\nsense for the field 'nslots' to come before the 'slots'.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 17 Jul 2023 19:24:41 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Jun 30, 2023 at 7:29 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> I have analyzed more, and concluded that there are no difference between manual\n> and shutdown checkpoint.\n>\n> The difference was whether the CHECKPOINT record has been decoded or not.\n> The overall workflow of this test was:\n>\n> 1. do INSERT\n> (2. do CHECKPOINT)\n> (3. decode CHECKPOINT record)\n> 4. receive feedback message from standby\n> 5. do shutdown CHECKPOINT\n>\n> At step 3, the walsender decoded that WAL and set candidate_xmin_lsn. The stucktrace was:\n> standby_decode()->SnapBuildProcessRunningXacts()->LogicalIncreaseXminForSlot().\n>\n> At step 4, the confirmed_flush of the slot was updated, but ReplicationSlotSave()\n> was executed only when the slot->candidate_xmin_lsn had valid lsn. If step 2 and\n> 3 are misssed, the dirty flag is not set and the change is still on the memory.\n>\n> FInally, the CHECKPOINT was executed at step 5. If step 2 and 3 are misssed and\n> the patch from Julien is not applied, the updated value will be discarded. This\n> is what I observed. The patch forces to save the logical slot at the shutdown\n> checkpoint, so the confirmed_lsn is save to disk at step 5.\n>\n\nI see your point but there are comments in walsender.c which indicates\nthat we also wait for step-5 to get replicated. See [1] and comments\natop walsender.c. If this is true then we don't need a special check\nas you have in patch 0003 or at least it doesn't seem to be required\nin all cases.\n\n[1] -\n/*\n* When SIGUSR2 arrives, we send any outstanding logs up to the\n* shutdown checkpoint record (i.e., the latest record), wait for\n* them to be replicated to the standby, and exit. ...\n*/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Jul 2023 18:19:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Jul 17, 2023 at 6:19 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Jun 30, 2023 at 7:29 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > I have analyzed more, and concluded that there are no difference between manual\n> > and shutdown checkpoint.\n> >\n> > The difference was whether the CHECKPOINT record has been decoded or not.\n> > The overall workflow of this test was:\n> >\n> > 1. do INSERT\n> > (2. do CHECKPOINT)\n> > (3. decode CHECKPOINT record)\n> > 4. receive feedback message from standby\n> > 5. do shutdown CHECKPOINT\n> >\n> > At step 3, the walsender decoded that WAL and set candidate_xmin_lsn. The stucktrace was:\n> > standby_decode()->SnapBuildProcessRunningXacts()->LogicalIncreaseXminForSlot().\n> >\n> > At step 4, the confirmed_flush of the slot was updated, but ReplicationSlotSave()\n> > was executed only when the slot->candidate_xmin_lsn had valid lsn. If step 2 and\n> > 3 are misssed, the dirty flag is not set and the change is still on the memory.\n> >\n> > FInally, the CHECKPOINT was executed at step 5. If step 2 and 3 are misssed and\n> > the patch from Julien is not applied, the updated value will be discarded. This\n> > is what I observed. The patch forces to save the logical slot at the shutdown\n> > checkpoint, so the confirmed_lsn is save to disk at step 5.\n> >\n>\n> I see your point but there are comments in walsender.c which indicates\n> that we also wait for step-5 to get replicated. See [1] and comments\n> atop walsender.c. If this is true then we don't need a special check\n> as you have in patch 0003 or at least it doesn't seem to be required\n> in all cases.\n>\n\nI have studied this a bit more and it seems that is true for physical\nwalsenders where we set the state of walsender as WALSNDSTATE_STOPPING\nin XLogSendPhysical, then the checkpointer finishes writing checkpoint\nrecord and then postmaster sends SIGUSR2 for walsender to exit. IIUC,\nthis whole logic of different stop states has been introduced in\ncommit c6c3334364 based on the discussion in the thread [1]. As per my\nunderstanding, logical walsenders don't seem to be waiting for\nshutdown checkpoint record and finishes before even we LOG that\nrecord. It seems that the behavior of logical walsenders is different\nfrom physical walsenders where we wait for them to send even the final\nshutdown checkpoint record before they finish. If so, then we won't be\nable to switchover to logical subscribers even in case of a clean\nshutdown. Am, I missing something?\n\n[1] - https://www.postgresql.org/message-id/CAHGQGwEsttg9P9LOOavoc9d6VB1zVmYgfBk%3DLjsk-UL9cEf-eA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Jul 2023 14:36:51 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> I have studied this a bit more and it seems that is true for physical\r\n> walsenders where we set the state of walsender as WALSNDSTATE_STOPPING\r\n> in XLogSendPhysical, then the checkpointer finishes writing checkpoint\r\n> record and then postmaster sends SIGUSR2 for walsender to exit. IIUC,\r\n> this whole logic of different stop states has been introduced in\r\n> commit c6c3334364 based on the discussion in the thread [1]. As per my\r\n> understanding, logical walsenders don't seem to be waiting for\r\n> shutdown checkpoint record and finishes before even we LOG that\r\n> record. It seems that the behavior of logical walsenders is different\r\n> from physical walsenders where we wait for them to send even the final\r\n> shutdown checkpoint record before they finish.\r\n\r\nYes, you are right. Physical walsenders wait exiting checkpointer, but logical\r\nones exit before checkpointer does. This is because logical walsender may generate\r\nWALs due by executing replication commands like START_REPLICATION and\r\nCREATE_REPLICATION_SLOT and they may be recorded at after the shutdown\r\ncheckpoint record. This leads PANIC.\r\n\r\n> If so, then we won't be\r\n> able to switchover to logical subscribers even in case of a clean\r\n> shutdown. Am, I missing something?\r\n> \r\n> [1] -\r\n> https://www.postgresql.org/message-id/CAHGQGwEsttg9P9LOOavoc9d6VB1zV\r\n> mYgfBk%3DLjsk-UL9cEf-eA%40mail.gmail.com\r\n\r\nBased on the above, we are considering that we delay the timing of shutdown for\r\nlogical walsenders. The preliminary workflow is:\r\n\r\n1. When logical walsenders receives siginal from checkpointer, it consumes all\r\n of WAL records, change its state into WALSNDSTATE_STOPPING, and stop doing\r\n anything. \r\n2. Then the checkpointer does the shutdown checkpoint\r\n3. After that postmaster sends signal to walsenders, same as current implementation.\r\n4. Finally logical walsenders process the shutdown checkpoint record and update the\r\n confirmed_lsn after the acknowledgement from subscriber. \r\n Note that logical walsenders don't have to send a shutdown checkpoint record\r\n to subscriber but following keep_alive will help us to increment the confirmed_lsn.\r\n5. All tasks are done, they exit.\r\n\r\nThis mechanism ensures that the confirmed_lsn of active slots is same as the current\r\nWAL location of old publisher, so that 0003 patch would become more simpler.\r\nWe would not have to calculate the acceptable difference anymore.\r\n\r\nOne thing we must consider is that any WALs must not be generated while decoding\r\nthe shutdown checkpoint record. It causes the PANIC. IIUC the record leads\r\nSnapBuildSerializationPoint(), which just serializes snapbuild or restores from\r\nit, so the change may be acceptable. Thought?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 19 Jul 2023 10:22:14 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version patchset.\r\n\r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> For pg_dump this commit includes a new option called\r\n> \"--logical-replication-slots-only\".\r\n> This option can be used to dump logical replication slots. When this option is\r\n> specified, the slot_name, plugin, and two_phase parameters are extracted from\r\n> pg_replication_slots. An SQL file is then generated which executes\r\n> pg_create_logical_replication_slot() with the extracted parameters.\r\n> \r\n> ~\r\n> \r\n> This part doesn't do the actual execution, so maybe slightly reword this.\r\n> \r\n> BEFORE\r\n> An SQL file is then generated which executes\r\n> pg_create_logical_replication_slot() with the extracted parameters.\r\n> \r\n> SUGGESTION\r\n> An SQL file that executes pg_create_logical_replication_slot() with\r\n> the extracted parameters is generated.\r\n\r\nChanged.\r\n\r\n> 2.\r\n> For pg_upgrade, when '--include-logical-replication-slots' is\r\n> specified, it executes\r\n> pg_dump with the new \"--logical-replication-slots-only\" option and\r\n> restores from the\r\n> dump. Note that we cannot dump replication slots at the same time as the schema\r\n> dump because we need to separate the timing of restoring replication slots and\r\n> other objects. Replication slots, in particular, should not be restored before\r\n> executing the pg_resetwal command because it will remove WALs that are\r\n> required\r\n> by the slots.\r\n> \r\n> ~~~\r\n> \r\n> Maybe \"restores from the dump\" can be described more?\r\n> \r\n> BEFORE\r\n> ...and restores from the dump.\r\n> \r\n> SUGGESTION\r\n> ...and restores the slots using the\r\n> pg_create_logical_replication_slots() statements that the dump\r\n> generated (see above).\r\n\r\nFixed.\r\n\r\n> src/bin/pg_dump/pg_dump.c\r\n> \r\n> 3. help\r\n> \r\n> +\r\n> + /*\r\n> + * The option --logical-replication-slots-only is used only by pg_upgrade\r\n> + * and should not be called by users, which is why it is not listed.\r\n> + */\r\n> printf(_(\" --no-comments do not dump comments\\n\"));\r\n> ~\r\n> \r\n> /not listed./not exposed by the help./\r\n\r\nFixed.\r\n\r\n> 4. getLogicalReplicationSlots\r\n> \r\n> + /* Check whether we should dump or not */\r\n> + if (fout->remoteVersion < 160000)\r\n> + return;\r\n> \r\n> PG16 is already in beta. I think this should now be changed to 170000, right?\r\n\r\nThat's right, fixed.\r\n\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 5. check_new_cluster\r\n> \r\n> + /*\r\n> + * Do additional works if --include-logical-replication-slots is required.\r\n> + * These must be done before check_new_cluster_is_empty() because the\r\n> + * slot_arr attribute of the new_cluster will be checked in the function.\r\n> + */\r\n> \r\n> SUGGESTION (minor rewording/grammar)\r\n> Do additional work if --include-logical-replication-slots was\r\n> specified. This must be done before check_new_cluster_is_empty()\r\n> because the slot_arr attribute of the new_cluster will be checked in\r\n> that function.\r\n\r\nFixed.\r\n\r\n> 6. 
check_new_cluster_is_empty\r\n> \r\n> + /*\r\n> + * If --include-logical-replication-slots is required, check the\r\n> + * existence of slots.\r\n> + */\r\n> + if (user_opts.include_logical_slots)\r\n> + {\r\n> + LogicalSlotInfoArr *slot_arr = &new_cluster.dbarr.dbs[dbnum].slot_arr;\r\n> +\r\n> + /* if nslots > 0, report just first entry and exit */\r\n> + if (slot_arr->nslots)\r\n> + pg_fatal(\"New cluster database \\\"%s\\\" is not empty: found logical\r\n> replication slot \\\"%s\\\"\",\r\n> + new_cluster.dbarr.dbs[dbnum].db_name,\r\n> + slot_arr->slots[0].slotname);\r\n> + }\r\n> +\r\n> \r\n> 6a.\r\n> There are a number of places in this function using\r\n> \"new_cluster.dbarr.dbs[dbnum].XXX\"\r\n> \r\n> It is OK but maybe it would be tidier to up-front assign a local\r\n> variable for this?\r\n> \r\n> DbInfo *pDbInfo = &new_cluster.dbarr.dbs[dbnum];\r\n\r\nSeems better, fixed.\r\n\r\n> 6b.\r\n> The above code adds an unnecessary blank line in the loop that was not\r\n> there previously.\r\n\r\nRemoved.\r\n\r\n> 7. check_for_parameter_settings\r\n> \r\n> +/*\r\n> + * Verify parameter settings for creating logical replication slots\r\n> + */\r\n> +static void\r\n> +check_for_parameter_settings(ClusterInfo *new_cluster)\r\n> \r\n> 7a.\r\n> I felt this might have some missing words so it was meant to say:\r\n> \r\n> SUGGESTION\r\n> Verify the parameter settings necessary for creating logical replication slots.\r\n\r\nChanged.\r\n\r\n> 7b.\r\n> Maybe you can give this function a better name because there is no\r\n> hint in this generic name that it has anything to do with replication\r\n> slots.\r\n\r\nRenamed to check_for_logical_replication_slots(), how do you think?\r\n\r\n> 8.\r\n> + /* --include-logical-replication-slots can be used since PG16. */\r\n> + if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1500)\r\n> + return;\r\n> \r\n> PG16 is already in beta, so the version number (1500) and the comment\r\n> mentioning PG16 are outdated aren't they?\r\n\r\nRight, fixed.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 9.\r\n> static void print_rel_infos(RelInfoArr *rel_arr);\r\n> -\r\n> +static void print_slot_infos(LogicalSlotInfoArr *slot_arr);\r\n> \r\n> The removal of the existing blank line seems not a necessary part of this patch.\r\n\r\nAdded.\r\n\r\n> 10. get_logical_slot_infos_per_db\r\n> \r\n> + char query[QUERY_ALLOC];\r\n> +\r\n> + query[0] = '\\0'; /* initialize query string to empty */\r\n> +\r\n> + snprintf(query, sizeof(query),\r\n> + \"SELECT slot_name, plugin, two_phase \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE database = current_database() AND temporary = false \"\r\n> + \"AND wal_status IN ('reserved', 'extended');\");\r\n> \r\n> Does the initial assignment query[0] = '\\0'; acheive anything? IIUC,\r\n> the next statement is simply going to overwrite that anyway.\r\n\r\nThis was garbage of previous versions. Removed.\r\n\r\n> 11. free_db_and_rel_infos\r\n> \r\n> +\r\n> + /*\r\n> + * db_arr has an additional attribute, LogicalSlotInfoArr slot_arr,\r\n> + * but there is no need to free it. It has a valid member only when\r\n> + * the cluster had logical replication slots in the previous call.\r\n> + * However, in this case, a FATAL error is thrown, and we cannot reach\r\n> + * this point.\r\n> + */\r\n> \r\n> Maybe this comment can be reworded? For example, the meaning of \"in\r\n> the previous call\" is not very clear. What previous call?\r\n\r\nAfter considering more, I thought it should be more simpler. 
What I wanted to say\r\nwas that the slot_arr.slots did not have malloc'd memory. So I added Assert() for\r\nthe confirmation and changed comments. For that purpose pg_malloc0() is also\r\nintroduced in get_db_infos(). How do you think?\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.c\r\n> \r\n> 12. main\r\n> \r\n> + /*\r\n> + * Create logical replication slots if requested.\r\n> + *\r\n> + * Note: This must be done after doing pg_resetwal command because the\r\n> + * command will remove required WALs.\r\n> + */\r\n> + if (user_opts.include_logical_slots)\r\n> + {\r\n> + start_postmaster(&new_cluster, true);\r\n> + create_logical_replication_slots();\r\n> + stop_postmaster(false);\r\n> + }\r\n> \r\n> IMO \"the command\" is a bit vague. It might be better to be explicit\r\n> and say \"... because pg_resetwal would remove XXXXX...\"\r\n\r\nChanged.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 13.\r\n> +typedef struct\r\n> +{\r\n> + LogicalSlotInfo *slots;\r\n> + int nslots;\r\n> +} LogicalSlotInfoArr;\r\n> +\r\n> \r\n> I assume you mimicked the RelInfoArr struct, but IMO it makes more\r\n> sense for the field 'nslots' to come before the 'slots'.\r\n\r\nYeah, I followed that, but no strong opinion. Fixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 19 Jul 2023 14:02:16 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\n\nThanks for reviewing! The patch could be available at [1].\n\n> Few comments/questions\n> ====================\n> 1.\n> +check_for_parameter_settings(ClusterInfo *new_cluster)\n> {\n> ...\n> +\n> + res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\n> + max_replication_slots = atoi(PQgetvalue(res, 0, 0));\n> +\n> + if (max_replication_slots == 0)\n> + pg_fatal(\"max_replication_slots must be greater than 0\");\n> ...\n> }\n> \n> Won't it be better to verify that the value of \"max_replication_slots\"\n> is greater than the number of logical slots we are planning to copy\n> from old on the new cluster? Similar to this, I thought whether we\n> need to check the value of max_wal_senders? But, I guess one can\n> simply decode from slots by using APIs, so not sure about that. What\n> do you think?\n\nAgreed for verifying the max_replication_slots. There are several ways to add it,\nso I chose the simplest one - store the #slots to global variable and compare\nbetween it and max_replication_slots.\nAs for the max_wal_senders, I don't think it should be. As you said, there is a\npossibility user-defined background worker uses the slot and consumes WALs.\n\n> 2.\n> + /*\n> + * Dump logical replication slots if needed.\n> + *\n> + * XXX We cannot dump replication slots at the same time as the schema\n> + * dump because we need to separate the timing of restoring\n> + * replication slots and other objects. Replication slots, in\n> + * particular, should not be restored before executing the pg_resetwal\n> + * command because it will remove WALs that are required by the slots.\n> + */\n> + if (user_opts.include_logical_slots)\n> \n> Can you explain this point a bit more with some example scenarios?\n> Basically, if we had sent all the WAL before the upgrade then why do\n> we need to worry about the timing of pg_resetwal?\n\nOK, I can tell the example here. Should it be described on the source?\n\nAssuming that there is a valid logical replication slot as follows:\n\n```\npostgres=# select slot_name, plugin, restart_lsn, wal_status, two_phase from pg_replication_slots;\n slot_name | plugin | restart_lsn | wal_status | two_phase \n-----------+---------------+-------------+------------+-----------\n test | test_decoding | 0/15665A8 | reserved | f\n(1 row)\n\npostgres=# select * from pg_current_wal_lsn();\n pg_current_wal_lsn \n--------------------\n 0/15665E0\n(1 row)\n```\n\nAnd here let's execute the pg_resetwal to the pg server.\nThe existing wal segment file is purged and moved to next seg.\n\n```\n$ pg_ctl stop -D data_N1/\nwaiting for server to shut down.... done\nserver stopped\n$ pg_resetwal -l 000000010000000000000002 data_N1/\nWrite-ahead log reset\n$ pg_ctl start -D data_N1/ -l N1.log \nwaiting for server to start.... 
done\nserver started\n```\n\nAfter that the logical slot cannot move forward anymore because the required WALs\nare removed, whereas the wal_status is still \"reserved\".\n\n```\npostgres=# select slot_name, plugin, restart_lsn, wal_status, two_phase from pg_replication_slots;\n slot_name | plugin | restart_lsn | wal_status | two_phase \n-----------+---------------+-------------+------------+-----------\n test | test_decoding | 0/15665A8 | reserved | f\n(1 row)\n\npostgres=# select * from pg_current_wal_lsn();\n pg_current_wal_lsn \n--------------------\n 0/2028328\n(1 row)\n\npostgres=# select * from pg_logical_slot_get_changes('test', NULL, NULL);\nERROR: requested WAL segment pg_wal/000000010000000000000001 has already been removed\n```\n\npg_upgrade runs pg_dump and then pg_resetwal, so dumping slots must be done\nseparately to avoid the above error.\n\n> 3. I see that you are trying to ensure that all the WAL has been\n> consumed for a slot except for shutdown_checkpoint in patch 0003 but\n> do we need to think of any interaction with restart_lsn\n> (MyReplicationSlot->data.restart_lsn) which is the start point to read\n> WAL for decoding by walsender?\n\nCurrently I'm not sure it should be considered. Do you have something in mind?\n\ncandidate_restart_lsn for the slot is set only when XLOG_RUNNING_XACTS is decoded\n(LogicalIncreaseRestartDecodingForSlot()), and is set as restart_lsn later. So\nthere are few timings to update the value and we cannot determine the acceptable\nboundary.\n\nFurthermore, I think the restart point does not affect the result for replicating\nchanges on the subscriber because it is always behind confirmed_flush.\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866E9ED5B8C5AD7F7AC062FF539A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 19 Jul 2023 14:03:22 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 7:33 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 2.\n> > + /*\n> > + * Dump logical replication slots if needed.\n> > + *\n> > + * XXX We cannot dump replication slots at the same time as the schema\n> > + * dump because we need to separate the timing of restoring\n> > + * replication slots and other objects. Replication slots, in\n> > + * particular, should not be restored before executing the pg_resetwal\n> > + * command because it will remove WALs that are required by the slots.\n> > + */\n> > + if (user_opts.include_logical_slots)\n> >\n> > Can you explain this point a bit more with some example scenarios?\n> > Basically, if we had sent all the WAL before the upgrade then why do\n> > we need to worry about the timing of pg_resetwal?\n>\n> OK, I can tell the example here. Should it be described on the source?\n>\n> Assuming that there is a valid logical replication slot as follows:\n>\n> ```\n> postgres=# select slot_name, plugin, restart_lsn, wal_status, two_phase from pg_replication_slots;\n> slot_name | plugin | restart_lsn | wal_status | two_phase\n> -----------+---------------+-------------+------------+-----------\n> test | test_decoding | 0/15665A8 | reserved | f\n> (1 row)\n>\n> postgres=# select * from pg_current_wal_lsn();\n> pg_current_wal_lsn\n> --------------------\n> 0/15665E0\n> (1 row)\n> ```\n>\n> And here let's execute the pg_resetwal to the pg server.\n> The existing wal segment file is purged and moved to next seg.\n>\n> ```\n> $ pg_ctl stop -D data_N1/\n> waiting for server to shut down.... done\n> server stopped\n> $ pg_resetwal -l 000000010000000000000002 data_N1/\n> Write-ahead log reset\n> $ pg_ctl start -D data_N1/ -l N1.log\n> waiting for server to start.... done\n> server started\n> ```\n>\n> After that the logical slot cannot move foward anymore because the required WALs\n> are removed, whereas the wal_status is still \"reserved\".\n>\n> ```\n> postgres=# select slot_name, plugin, restart_lsn, wal_status, two_phase from pg_replication_slots;\n> slot_name | plugin | restart_lsn | wal_status | two_phase\n> -----------+---------------+-------------+------------+-----------\n> test | test_decoding | 0/15665A8 | reserved | f\n> (1 row)\n>\n> postgres=# select * from pg_current_wal_lsn();\n> pg_current_wal_lsn\n> --------------------\n> 0/2028328\n> (1 row)\n>\n> postgres=# select * from pg_logical_slot_get_changes('test', NULL, NULL);\n> ERROR: requested WAL segment pg_wal/000000010000000000000001 has already been removed\n> ```\n>\n> pg_upgrade runs pg_dump and then pg_resetwal, so dumping slots must be done\n> separately to avoid above error.\n>\n\nOkay, so the point is that if we create the slot in the new cluster\nbefore pg_resetwal then its restart_lsn will be set to the current LSN\nposition which will later be reset by pg_resetwal. So, we won't be\nable to use such a slot, right?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Jul 2023 11:18:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> Based on the above, we are considering that we delay the timing of shutdown for\r\n> logical walsenders. The preliminary workflow is:\r\n> \r\n> 1. When logical walsenders receives siginal from checkpointer, it consumes all\r\n> of WAL records, change its state into WALSNDSTATE_STOPPING, and stop\r\n> doing\r\n> anything.\r\n> 2. Then the checkpointer does the shutdown checkpoint\r\n> 3. After that postmaster sends signal to walsenders, same as current\r\n> implementation.\r\n> 4. Finally logical walsenders process the shutdown checkpoint record and update\r\n> the\r\n> confirmed_lsn after the acknowledgement from subscriber.\r\n> Note that logical walsenders don't have to send a shutdown checkpoint record\r\n> to subscriber but following keep_alive will help us to increment the\r\n> confirmed_lsn.\r\n> 5. All tasks are done, they exit.\r\n> \r\n> This mechanism ensures that the confirmed_lsn of active slots is same as the\r\n> current\r\n> WAL location of old publisher, so that 0003 patch would become more simpler.\r\n> We would not have to calculate the acceptable difference anymore.\r\n> \r\n> One thing we must consider is that any WALs must not be generated while\r\n> decoding\r\n> the shutdown checkpoint record. It causes the PANIC. IIUC the record leads\r\n> SnapBuildSerializationPoint(), which just serializes snapbuild or restores from\r\n> it, so the change may be acceptable. Thought?\r\n\r\nI've implemented the ideas from my previous proposal, PSA another patch set.\r\nPatch 0001 introduces the state WALSNDSTATE_STOPPING to logical walsenders. The\r\nworkflow remains largely the same as described in my previous post, with the\r\nfollowing additions:\r\n\r\n* A flag has been added to track whether all the WALs have been flushed. The\r\n logical walsender can only exit after the flag is set. This ensures that all\r\n WALs are flushed before the termination of the walsender.\r\n* Cumulative statistics are now forcibly written before changing the state.\r\n While the previous involved reporting stats upon process exit, the current approach\r\n must report earlier due to the checkpointer's termination timing. See comments\r\n in CheckpointerMain() and atop pgstat_before_server_shutdown().\r\n* At the end of processes, slots are now saved to disk.\r\n\r\n\r\nPatch 0002 adds --include-logical-replication-slots option to pg_upgrade,\r\nnot changed from previous set.\r\n\r\nPatch 0003 adds a check function, which becomes simpler. \r\nThe previous version calculated the \"acceptable\" difference between confirmed_lsn\r\nand the current WAL position. This was necessary because shutdown records could\r\nnot be sent to subscribers, creating a disparity in these values. However, this\r\napproach had drawbacks, such as needing adjustments if record sizes changed.\r\n\r\nNow, the record can be sent to subscribers, so the hacking is not needed anymore,\r\nat least in the context of logical replication. The consistency is now maintained\r\nby the logical walsenders, so slots created by the backend could not be.\r\nWe must consider what should be...\r\n\r\nHow do you think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 21 Jul 2023 07:30:14 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, 21 Jul 2023 at 13:00, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> > Based on the above, we are considering that we delay the timing of shutdown for\n> > logical walsenders. The preliminary workflow is:\n> >\n> > 1. When logical walsenders receives siginal from checkpointer, it consumes all\n> > of WAL records, change its state into WALSNDSTATE_STOPPING, and stop\n> > doing\n> > anything.\n> > 2. Then the checkpointer does the shutdown checkpoint\n> > 3. After that postmaster sends signal to walsenders, same as current\n> > implementation.\n> > 4. Finally logical walsenders process the shutdown checkpoint record and update\n> > the\n> > confirmed_lsn after the acknowledgement from subscriber.\n> > Note that logical walsenders don't have to send a shutdown checkpoint record\n> > to subscriber but following keep_alive will help us to increment the\n> > confirmed_lsn.\n> > 5. All tasks are done, they exit.\n> >\n> > This mechanism ensures that the confirmed_lsn of active slots is same as the\n> > current\n> > WAL location of old publisher, so that 0003 patch would become more simpler.\n> > We would not have to calculate the acceptable difference anymore.\n> >\n> > One thing we must consider is that any WALs must not be generated while\n> > decoding\n> > the shutdown checkpoint record. It causes the PANIC. IIUC the record leads\n> > SnapBuildSerializationPoint(), which just serializes snapbuild or restores from\n> > it, so the change may be acceptable. Thought?\n>\n> I've implemented the ideas from my previous proposal, PSA another patch set.\n> Patch 0001 introduces the state WALSNDSTATE_STOPPING to logical walsenders. The\n> workflow remains largely the same as described in my previous post, with the\n> following additions:\n>\n> * A flag has been added to track whether all the WALs have been flushed. The\n> logical walsender can only exit after the flag is set. This ensures that all\n> WALs are flushed before the termination of the walsender.\n> * Cumulative statistics are now forcibly written before changing the state.\n> While the previous involved reporting stats upon process exit, the current approach\n> must report earlier due to the checkpointer's termination timing. See comments\n> in CheckpointerMain() and atop pgstat_before_server_shutdown().\n> * At the end of processes, slots are now saved to disk.\n>\n>\n> Patch 0002 adds --include-logical-replication-slots option to pg_upgrade,\n> not changed from previous set.\n>\n> Patch 0003 adds a check function, which becomes simpler.\n> The previous version calculated the \"acceptable\" difference between confirmed_lsn\n> and the current WAL position. This was necessary because shutdown records could\n> not be sent to subscribers, creating a disparity in these values. However, this\n> approach had drawbacks, such as needing adjustments if record sizes changed.\n>\n> Now, the record can be sent to subscribers, so the hacking is not needed anymore,\n> at least in the context of logical replication. 
The consistency is now maintained\n> by the logical walsenders, so slots created by the backend could not be.\n> We must consider what should be...\n>\n> How do you think?\n\nHere is a patch which checks that there are no WAL records other than\nthe CHECKPOINT_SHUTDOWN WAL record to be consumed, based on the discussion\nfrom [1].\nPatches 0001 and 0002 are the same as the patches posted by Kuroda-san. Patch\n0003 exposes pg_get_wal_records_content to get the WAL records along\nwith the WAL record type between a start and end lsn. The pg_walinspect\ncontrib module already exposes a function for this requirement; I have\nmoved this functionality so that it is exposed from the backend. Patch 0004\nhas a slight change in the check function to verify that there are no\nrecords other than CHECKPOINT_SHUTDOWN to be consumed. The attached\npatch has the changes for the same.\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Kem-J5NM7GJCgyKP84pEN6RsG6JWo%3D6pSn1E%2BiexL1Fw%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Fri, 28 Jul 2023 17:29:06 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 5:48 PM vignesh C <[email protected]> wrote:\n>\n> Here is a patch which checks that there are no WAL records other than\n> CHECKPOINT_SHUTDOWN WAL record to be consumed based on the discussion\n> from [1].\n>\n\nFew comments:\n=============\n1. Do we really need 0001 patch after the latest change proposed by\nVignesh in the 0004 patch?\n\n2.\n+ if (dopt.logical_slots_only)\n+ {\n+ if (!dopt.binary_upgrade)\n+ pg_fatal(\"options --logical-replication-slots-only requires option\n--binary-upgrade\");\n+\n+ if (dopt.dataOnly)\n+ pg_fatal(\"options --logical-replication-slots-only and\n-a/--data-only cannot be used together\");\n+\n+ if (dopt.schemaOnly)\n+ pg_fatal(\"options --logical-replication-slots-only and\n-s/--schema-only cannot be used together\");\n\nCan you please explain why the patch imposes these restrictions? I\nguess the binary_upgrade is because you want this option to be used\nfor the upgrade. Do we want to avoid giving any other option with\nlogical_slots, if so, are the above checks sufficient and why?\n\n3.\n+ /*\n+ * Get replication slots.\n+ *\n+ * XXX: Which information must be extracted from old node? Currently three\n+ * attributes are extracted because they are used by\n+ * pg_create_logical_replication_slot().\n+ */\n+ appendPQExpBufferStr(query,\n+ \"SELECT slot_name, plugin, two_phase \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE database = current_database() AND temporary = false \"\n+ \"AND wal_status IN ('reserved', 'extended');\");\n\nWhy are we ignoring the slots that have wal status as WALAVAIL_REMOVED\nor WALAVAIL_UNRESERVED? I think the slots where wal status is\nWALAVAIL_REMOVED, the corresponding slots are invalidated at some\npoint. I think such slots can't be used for decoding but these will be\ndropped along with the subscription or when a user does it manually.\nSo, if we don't copy such slots after the upgrade then there could be\na problem in dropping the corresponding subscription. If we don't want\nto copy over such slots then we need to provide instructions on what\nusers should do in such cases. OTOH, if we want to copy over such\nslots then we need to find a way to invalidate such slots after copy.\nEither way, this needs more analysis.\n\n4.\n+ /*\n+ * Check that all logical replication slots have reached the current WAL\n+ * position.\n+ */\n+ res = executeQueryOrDie(conn,\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE (SELECT count(record_type) \"\n+ \" FROM pg_catalog.pg_get_wal_records_content(confirmed_flush_lsn,\npg_catalog.pg_current_wal_insert_lsn()) \"\n+ \" WHERE record_type != 'CHECKPOINT_SHUTDOWN') <> 0 \"\n+ \"AND temporary = false AND wal_status IN ('reserved', 'extended');\");\n\nI think this can unnecessarily lead to reading a lot of WAL data if\nthe confirmed_flush_lsn for a slot is too much behind. Can we think of\nimproving this by passing the number of records to read which in this\ncase should be 1?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Aug 2023 15:09:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On 8/1/23 5:39 AM, Amit Kapila wrote:\r\n> On Fri, Jul 28, 2023 at 5:48 PM vignesh C <[email protected]> wrote:\r\n>>\r\n>> Here is a patch which checks that there are no WAL records other than\r\n>> CHECKPOINT_SHUTDOWN WAL record to be consumed based on the discussion\r\n>> from [1].\r\n>>\r\n> \r\n> Few comments:\r\n> =============\r\n\r\n> 2.\r\n> + if (dopt.logical_slots_only)\r\n> + {\r\n> + if (!dopt.binary_upgrade)\r\n> + pg_fatal(\"options --logical-replication-slots-only requires option\r\n> --binary-upgrade\");\r\n> +\r\n> + if (dopt.dataOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -a/--data-only cannot be used together\");\r\n> +\r\n> + if (dopt.schemaOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -s/--schema-only cannot be used together\");\r\n> \r\n> Can you please explain why the patch imposes these restrictions? I\r\n> guess the binary_upgrade is because you want this option to be used\r\n> for the upgrade. Do we want to avoid giving any other option with\r\n> logical_slots, if so, are the above checks sufficient and why?\r\n\r\nCan I take this a step further on the user interface and ask why the \r\nflag would be \"--include-logical-replication-slots\" vs. being enabled by \r\ndefault?\r\n\r\nAre there reasons why we wouldn't enable this feature by default on \r\npg_upgrade, and instead (if need be) have a flag that would be \r\n\"--exclude-logical-replication-slots\"? Right now, not having the ability \r\nto run pg_upgrade with logical replication slots enabled on the \r\npublisher is a a very big pain point for users, so I would strongly \r\nrecommend against adding friction unless there is a very large challenge \r\nwith such an implementation.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 1 Aug 2023 22:16:35 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Jonathan,\r\n\r\nThank you for reading the thread!\r\n\r\n> Can I take this a step further on the user interface and ask why the\r\n> flag would be \"--include-logical-replication-slots\" vs. being enabled by\r\n> default?\r\n> \r\n> Are there reasons why we wouldn't enable this feature by default on\r\n> pg_upgrade, and instead (if need be) have a flag that would be\r\n> \"--exclude-logical-replication-slots\"? Right now, not having the ability\r\n> to run pg_upgrade with logical replication slots enabled on the\r\n> publisher is a a very big pain point for users, so I would strongly\r\n> recommend against adding friction unless there is a very large challenge\r\n> with such an implementation.\r\n\r\nThe main reason was that there were no major complaints till now. This decision\r\nfollowed the related discussion, for upgrading the subscriber [1]. As mentioned\r\nthere, current style might have more flexibility. Of course we could change that\r\nif there are more opinions around here.\r\n(I believe that this feature is useful for everyone, but changing the default may\r\naffect others...)\r\n\r\nAs for the implementation, I did not check so deeply but there is no challenge.\r\nWe cannot change the style pg_dump option due to the pg_resetwal ordering issue[2],\r\nbut it option is not visible from users. I will check deeper when we want to do...\r\n\r\nHow do you think?\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1KD-hZ3syruxJA6fK-JtSBzL6etkwToPuTmVkrCvT6ASw%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB58668C61A3C6EE82AE436C07F539A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 2 Aug 2023 03:31:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for making the PoC!\r\n\r\n> Here is a patch which checks that there are no WAL records other than\r\n> CHECKPOINT_SHUTDOWN WAL record to be consumed based on the discussion\r\n> from [1].\r\n\r\nBasically I agreed your approach. Thanks!\r\n\r\n> Patch 0001 and 0002 is same as the patch posted by Kuroda-san, Patch\r\n> 0003 exposes pg_get_wal_records_content to get the WAL records along\r\n> with the WAL record type between start and end lsn. pg_walinspect\r\n> contrib module already exposes a function for this requirement, I have\r\n> moved this functionality to be exposed from the backend. Patch 0004\r\n> has slight change in check function to check that there are no other\r\n> records other than CHECKPOINT_SHUTDOWN to be consumed. The attached\r\n> patch has the changes for the same.\r\n> Thoughts?\r\n> \r\n> [1] -\r\n> https://www.postgresql.org/message-id/CAA4eK1Kem-J5NM7GJCgyKP84pEN6\r\n> RsG6JWo%3D6pSn1E%2BiexL1Fw%40mail.gmail.com\r\n\r\nFew comments:\r\n\r\n* Per comment from Amit [1], I used pg_get_wal_record_info() instead of pg_get_wal_records_info().\r\nThis function extract a next available WAL record, which can avoid huge scan if\r\nthe confirmed_flush is much behind.\r\n* According to cfbot and my analysis, the 0001 cannot pass the test on macOS.\r\n So I revived Julien's patch [2] as 0002 once. AFAIS the 0001 is not so welcomed.\r\n\r\nNext patch will be available soon.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1LWKkoyy-p-SAT0JTWa%3D6kXiMd%3Da6ZcArY9eU4a3g4TZg%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/20230414061248.vdsxz2febjo3re6h%40jrouhaud\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 2 Aug 2023 08:13:14 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for giving comments! PSA new version patchset.\r\n\r\n> 1. Do we really need 0001 patch after the latest change proposed by\r\n> Vignesh in the 0004 patch?\r\n\r\nI removed 0001 patch and revived old patch which serializes slots at shutdown.\r\nThis is because the problem which slots are not serialized to disk still remain [1]\r\nand then confirmed_flush becomes behind, even if we implement the approach.\r\n\r\n> 2.\r\n> + if (dopt.logical_slots_only)\r\n> + {\r\n> + if (!dopt.binary_upgrade)\r\n> + pg_fatal(\"options --logical-replication-slots-only requires option\r\n> --binary-upgrade\");\r\n> +\r\n> + if (dopt.dataOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -a/--data-only cannot be used together\");\r\n> +\r\n> + if (dopt.schemaOnly)\r\n> + pg_fatal(\"options --logical-replication-slots-only and\r\n> -s/--schema-only cannot be used together\");\r\n> \r\n> Can you please explain why the patch imposes these restrictions? I\r\n> guess the binary_upgrade is because you want this option to be used\r\n> for the upgrade. Do we want to avoid giving any other option with\r\n> logical_slots, if so, are the above checks sufficient and why?\r\n\r\nRegarding the --binary-upgrade, the motivation is same as you expected. I covered\r\nup the --logical-replication-slots-only option from users, so it should not be\r\nused not for upgrade. Additionaly, this option is not shown in help and document.\r\n\r\nAs for -{data|schema}-only options, I removed restrictions.\r\nFirstly I set as excluded because it may be confused - as discussed at [2], slots\r\nmust be dumped after all the pg_resetwal is done and at that time all the definitions\r\nare already dumped. to avoid duplicated definitions, we must ensure only slots are\r\nwritten in the output file. I thought this requirement contradict descirptions of\r\nthese options (Dump only the A, not B).\r\nBut after considering more, I thought this might not be needed because it was not\r\nopened to users - no one would be confused by using both them.\r\n(Restriction for -c is also removed for the same motivation)\r\n\r\n> 3.\r\n> + /*\r\n> + * Get replication slots.\r\n> + *\r\n> + * XXX: Which information must be extracted from old node? Currently three\r\n> + * attributes are extracted because they are used by\r\n> + * pg_create_logical_replication_slot().\r\n> + */\r\n> + appendPQExpBufferStr(query,\r\n> + \"SELECT slot_name, plugin, two_phase \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE database = current_database() AND temporary = false \"\r\n> + \"AND wal_status IN ('reserved', 'extended');\");\r\n> \r\n> Why are we ignoring the slots that have wal status as WALAVAIL_REMOVED\r\n> or WALAVAIL_UNRESERVED? I think the slots where wal status is\r\n> WALAVAIL_REMOVED, the corresponding slots are invalidated at some\r\n> point. I think such slots can't be used for decoding but these will be\r\n> dropped along with the subscription or when a user does it manually.\r\n> So, if we don't copy such slots after the upgrade then there could be\r\n> a problem in dropping the corresponding subscription. If we don't want\r\n> to copy over such slots then we need to provide instructions on what\r\n> users should do in such cases. OTOH, if we want to copy over such\r\n> slots then we need to find a way to invalidate such slots after copy.\r\n> Either way, this needs more analysis.\r\n\r\nI considered again here. 
At least WALAVAIL_UNRESERVED should be supported because\r\nsuch a slot is still usable; its status can return to reserved or extended.\r\n\r\nAs for WALAVAIL_REMOVED, I don't think such slots should be included, so I added a description\r\nto the documentation.\r\n\r\nThis feature re-creates slots which have the same name/plugin as the old ones; it does not replicate\r\ntheir state. So if we copy them as-is, the slots become usable again. If subscribers refer to\r\nsuch a slot and then connect again, changes made while the slot was 'WALAVAIL_REMOVED'\r\nmay be lost.\r\n\r\nBased on the above, such slots would have to be copied as WALAVAIL_REMOVED, but as you said, we do\r\nnot have a way to control that. The status is calculated from restart_lsn,\r\nbut there is no function to modify it directly.\r\n\r\nOne approach is adding an SQL function which sets restart_lsn to an arbitrary value\r\n(or 0/0, invalid), but it seems dangerous.\r\n\r\n> 4.\r\n> + /*\r\n> + * Check that all logical replication slots have reached the current WAL\r\n> + * position.\r\n> + */\r\n> + res = executeQueryOrDie(conn,\r\n> + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE (SELECT count(record_type) \"\r\n> + \" FROM pg_catalog.pg_get_wal_records_content(confirmed_flush_lsn,\r\n> pg_catalog.pg_current_wal_insert_lsn()) \"\r\n> + \" WHERE record_type != 'CHECKPOINT_SHUTDOWN') <> 0 \"\r\n> + \"AND temporary = false AND wal_status IN ('reserved', 'extended');\");\r\n> \r\n> I think this can unnecessarily lead to reading a lot of WAL data if\r\n> the confirmed_flush_lsn for a slot is too much behind. Can we think of\r\n> improving this by passing the number of records to read which in this\r\n> case should be 1?\r\n\r\nI checked, and pg_wal_record_info() seemed to be usable for the purpose. I tried to\r\nmove the functionality to core.\r\n\r\nBut this function raises an ERROR when there is no valid record after the specified\r\nlsn. This means that pg_upgrade fails if the logical slots have caught up to the current\r\nWAL location. IIUC the DBA must do the following steps:\r\n\r\n1. shutdown old publisher\r\n2. disable the subscription once <- this is mandatory, otherwise the walsender may\r\n send the record during the upgrade and confirmed_lsn may point to the SHUTDOWN_CHECKPOINT\r\n3. do pg_upgrade <- pg_get_wal_record_content() may raise an ERROR if 2. was skipped\r\n4. change the connection string of the subscription\r\n5. enable the subscription again\r\n\r\nIf we think this is not robust, we must implement a similar function which does not raise an ERROR instead.\r\nWhat do you think?\r\n\r\n[1]: https://www.postgresql.org/message-id/20230414061248.vdsxz2febjo3re6h%40jrouhaud\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1KD-hZ3syruxJA6fK-JtSBzL6etkwToPuTmVkrCvT6ASw@mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 2 Aug 2023 08:13:38 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 1:43 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for giving comments! PSA new version patchset.\n>\n> > 3.\n> > + /*\n> > + * Get replication slots.\n> > + *\n> > + * XXX: Which information must be extracted from old node? Currently three\n> > + * attributes are extracted because they are used by\n> > + * pg_create_logical_replication_slot().\n> > + */\n> > + appendPQExpBufferStr(query,\n> > + \"SELECT slot_name, plugin, two_phase \"\n> > + \"FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE database = current_database() AND temporary = false \"\n> > + \"AND wal_status IN ('reserved', 'extended');\");\n> >\n> > Why are we ignoring the slots that have wal status as WALAVAIL_REMOVED\n> > or WALAVAIL_UNRESERVED? I think the slots where wal status is\n> > WALAVAIL_REMOVED, the corresponding slots are invalidated at some\n> > point. I think such slots can't be used for decoding but these will be\n> > dropped along with the subscription or when a user does it manually.\n> > So, if we don't copy such slots after the upgrade then there could be\n> > a problem in dropping the corresponding subscription. If we don't want\n> > to copy over such slots then we need to provide instructions on what\n> > users should do in such cases. OTOH, if we want to copy over such\n> > slots then we need to find a way to invalidate such slots after copy.\n> > Either way, this needs more analysis.\n>\n> I considered again here. At least WALAVAIL_UNRESERVED should be supported because\n> the slot is still usable. It can return reserved or extended.\n>\n> As for WALAVAIL_REMOVED, I don't think it should be so that I added a description\n> to the document.\n>\n> This feature re-create slots which have same name/plugins as old ones, not replicate\n> its state. So if we copy them as-is slots become usable again. If subscribers refer\n> the slot and then connect again at that time, changes between 'WALAVAIL_REMOVED'\n> may be lost.\n>\n> Based on above slots must be copied as WALAVAIL_REMOVED, but as you said, we do\n> not have a way to control that. the status is calculated by using restart_lsn,\n> but there are no function to modify directly.\n>\n> One approach is adding an SQL funciton which set restart_lsn to aritrary value\n> (or 0/0, invalid), but it seems dangerous.\n>\n\nI see your point related to WALAVAIL_REMOVED status of the slot but\ndid you test the scenario I have explained in my comment? Basically, I\nwant to know whether it can impact the user in some way. So, please\ncheck whether the corresponding subscriptions will be allowed to drop.\nYou can test it both before and after the upgrade.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Aug 2023 12:08:31 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> I see your point related to WALAVAIL_REMOVED status of the slot but\r\n> did you test the scenario I have explained in my comment? Basically, I\r\n> want to know whether it can impact the user in some way. So, please\r\n> check whether the corresponding subscriptions will be allowed to drop.\r\n> You can test it both before and after the upgrade.\r\n\r\nYeah, this is a real issue. I have tested and confirmed the expected things.\r\nEven if the status of the slot is 'lost', it may be needed for dropping\r\nsubscriptions properly.\r\n\r\n* before upgrading, the subscription which refers the lost slot could be dropped\r\n* after upgrading, the subscription could not be dropped as-is.\r\n users must ALTER SUBSCRIPTION sub SET (slot_name = NONE);\r\n\r\nFollowings are the stepped what I did:\r\n\r\n## Setup\r\n\r\n1. constructed a logical replication system\r\n2. disabled the subscriber once\r\n3. consumed many WALs so that the status of slot became 'lost'\r\n\r\n```\r\npublisher=# SELECT slot_name, wal_status FROM pg_replication_slots ;\r\nslot_name | wal_status \r\n-----------+------------\r\nsub | lost\r\n(1 row)\r\n```\r\n\r\n# testcase a - try to drop sub. before upgrading\r\n\r\na-1. enabled the subscriber again.\r\n At that time following messages are shown on subscriber log:\r\n```\r\nERROR: could not start WAL streaming: ERROR: can no longer get changes from replication slot \"sub\"\r\nDETAIL: This slot has been invalidated because it exceeded the maximum reserved size.\r\n```\r\n\r\na-2. did DROP SUBSCRIPTION ...\r\na-3. succeeded.\r\n\r\n```\r\nsubscriber=# DROP SUBSCRIPTION sub;\r\nNOTICE: dropped replication slot \"sub\" on publisher\r\nDROP SUBSCRIPTION\r\n```\r\n\r\n# testcase b - try to drop sub. after upgrading\r\n\r\nb-1. did pg_upgrade command\r\nb-2. enabled the subscriber. From that point an apply worker connected to new node...\r\nb-3. did DROP SUBSCRIPTION ...\r\nb-4. failed with the message:\r\n\r\n```\r\nsubscriber=# DROP SUBSCRIPTION sub;\r\nERROR: could not drop replication slot \"sub\" on publisher: ERROR: replication slot \"sub\" does not exist\r\n```\r\n\r\nThe workaround was to disassociate the slot, which was written in the document. \r\n\r\n```\r\nsubscriber =# ALTER SUBSCRIPTION sub DISABLE;\r\nALTER SUBSCRIPTION\r\nsubscriber =# ALTER SUBSCRIPTION sub SET (slot_name = NONE);\r\nALTER SUBSCRIPTION\r\nsubscriber =# DROP SUBSCRIPTION sub;\r\nDROP SUBSCRIPTION\r\n```\r\n\r\nPSA the script for emulating above tests.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 3 Aug 2023 09:28:33 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 1:43 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for giving comments! PSA new version patchset.\n>\n> > 1. Do we really need 0001 patch after the latest change proposed by\n> > Vignesh in the 0004 patch?\n>\n> I removed 0001 patch and revived old patch which serializes slots at shutdown.\n> This is because the problem which slots are not serialized to disk still remain [1]\n> and then confirmed_flush becomes behind, even if we implement the approach.\n>\n\nSo, IIUC, you are talking about a patch with the below commit message.\n[PATCH v18 2/4] Always persist to disk logical slots during a\n shutdown checkpoint.\n\nIt's entirely possible for a logical slot to have a confirmed_flush_lsn higher\nthan the last value saved on disk while not being marked as dirty. It's\ncurrently not a problem to lose that value during a clean shutdown / restart\ncycle, but a later patch adding support for pg_upgrade of publications and\nlogical slots will rely on that value being properly persisted to disk.\n\n\nAs per this commit message, this patch should be numbered as 1 but you\nhave placed it as 2 after the main upgrade patch?\n\n\n> > 2.\n> > + if (dopt.logical_slots_only)\n> > + {\n> > + if (!dopt.binary_upgrade)\n> > + pg_fatal(\"options --logical-replication-slots-only requires option\n> > --binary-upgrade\");\n> > +\n> > + if (dopt.dataOnly)\n> > + pg_fatal(\"options --logical-replication-slots-only and\n> > -a/--data-only cannot be used together\");\n> > +\n> > + if (dopt.schemaOnly)\n> > + pg_fatal(\"options --logical-replication-slots-only and\n> > -s/--schema-only cannot be used together\");\n> >\n> > Can you please explain why the patch imposes these restrictions? I\n> > guess the binary_upgrade is because you want this option to be used\n> > for the upgrade. Do we want to avoid giving any other option with\n> > logical_slots, if so, are the above checks sufficient and why?\n>\n> Regarding the --binary-upgrade, the motivation is same as you expected. I covered\n> up the --logical-replication-slots-only option from users, so it should not be\n> used not for upgrade. Additionaly, this option is not shown in help and document.\n>\n> As for -{data|schema}-only options, I removed restrictions.\n> Firstly I set as excluded because it may be confused - as discussed at [2], slots\n> must be dumped after all the pg_resetwal is done and at that time all the definitions\n> are already dumped. to avoid duplicated definitions, we must ensure only slots are\n> written in the output file. I thought this requirement contradict descirptions of\n> these options (Dump only the A, not B).\n> But after considering more, I thought this might not be needed because it was not\n> opened to users - no one would be confused by using both them.\n> (Restriction for -c is also removed for the same motivation)\n>\n\nI see inconsistent behavior here with the patch. If I use \"pg_dump.exe\n--schema-only --logical-replication-slots-only --binary-upgrade\npostgres\" then I get only a dump of slots without any schema. When I\nuse \"pg_dump.exe --data-only --logical-replication-slots-only\n--binary-upgrade postgres\" then neither table data nor slots. 
When I\nuse \"pg_dump.exe --create --logical-replication-slots-only\n--binary-upgrade postgres\" then it returns the error \"pg_dump: error:\nrole with OID 10 does not exist\".\n\nNow, I tried using --binary-upgrade with some other option like\n\"pg_dump.exe --create --binary-upgrade postgres\" and then I got a dump\nwith all required objects with support for binary-upgrade.\n\nI think your thought here is that this new option won't be usable\ndirectly with pg_dump but we should study whether we allow to support\nother options with --binary-upgrade for in-place upgrade utilities\nother than pg_upgrade.\n\n>\n> > 4.\n> > + /*\n> > + * Check that all logical replication slots have reached the current WAL\n> > + * position.\n> > + */\n> > + res = executeQueryOrDie(conn,\n> > + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE (SELECT count(record_type) \"\n> > + \" FROM pg_catalog.pg_get_wal_records_content(confirmed_flush_lsn,\n> > pg_catalog.pg_current_wal_insert_lsn()) \"\n> > + \" WHERE record_type != 'CHECKPOINT_SHUTDOWN') <> 0 \"\n> > + \"AND temporary = false AND wal_status IN ('reserved', 'extended');\");\n> >\n> > I think this can unnecessarily lead to reading a lot of WAL data if\n> > the confirmed_flush_lsn for a slot is too much behind. Can we think of\n> > improving this by passing the number of records to read which in this\n> > case should be 1?\n>\n> I checked and pg_wal_record_info() seemed to be used for the purpose. I tried to\n> move the functionality to core.\n>\n\nBut I don't see how it addresses my concern about reading too many\nrecords. If the confirmed_flush_lsn is too much behind, it will also\ntry to read all the remaining WAL for such slots.\n\n> But this function raise an ERROR when there is no valid record after the specified\n> lsn. This means that the pg_upgrade fails if logical slots has caught up the current\n> WAL location. IIUC DBA must do following steps:\n>\n> 1. shutdown old publisher\n> 2. disable the subscription once <- this is mandatory, otherwise the walsender may\n> send the record during the upgrade and confirmed_lsn may point the SHUTDOWN_CHECKPOINT\n> 3. do pg_upgrade <- pg_get_wal_record_content() may raise an ERROR if 2. was skipped\n>\n\nBut we have already seen that we write shutdown_checkpoint record only\nafter logical walsender is shut down. So, how above is possible?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Aug 2023 15:56:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 1:43 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 3.\n> > + /*\n> > + * Get replication slots.\n> > + *\n> > + * XXX: Which information must be extracted from old node? Currently three\n> > + * attributes are extracted because they are used by\n> > + * pg_create_logical_replication_slot().\n> > + */\n> > + appendPQExpBufferStr(query,\n> > + \"SELECT slot_name, plugin, two_phase \"\n> > + \"FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE database = current_database() AND temporary = false \"\n> > + \"AND wal_status IN ('reserved', 'extended');\");\n> >\n> > Why are we ignoring the slots that have wal status as WALAVAIL_REMOVED\n> > or WALAVAIL_UNRESERVED? I think the slots where wal status is\n> > WALAVAIL_REMOVED, the corresponding slots are invalidated at some\n> > point. I think such slots can't be used for decoding but these will be\n> > dropped along with the subscription or when a user does it manually.\n> > So, if we don't copy such slots after the upgrade then there could be\n> > a problem in dropping the corresponding subscription. If we don't want\n> > to copy over such slots then we need to provide instructions on what\n> > users should do in such cases. OTOH, if we want to copy over such\n> > slots then we need to find a way to invalidate such slots after copy.\n> > Either way, this needs more analysis.\n>\n> I considered again here. At least WALAVAIL_UNRESERVED should be supported because\n> the slot is still usable. It can return reserved or extended.\n>\n> As for WALAVAIL_REMOVED, I don't think it should be so that I added a description\n> to the document.\n>\n> This feature re-create slots which have same name/plugins as old ones, not replicate\n> its state. So if we copy them as-is slots become usable again. If subscribers refer\n> the slot and then connect again at that time, changes between 'WALAVAIL_REMOVED'\n> may be lost.\n>\n> Based on above slots must be copied as WALAVAIL_REMOVED, but as you said, we do\n> not have a way to control that. the status is calculated by using restart_lsn,\n> but there are no function to modify directly.\n>\n> One approach is adding an SQL funciton which set restart_lsn to aritrary value\n> (or 0/0, invalid), but it seems dangerous.\n>\n\nSo, we have three options here (a) As you have done in the patch,\ndocument this limitation and request user to perform some manual steps\nto drop the subscription; (b) don't allow upgrade to proceed if there\nare invalid slots in the old cluster; (c) provide a new function like\npg_copy_logical_replication_slot_contents() where we copy the required\ncontents like invalid status(ReplicationSlotInvalidationCause), etc.\n\nPersonally, I would prefer (b) because it will minimize the steps\nrequired to perform by the user after the upgrade and looks cleaner\nsolution.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Aug 2023 16:29:45 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> So, we have three options here (a) As you have done in the patch,\r\n> document this limitation and request user to perform some manual steps\r\n> to drop the subscription; (b) don't allow upgrade to proceed if there\r\n> are invalid slots in the old cluster; (c) provide a new function like\r\n> pg_copy_logical_replication_slot_contents() where we copy the required\r\n> contents like invalid status(ReplicationSlotInvalidationCause), etc.\r\n> \r\n> Personally, I would prefer (b) because it will minimize the steps\r\n> required to perform by the user after the upgrade and looks cleaner\r\n> solution.\r\n> \r\n> Thoughts?\r\n\r\nThanks for suggestion. I agreed (b) was better because it did not endanger users\r\nfor data lost. I implemented locally and worked well, so I'm planning to adopt\r\nthe idea in next version, if no objections.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 4 Aug 2023 12:54:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 5:13 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 4.\n> > + /*\n> > + * Check that all logical replication slots have reached the current WAL\n> > + * position.\n> > + */\n> > + res = executeQueryOrDie(conn,\n> > + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE (SELECT count(record_type) \"\n> > + \" FROM pg_catalog.pg_get_wal_records_content(confirmed_flush_lsn,\n> > pg_catalog.pg_current_wal_insert_lsn()) \"\n> > + \" WHERE record_type != 'CHECKPOINT_SHUTDOWN') <> 0 \"\n> > + \"AND temporary = false AND wal_status IN ('reserved', 'extended');\");\n> >\n> > I think this can unnecessarily lead to reading a lot of WAL data if\n> > the confirmed_flush_lsn for a slot is too much behind. Can we think of\n> > improving this by passing the number of records to read which in this\n> > case should be 1?\n>\n> I checked and pg_wal_record_info() seemed to be used for the purpose. I tried to\n> move the functionality to core.\n\nIIUC the above query checks if the WAL record written at the slot's\nconfirmed_flush_lsn is a CHECKPOINT_SHUTDOWN, but there is no check if\nthis WAL record is the latest record. Therefore, I think it's quite\npossible that slot's confirmed_flush_lsn points to previous\nCHECKPOINT_SHUTDOWN, for example, in cases where the subscription was\ndisabled after the publisher shut down and then some changes are made\non the publisher. We might want to add that check too but it would not\nwork. Because some WAL records could be written (e.g., by autovacuums)\nduring pg_upgrade before checking the slot's confirmed_flush_lsn.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 6 Aug 2023 21:31:36 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sun, Aug 6, 2023 at 6:02 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Aug 2, 2023 at 5:13 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > 4.\n> > > + /*\n> > > + * Check that all logical replication slots have reached the current WAL\n> > > + * position.\n> > > + */\n> > > + res = executeQueryOrDie(conn,\n> > > + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n> > > + \"WHERE (SELECT count(record_type) \"\n> > > + \" FROM pg_catalog.pg_get_wal_records_content(confirmed_flush_lsn,\n> > > pg_catalog.pg_current_wal_insert_lsn()) \"\n> > > + \" WHERE record_type != 'CHECKPOINT_SHUTDOWN') <> 0 \"\n> > > + \"AND temporary = false AND wal_status IN ('reserved', 'extended');\");\n> > >\n> > > I think this can unnecessarily lead to reading a lot of WAL data if\n> > > the confirmed_flush_lsn for a slot is too much behind. Can we think of\n> > > improving this by passing the number of records to read which in this\n> > > case should be 1?\n> >\n> > I checked and pg_wal_record_info() seemed to be used for the purpose. I tried to\n> > move the functionality to core.\n>\n> IIUC the above query checks if the WAL record written at the slot's\n> confirmed_flush_lsn is a CHECKPOINT_SHUTDOWN, but there is no check if\n> this WAL record is the latest record.\n>\n\nYeah, I also think there should be some way to ensure this. How about\npassing the number of records to read to this API? Actually, that will\naddress my other concern as well where the current API can lead to\nreading an unbounded number of records if the confirmed_flush_lsn\nlocation is far behind the CHECKPOINT_SHUTDOWN. Do you have any better\nideas to address it?\n\n> Therefore, I think it's quite\n> possible that slot's confirmed_flush_lsn points to previous\n> CHECKPOINT_SHUTDOWN, for example, in cases where the subscription was\n> disabled after the publisher shut down and then some changes are made\n> on the publisher. We might want to add that check too but it would not\n> work. Because some WAL records could be written (e.g., by autovacuums)\n> during pg_upgrade before checking the slot's confirmed_flush_lsn.\n>\n\nI think autovacuum is not enabled during the upgrade. See comment \"Use\n-b to disable autovacuum.\" in start_postmaster(). However, I am not\nsure if there can't be any additional WAL from checkpointer or\nbgwriter. Checkpointer has a code that ensures that if there is no\nimportant WAL activity then it would be skipped. Similarly, bgwriter\nalso doesn't LOG xl_running_xacts unless there is an important\nactivity. I feel if there is a chance of any WAL activity during the\nupgrade, we need to either change the check to ensure such WAL records\nare expected or document the same in some way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Aug 2023 09:24:02 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 2, 2023 at 7:46 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> Can I take this a step further on the user interface and ask why the\n> flag would be \"--include-logical-replication-slots\" vs. being enabled by\n> default?\n>\n> Are there reasons why we wouldn't enable this feature by default on\n> pg_upgrade, and instead (if need be) have a flag that would be\n> \"--exclude-logical-replication-slots\"? Right now, not having the ability\n> to run pg_upgrade with logical replication slots enabled on the\n> publisher is a a very big pain point for users, so I would strongly\n> recommend against adding friction unless there is a very large challenge\n> with such an implementation.\n>\n\nThanks for acknowledging the need/importance of this feature. I also\ndon't see a need to have such a flag for pg_upgrade. The only reason\nwhy one might want to exclude slots is that they are not up to date\nw.r.t WAL being consumed. For example, one has not consumed all the\nWAL from manually created slots or say some subscription has been\ndisabled before shutdown. I guess in those cases we should give an\nerror to the user and ask to remove such slots before the upgrade\nbecause anyway, those won't be usable after the upgrade.\n\nHaving said that, I think we need a flag for pg_dump to dump the slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Aug 2023 11:00:13 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 07, 2023 at 09:24:02AM +0530, Amit Kapila wrote:\n>\n> I think autovacuum is not enabled during the upgrade. See comment \"Use\n> -b to disable autovacuum.\" in start_postmaster(). However, I am not\n> sure if there can't be any additional WAL from checkpointer or\n> bgwriter. Checkpointer has a code that ensures that if there is no\n> important WAL activity then it would be skipped. Similarly, bgwriter\n> also doesn't LOG xl_running_xacts unless there is an important\n> activity. I feel if there is a chance of any WAL activity during the\n> upgrade, we need to either change the check to ensure such WAL records\n> are expected or document the same in some way.\n\nUnless I'm missing something I don't see what prevents something to connect\nusing the replication protocol and issue any query or even create new\nreplication slots?\n\nNote also that as complained a few years ago nothing prevents a bgworker from\nspawning up during pg_upgrade and possibly corrupt the upgraded cluster if\nmultixid are assigned. If publications are preserved wouldn't it mean that\nsuch bgworkers could also lead to data loss?\n\n\n",
"msg_date": "Mon, 7 Aug 2023 13:59:31 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 11:29 AM Julien Rouhaud <[email protected]> wrote:\n>\n> On Mon, Aug 07, 2023 at 09:24:02AM +0530, Amit Kapila wrote:\n> >\n> > I think autovacuum is not enabled during the upgrade. See comment \"Use\n> > -b to disable autovacuum.\" in start_postmaster(). However, I am not\n> > sure if there can't be any additional WAL from checkpointer or\n> > bgwriter. Checkpointer has a code that ensures that if there is no\n> > important WAL activity then it would be skipped. Similarly, bgwriter\n> > also doesn't LOG xl_running_xacts unless there is an important\n> > activity. I feel if there is a chance of any WAL activity during the\n> > upgrade, we need to either change the check to ensure such WAL records\n> > are expected or document the same in some way.\n>\n> Unless I'm missing something I don't see what prevents something to connect\n> using the replication protocol and issue any query or even create new\n> replication slots?\n>\n\nI think the point is that if we have any slots where we have not\nconsumed the pending WAL (other than the expected like\nSHUTDOWN_CHECKPOINT) or if there are invalid slots then the upgrade\nwon't proceed and we will request user to remove such slots or ensure\nthat WAL is consumed by slots. So, I think in the case you mentioned,\nthe upgrade won't succeed.\n\n> Note also that as complained a few years ago nothing prevents a bgworker from\n> spawning up during pg_upgrade and possibly corrupt the upgraded cluster if\n> multixid are assigned. If publications are preserved wouldn't it mean that\n> such bgworkers could also lead to data loss?\n>\n\nIs it because such workers would write some WAL which slots may not\nprocess? If so, I think it is equally dangerous as other problems that\ncan arise due to such a worker. Do you think of any special handling\nhere?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Aug 2023 12:42:33 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 07, 2023 at 12:42:33PM +0530, Amit Kapila wrote:\n> On Mon, Aug 7, 2023 at 11:29 AM Julien Rouhaud <[email protected]> wrote:\n> >\n> > Unless I'm missing something I don't see what prevents something to connect\n> > using the replication protocol and issue any query or even create new\n> > replication slots?\n> >\n>\n> I think the point is that if we have any slots where we have not\n> consumed the pending WAL (other than the expected like\n> SHUTDOWN_CHECKPOINT) or if there are invalid slots then the upgrade\n> won't proceed and we will request user to remove such slots or ensure\n> that WAL is consumed by slots. So, I think in the case you mentioned,\n> the upgrade won't succeed.\n\nWhat if new slots are added while the old instance is started in the middle of\npg_upgrade, *after* the various checks are done?\n\n> > Note also that as complained a few years ago nothing prevents a bgworker from\n> > spawning up during pg_upgrade and possibly corrupt the upgraded cluster if\n> > multixid are assigned. If publications are preserved wouldn't it mean that\n> > such bgworkers could also lead to data loss?\n> >\n>\n> Is it because such workers would write some WAL which slots may not\n> process? If so, I think it is equally dangerous as other problems that\n> can arise due to such a worker. Do you think of any special handling\n> here?\n\nYes, and there were already multiple reports of multixact corruption due to\nbgworker activity during pg_upgrade (see\nhttps://www.postgresql.org/message-id/[email protected]\nfor instance). I think we should once and for all fix this whole class of\nproblem one way or another.\n\n\n",
"msg_date": "Mon, 7 Aug 2023 15:36:17 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 12:54 PM Amit Kapila <[email protected]> wrote:\n>\n> On Sun, Aug 6, 2023 at 6:02 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Aug 2, 2023 at 5:13 PM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > > 4.\n> > > > + /*\n> > > > + * Check that all logical replication slots have reached the current WAL\n> > > > + * position.\n> > > > + */\n> > > > + res = executeQueryOrDie(conn,\n> > > > + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n> > > > + \"WHERE (SELECT count(record_type) \"\n> > > > + \" FROM pg_catalog.pg_get_wal_records_content(confirmed_flush_lsn,\n> > > > pg_catalog.pg_current_wal_insert_lsn()) \"\n> > > > + \" WHERE record_type != 'CHECKPOINT_SHUTDOWN') <> 0 \"\n> > > > + \"AND temporary = false AND wal_status IN ('reserved', 'extended');\");\n> > > >\n> > > > I think this can unnecessarily lead to reading a lot of WAL data if\n> > > > the confirmed_flush_lsn for a slot is too much behind. Can we think of\n> > > > improving this by passing the number of records to read which in this\n> > > > case should be 1?\n> > >\n> > > I checked and pg_wal_record_info() seemed to be used for the purpose. I tried to\n> > > move the functionality to core.\n> >\n> > IIUC the above query checks if the WAL record written at the slot's\n> > confirmed_flush_lsn is a CHECKPOINT_SHUTDOWN, but there is no check if\n> > this WAL record is the latest record.\n> >\n>\n> Yeah, I also think there should be some way to ensure this. How about\n> passing the number of records to read to this API? Actually, that will\n> address my other concern as well where the current API can lead to\n> reading an unbounded number of records if the confirmed_flush_lsn\n> location is far behind the CHECKPOINT_SHUTDOWN. Do you have any better\n> ideas to address it?\n\nIt makes sense to me to limit the number of WAL records to read. But\nas I mentioned below, if there is a chance of any WAL activity during\nthe upgrade, I'm not sure what limit to set.\n\n>\n> > Therefore, I think it's quite\n> > possible that slot's confirmed_flush_lsn points to previous\n> > CHECKPOINT_SHUTDOWN, for example, in cases where the subscription was\n> > disabled after the publisher shut down and then some changes are made\n> > on the publisher. We might want to add that check too but it would not\n> > work. Because some WAL records could be written (e.g., by autovacuums)\n> > during pg_upgrade before checking the slot's confirmed_flush_lsn.\n> >\n>\n> I think autovacuum is not enabled during the upgrade. See comment \"Use\n> -b to disable autovacuum.\" in start_postmaster().\n\nRight, thanks.\n\n> However, I am not\n> sure if there can't be any additional WAL from checkpointer or\n> bgwriter. Checkpointer has a code that ensures that if there is no\n> important WAL activity then it would be skipped. Similarly, bgwriter\n> also doesn't LOG xl_running_xacts unless there is an important\n> activity.\n\nWAL records for hint bit updates could be generated even in upgrading mode?\n\n> I feel if there is a chance of any WAL activity during the\n> upgrade, we need to either change the check to ensure such WAL records\n> are expected or document the same in some way.\n\nYes, but how does it work with the above idea of limiting the number\nof WAL records to read? 
If XLOG_FPI_FOR_HINT can still be generated in\nthe upgrade mode, we cannot predict how many such records are\ngenerated after the latest CHECKPOINT_SHUTDOWN.\n\nI'm not really sure we should always perform the slot's\nconfirmed_flush_lsn check by default in the first place. With this\ncheck, the upgrade won't be able to proceed if there is any logical\nslot that is not used by logical replication (or something streaming\nthe changes using walsender), right? For example, if a user uses a\nprogram that periodically consumes the changes from the logical slot,\nthe slot would not be able to pass the check even if the user executed\npg_logical_slot_get_changes() just before shutdown. The backend\nprocess who consumes the changes is always terminated before the\nshutdown checkpoint. On the other hand, I think there are cases where\nthe user can ensure that no meaningful WAL records are generated after\nthe last pg_logical_slot_get_changes(). I'm concerned that this check\nmight make upgrading such cases cumbersome unnecessarily.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 7 Aug 2023 17:31:50 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 2:02 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Aug 7, 2023 at 12:54 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Sun, Aug 6, 2023 at 6:02 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > IIUC the above query checks if the WAL record written at the slot's\n> > > confirmed_flush_lsn is a CHECKPOINT_SHUTDOWN, but there is no check if\n> > > this WAL record is the latest record.\n> > >\n> >\n> > Yeah, I also think there should be some way to ensure this. How about\n> > passing the number of records to read to this API? Actually, that will\n> > address my other concern as well where the current API can lead to\n> > reading an unbounded number of records if the confirmed_flush_lsn\n> > location is far behind the CHECKPOINT_SHUTDOWN. Do you have any better\n> > ideas to address it?\n>\n> It makes sense to me to limit the number of WAL records to read. But\n> as I mentioned below, if there is a chance of any WAL activity during\n> the upgrade, I'm not sure what limit to set.\n>\n\nIn that case, we won't be able to pass the number of records. We need\nto check based on the type of records.\n\n>\n> > However, I am not\n> > sure if there can't be any additional WAL from checkpointer or\n> > bgwriter. Checkpointer has a code that ensures that if there is no\n> > important WAL activity then it would be skipped. Similarly, bgwriter\n> > also doesn't LOG xl_running_xacts unless there is an important\n> > activity.\n>\n> WAL records for hint bit updates could be generated even in upgrading mode?\n>\n\nDo you mean these records can be generated during reading catalog tables?\n\n> > I feel if there is a chance of any WAL activity during the\n> > upgrade, we need to either change the check to ensure such WAL records\n> > are expected or document the same in some way.\n>\n> Yes, but how does it work with the above idea of limiting the number\n> of WAL records to read? If XLOG_FPI_FOR_HINT can still be generated in\n> the upgrade mode, we cannot predict how many such records are\n> generated after the latest CHECKPOINT_SHUTDOWN.\n>\n\nRight, as said earlier, in that case, we need to rely on the type of records.\n\n> I'm not really sure we should always perform the slot's\n> confirmed_flush_lsn check by default in the first place. With this\n> check, the upgrade won't be able to proceed if there is any logical\n> slot that is not used by logical replication (or something streaming\n> the changes using walsender), right? For example, if a user uses a\n> program that periodically consumes the changes from the logical slot,\n> the slot would not be able to pass the check even if the user executed\n> pg_logical_slot_get_changes() just before shutdown. The backend\n> process who consumes the changes is always terminated before the\n> shutdown checkpoint. On the other hand, I think there are cases where\n> the user can ensure that no meaningful WAL records are generated after\n> the last pg_logical_slot_get_changes(). I'm concerned that this check\n> might make upgrading such cases cumbersome unnecessarily.\n>\n\nYou are right and I have mentioned the same case today in my response\nto Jonathan but do you have better ideas to deal with such slots than\nto give an ERROR?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Aug 2023 14:32:32 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 1:06 PM Julien Rouhaud <[email protected]> wrote:\n>\n> On Mon, Aug 07, 2023 at 12:42:33PM +0530, Amit Kapila wrote:\n> > On Mon, Aug 7, 2023 at 11:29 AM Julien Rouhaud <[email protected]> wrote:\n> > >\n> > > Unless I'm missing something I don't see what prevents something to connect\n> > > using the replication protocol and issue any query or even create new\n> > > replication slots?\n> > >\n> >\n> > I think the point is that if we have any slots where we have not\n> > consumed the pending WAL (other than the expected like\n> > SHUTDOWN_CHECKPOINT) or if there are invalid slots then the upgrade\n> > won't proceed and we will request user to remove such slots or ensure\n> > that WAL is consumed by slots. So, I think in the case you mentioned,\n> > the upgrade won't succeed.\n>\n> What if new slots are added while the old instance is started in the middle of\n> pg_upgrade, *after* the various checks are done?\n>\n\nThey won't be copied but I think that won't be any different than\nother objects like tables. Anyway, I have another idea which is to not\nallow creating slots during binary upgrade unless one specifically\nrequests it by having an API like binary_upgrade_allow_slot_create()\nsimilar to existing APIs binary_upgrade_*.\n\n> > > Note also that as complained a few years ago nothing prevents a bgworker from\n> > > spawning up during pg_upgrade and possibly corrupt the upgraded cluster if\n> > > multixid are assigned. If publications are preserved wouldn't it mean that\n> > > such bgworkers could also lead to data loss?\n> > >\n> >\n> > Is it because such workers would write some WAL which slots may not\n> > process? If so, I think it is equally dangerous as other problems that\n> > can arise due to such a worker. Do you think of any special handling\n> > here?\n>\n> Yes, and there were already multiple reports of multixact corruption due to\n> bgworker activity during pg_upgrade (see\n> https://www.postgresql.org/message-id/[email protected]\n> for instance). I think we should once and for all fix this whole class of\n> problem one way or another.\n>\n\nI don't object to doing something like we discussed in the thread you\nlinked but don't see the link with this work. Surely, the extra\nWAL/XIDs generated during the upgrade will cause data inconsistency\nwhich is no different after this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Aug 2023 15:46:13 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit, Julien,\r\n\r\n> > > >\r\n> > > > Unless I'm missing something I don't see what prevents something to\r\n> connect\r\n> > > > using the replication protocol and issue any query or even create new\r\n> > > > replication slots?\r\n> > > >\r\n> > >\r\n> > > I think the point is that if we have any slots where we have not\r\n> > > consumed the pending WAL (other than the expected like\r\n> > > SHUTDOWN_CHECKPOINT) or if there are invalid slots then the upgrade\r\n> > > won't proceed and we will request user to remove such slots or ensure\r\n> > > that WAL is consumed by slots. So, I think in the case you mentioned,\r\n> > > the upgrade won't succeed.\r\n> >\r\n> > What if new slots are added while the old instance is started in the middle of\r\n> > pg_upgrade, *after* the various checks are done?\r\n> >\r\n> \r\n> They won't be copied but I think that won't be any different than\r\n> other objects like tables. Anyway, I have another idea which is to not\r\n> allow creating slots during binary upgrade unless one specifically\r\n> requests it by having an API like binary_upgrade_allow_slot_create()\r\n> similar to existing APIs binary_upgrade_*.\r\n\r\nI confirmed the part and confirmed that objects created after the dump\r\nwere not copied to new node. PSA scripts to emulate my test.\r\n\r\n# tested steps\r\n\r\n-1. applied v18 patch set\r\n0. modified source to create objects during upgrade and install:\r\n\r\n```\r\n@@ -188,6 +188,9 @@ check_and_dump_old_cluster(bool live_check)\r\n if (!user_opts.check)\r\n generate_old_dump();\r\n \r\n+ printf(\"XXX: start to sleep\\n\");\r\n+ sleep(35);\r\n+\r\n```\r\n\r\n1. prepared a node which had a replication slot\r\n2. did pg_upgrade, the process will sleep 35 seconds during that\r\n3. connected to the in-upgrading node by the command:\r\n\r\n```\r\npsql \"host=`pwd` user=postgres port=50432 replication=database\"\r\n```\r\n\r\n4. created a table and replication slot. Note that for binary upgrade, it was very\r\n hard to create tables manually. For me, table \"bar\" and slot \"test\" were created.\r\n5. waited until the upgrade and boot new node.\r\n6. confirmed that created tables and slots were not found on new node.\r\n\r\n```\r\nnew_publisher=# \\d\r\nDid not find any relations.\r\n\r\nnew_publisher=# SELECT slot_name FROM pg_replication_slots WHERE slot_name = 'test';\r\n slot_name \r\n-----------\r\n(0 rows)\r\n```\r\n\r\nYou can execute test_01.sh first, and then execute test_02.sh while the first terminal is stuck.\r\n\r\n\r\nNote that such creations are theoretically occurred, but it is very rare.\r\nBy followings line in start_postmaster(), the TCP/IP connections are refused and\r\nonly the superuser can connect to the server.\r\n\r\n```\r\n#if !defined(WIN32)\r\n\t/* prevent TCP/IP connections, restrict socket access */\r\n\tstrcat(socket_string,\r\n\t\t \" -c listen_addresses='' -c unix_socket_permissions=0700\");\r\n\r\n\t/* Have a sockdir?\tTell the postmaster. 
*/\r\n\tif (cluster->sockdir)\r\n\t\tsnprintf(socket_string + strlen(socket_string),\r\n\t\t\t\t sizeof(socket_string) - strlen(socket_string),\r\n\t\t\t\t \" -c %s='%s'\",\r\n\t\t\t\t (GET_MAJOR_VERSION(cluster->major_version) <= 902) ?\r\n\t\t\t\t \"unix_socket_directory\" : \"unix_socket_directories\",\r\n\t\t\t\t cluster->sockdir);\r\n#endif\r\n```\r\n\r\nMoreover, the socket directory is set to the current directory of the caller, and the port number\r\nis also different from the setting written in postgresql.conf.\r\nI think there is little chance that replication slots are accidentally created\r\nduring the upgrade.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 7 Aug 2023 10:53:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 6:02 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 7, 2023 at 2:02 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Aug 7, 2023 at 12:54 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Sun, Aug 6, 2023 at 6:02 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > IIUC the above query checks if the WAL record written at the slot's\n> > > > confirmed_flush_lsn is a CHECKPOINT_SHUTDOWN, but there is no check if\n> > > > this WAL record is the latest record.\n> > > >\n> > >\n> > > Yeah, I also think there should be some way to ensure this. How about\n> > > passing the number of records to read to this API? Actually, that will\n> > > address my other concern as well where the current API can lead to\n> > > reading an unbounded number of records if the confirmed_flush_lsn\n> > > location is far behind the CHECKPOINT_SHUTDOWN. Do you have any better\n> > > ideas to address it?\n> >\n> > It makes sense to me to limit the number of WAL records to read. But\n> > as I mentioned below, if there is a chance of any WAL activity during\n> > the upgrade, I'm not sure what limit to set.\n> >\n>\n> In that case, we won't be able to pass the number of records. We need\n> to check based on the type of records.\n>\n> >\n> > > However, I am not\n> > > sure if there can't be any additional WAL from checkpointer or\n> > > bgwriter. Checkpointer has a code that ensures that if there is no\n> > > important WAL activity then it would be skipped. Similarly, bgwriter\n> > > also doesn't LOG xl_running_xacts unless there is an important\n> > > activity.\n> >\n> > WAL records for hint bit updates could be generated even in upgrading mode?\n> >\n>\n> Do you mean these records can be generated during reading catalog tables?\n\nYes.\n\n>\n> > > I feel if there is a chance of any WAL activity during the\n> > > upgrade, we need to either change the check to ensure such WAL records\n> > > are expected or document the same in some way.\n> >\n> > Yes, but how does it work with the above idea of limiting the number\n> > of WAL records to read? If XLOG_FPI_FOR_HINT can still be generated in\n> > the upgrade mode, we cannot predict how many such records are\n> > generated after the latest CHECKPOINT_SHUTDOWN.\n> >\n>\n> Right, as said earlier, in that case, we need to rely on the type of records.\n\nAnother idea would be that before starting the old cluster we check if\nthe slot's confirmed_flush_lsn in the slot state file matches the\nlatest checkpoint LSN got by pg_controlfile. We need another tool to\ndump the slot state file, though.\n\n>\n> > I'm not really sure we should always perform the slot's\n> > confirmed_flush_lsn check by default in the first place. With this\n> > check, the upgrade won't be able to proceed if there is any logical\n> > slot that is not used by logical replication (or something streaming\n> > the changes using walsender), right? For example, if a user uses a\n> > program that periodically consumes the changes from the logical slot,\n> > the slot would not be able to pass the check even if the user executed\n> > pg_logical_slot_get_changes() just before shutdown. The backend\n> > process who consumes the changes is always terminated before the\n> > shutdown checkpoint. On the other hand, I think there are cases where\n> > the user can ensure that no meaningful WAL records are generated after\n> > the last pg_logical_slot_get_changes(). 
I'm concerned that this check\n> > might make upgrading such cases cumbersome unnecessarily.\n> >\n>\n> You are right and I have mentioned the same case today in my response\n> to Jonathan but do you have better ideas to deal with such slots than\n> to give an ERROR?\n\nIt makes sense to me to give an ERROR for such slots but does it also\nmake sense to make the check optional?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 9 Aug 2023 11:30:45 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 9, 2023 at 8:01 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Aug 7, 2023 at 6:02 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Aug 7, 2023 at 2:02 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > WAL records for hint bit updates could be generated even in upgrading mode?\n> > >\n> >\n> > Do you mean these records can be generated during reading catalog tables?\n>\n> Yes.\n>\n\nBTW, Kuroda-San has verified and found that three types of records\n(including XLOG_FPI_FOR_HINT) can be generated by the system during\nthe upgrade. See email [1].\n\n> >\n> > > > I feel if there is a chance of any WAL activity during the\n> > > > upgrade, we need to either change the check to ensure such WAL records\n> > > > are expected or document the same in some way.\n> > >\n> > > Yes, but how does it work with the above idea of limiting the number\n> > > of WAL records to read? If XLOG_FPI_FOR_HINT can still be generated in\n> > > the upgrade mode, we cannot predict how many such records are\n> > > generated after the latest CHECKPOINT_SHUTDOWN.\n> > >\n> >\n> > Right, as said earlier, in that case, we need to rely on the type of records.\n>\n> Another idea would be that before starting the old cluster we check if\n> the slot's confirmed_flush_lsn in the slot state file matches the\n> latest checkpoint LSN got by pg_controlfile. We need another tool to\n> dump the slot state file, though.\n>\n\nI feel it would be a good idea to provide such a tool for users to\navoid getting errors during upgrade but I think the upgrade code still\nneeds to ensure that there are no WAL records between\nconfirm_flush_lsn and SHUTDOWN_CHECKPOINT than required. Or, do you\nwant to say that we don't do any verification check during the upgrade\nand let the data loss happens if the user didn't ensure that by\nrunning such a tool?\n\n> >\n> > > I'm not really sure we should always perform the slot's\n> > > confirmed_flush_lsn check by default in the first place. With this\n> > > check, the upgrade won't be able to proceed if there is any logical\n> > > slot that is not used by logical replication (or something streaming\n> > > the changes using walsender), right? For example, if a user uses a\n> > > program that periodically consumes the changes from the logical slot,\n> > > the slot would not be able to pass the check even if the user executed\n> > > pg_logical_slot_get_changes() just before shutdown. The backend\n> > > process who consumes the changes is always terminated before the\n> > > shutdown checkpoint. On the other hand, I think there are cases where\n> > > the user can ensure that no meaningful WAL records are generated after\n> > > the last pg_logical_slot_get_changes(). I'm concerned that this check\n> > > might make upgrading such cases cumbersome unnecessarily.\n> > >\n> >\n> > You are right and I have mentioned the same case today in my response\n> > to Jonathan but do you have better ideas to deal with such slots than\n> > to give an ERROR?\n>\n> It makes sense to me to give an ERROR for such slots but does it also\n> make sense to make the check optional?\n>\n\nWe can do that if we think so. We have two ways to make this check\noptional (a) have a switch like --include-logical-replication-slots as\nthe proposed patch has which means by default we won't try to upgrade\nslots; (b) have a switch like --exclude-logical-replication-slots as\nJonathan proposed which means we will exclude slots only if specified\nby user. 
Now, one thing to note is that we don't seem to have any\ninclude/exclude switch in the upgrade which I think indicates users by\ndefault prefer to upgrade everything. Now, even if we decide not to\ngive any switch initially but do it only if there is a user demand for\nit then also users will have a way to proceed with an upgrade which is\nby dropping such slots. Do you have any preference?\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB58660273EACEFC5BF256B133F50DA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Aug 2023 09:45:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 9, 2023 at 1:15 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Aug 9, 2023 at 8:01 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Aug 7, 2023 at 6:02 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Aug 7, 2023 at 2:02 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > WAL records for hint bit updates could be generated even in upgrading mode?\n> > > >\n> > >\n> > > Do you mean these records can be generated during reading catalog tables?\n> >\n> > Yes.\n> >\n>\n> BTW, Kuroda-San has verified and found that three types of records\n> (including XLOG_FPI_FOR_HINT) can be generated by the system during\n> the upgrade. See email [1].\n>\n> > >\n> > > > > I feel if there is a chance of any WAL activity during the\n> > > > > upgrade, we need to either change the check to ensure such WAL records\n> > > > > are expected or document the same in some way.\n> > > >\n> > > > Yes, but how does it work with the above idea of limiting the number\n> > > > of WAL records to read? If XLOG_FPI_FOR_HINT can still be generated in\n> > > > the upgrade mode, we cannot predict how many such records are\n> > > > generated after the latest CHECKPOINT_SHUTDOWN.\n> > > >\n> > >\n> > > Right, as said earlier, in that case, we need to rely on the type of records.\n> >\n> > Another idea would be that before starting the old cluster we check if\n> > the slot's confirmed_flush_lsn in the slot state file matches the\n> > latest checkpoint LSN got by pg_controlfile. We need another tool to\n> > dump the slot state file, though.\n> >\n>\n> I feel it would be a good idea to provide such a tool for users to\n> avoid getting errors during upgrade but I think the upgrade code still\n> needs to ensure that there are no WAL records between\n> confirm_flush_lsn and SHUTDOWN_CHECKPOINT than required. Or, do you\n> want to say that we don't do any verification check during the upgrade\n> and let the data loss happens if the user didn't ensure that by\n> running such a tool?\n\nI meant that if we can check the slot state file while the old cluster\nstops, we can ensure there are no WAL records between slot's\nconfirmed_fluhs_lsn (in the state file) and the latest checkpoint (in\nthe control file).\n\n>\n> > >\n> > > > I'm not really sure we should always perform the slot's\n> > > > confirmed_flush_lsn check by default in the first place. With this\n> > > > check, the upgrade won't be able to proceed if there is any logical\n> > > > slot that is not used by logical replication (or something streaming\n> > > > the changes using walsender), right? For example, if a user uses a\n> > > > program that periodically consumes the changes from the logical slot,\n> > > > the slot would not be able to pass the check even if the user executed\n> > > > pg_logical_slot_get_changes() just before shutdown. The backend\n> > > > process who consumes the changes is always terminated before the\n> > > > shutdown checkpoint. On the other hand, I think there are cases where\n> > > > the user can ensure that no meaningful WAL records are generated after\n> > > > the last pg_logical_slot_get_changes(). 
I'm concerned that this check\n> > > > might make upgrading such cases cumbersome unnecessarily.\n> > > >\n> > >\n> > > You are right and I have mentioned the same case today in my response\n> > > to Jonathan but do you have better ideas to deal with such slots than\n> > > to give an ERROR?\n> >\n> > It makes sense to me to give an ERROR for such slots but does it also\n> > make sense to make the check optional?\n> >\n>\n> We can do that if we think so. We have two ways to make this check\n> optional (a) have a switch like --include-logical-replication-slots as\n> the proposed patch has which means by default we won't try to upgrade\n> slots; (b) have a switch like --exclude-logical-replication-slots as\n> Jonathan proposed which means we will exclude slots only if specified\n> by user. Now, one thing to note is that we don't seem to have any\n> include/exclude switch in the upgrade which I think indicates users by\n> default prefer to upgrade everything. Now, even if we decide not to\n> give any switch initially but do it only if there is a user demand for\n> it then also users will have a way to proceed with an upgrade which is\n> by dropping such slots. Do you have any preference?\n\nTBH I'm not sure if there is a use case where the user wants to\nexclude replication slots during the upgrade. Including replication\nslots by default seems to be better to me, at least for now. I\ninitially thought asking for users to drop replication slots that\npossibly have not consumed all WAL records would not be a good idea,\nbut since we already do such things in check.c I now think it would\nnot be a problem. I guess it would be great if we can check WAL\nrecords between slots' confimed_flush_lsn and the latest LSN, and if\nthere are no meaningful WAL records there we can upgrade the\nreplication slots.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 10 Aug 2023 10:15:38 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 6:46 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Wed, Aug 9, 2023 at 1:15 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Aug 9, 2023 at 8:01 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > I feel it would be a good idea to provide such a tool for users to\n> > avoid getting errors during upgrade but I think the upgrade code still\n> > needs to ensure that there are no WAL records between\n> > confirm_flush_lsn and SHUTDOWN_CHECKPOINT than required. Or, do you\n> > want to say that we don't do any verification check during the upgrade\n> > and let the data loss happens if the user didn't ensure that by\n> > running such a tool?\n>\n> I meant that if we can check the slot state file while the old cluster\n> stops, we can ensure there are no WAL records between slot's\n> confirmed_fluhs_lsn (in the state file) and the latest checkpoint (in\n> the control file).\n>\n\nAre you suggesting doing this before we start the old cluster or after\nwe stop the old cluster? I was thinking about the pros and cons of\ndoing this check when the server is 'on' (along with other upgrade\nchecks something like the patch is doing now) versus when the server\nis 'off'. I think the advantage of doing it when the server is 'off'\n(after check_and_dump_old_cluster()) is that it will be ensured that\nthere is no extra WAL that could be generated during the upgrade and\nhas not been verified against confirmed_flush_lsn location. But OTOH,\nto retrieve slot information when the server is 'off', we need a\nseparate utility or probably a functionality for the same in\npg_upgrade and also some WAL reading stuff which sounds to me like a\nlarger change that may not be warranted here. I think anyway the extra\nWAL (if any got generated during the upgrade) won't be required after\nthe upgrade so not convinced to make such a check while the server is\n'off'. Are there reasons which make it better to do this while the old\ncluster is 'off'?\n\n> >\n> > We can do that if we think so. We have two ways to make this check\n> > optional (a) have a switch like --include-logical-replication-slots as\n> > the proposed patch has which means by default we won't try to upgrade\n> > slots; (b) have a switch like --exclude-logical-replication-slots as\n> > Jonathan proposed which means we will exclude slots only if specified\n> > by user. Now, one thing to note is that we don't seem to have any\n> > include/exclude switch in the upgrade which I think indicates users by\n> > default prefer to upgrade everything. Now, even if we decide not to\n> > give any switch initially but do it only if there is a user demand for\n> > it then also users will have a way to proceed with an upgrade which is\n> > by dropping such slots. Do you have any preference?\n>\n> TBH I'm not sure if there is a use case where the user wants to\n> exclude replication slots during the upgrade. Including replication\n> slots by default seems to be better to me, at least for now. I\n> initially thought asking for users to drop replication slots that\n> possibly have not consumed all WAL records would not be a good idea,\n> but since we already do such things in check.c I now think it would\n> not be a problem. I guess it would be great if we can check WAL\n> records between slots' confimed_flush_lsn and the latest LSN, and if\n> there are no meaningful WAL records there we can upgrade the\n> replication slots.\n>\n\nAgreed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Aug 2023 09:22:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 7, 2023 at 3:46 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 7, 2023 at 1:06 PM Julien Rouhaud <[email protected]> wrote:\n> >\n> > On Mon, Aug 07, 2023 at 12:42:33PM +0530, Amit Kapila wrote:\n> > > On Mon, Aug 7, 2023 at 11:29 AM Julien Rouhaud <[email protected]> wrote:\n> > > >\n> > > > Unless I'm missing something I don't see what prevents something to connect\n> > > > using the replication protocol and issue any query or even create new\n> > > > replication slots?\n> > > >\n> > >\n> > > I think the point is that if we have any slots where we have not\n> > > consumed the pending WAL (other than the expected like\n> > > SHUTDOWN_CHECKPOINT) or if there are invalid slots then the upgrade\n> > > won't proceed and we will request user to remove such slots or ensure\n> > > that WAL is consumed by slots. So, I think in the case you mentioned,\n> > > the upgrade won't succeed.\n> >\n> > What if new slots are added while the old instance is started in the middle of\n> > pg_upgrade, *after* the various checks are done?\n> >\n>\n> They won't be copied but I think that won't be any different than\n> other objects like tables. Anyway, I have another idea which is to not\n> allow creating slots during binary upgrade unless one specifically\n> requests it by having an API like binary_upgrade_allow_slot_create()\n> similar to existing APIs binary_upgrade_*.\n>\n\nSawada-San, Julien, and others, do you have any thoughts on the above point?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Aug 2023 10:56:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 2:27 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 7, 2023 at 3:46 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Aug 7, 2023 at 1:06 PM Julien Rouhaud <[email protected]> wrote:\n> > >\n> > > On Mon, Aug 07, 2023 at 12:42:33PM +0530, Amit Kapila wrote:\n> > > > On Mon, Aug 7, 2023 at 11:29 AM Julien Rouhaud <[email protected]> wrote:\n> > > > >\n> > > > > Unless I'm missing something I don't see what prevents something to connect\n> > > > > using the replication protocol and issue any query or even create new\n> > > > > replication slots?\n> > > > >\n> > > >\n> > > > I think the point is that if we have any slots where we have not\n> > > > consumed the pending WAL (other than the expected like\n> > > > SHUTDOWN_CHECKPOINT) or if there are invalid slots then the upgrade\n> > > > won't proceed and we will request user to remove such slots or ensure\n> > > > that WAL is consumed by slots. So, I think in the case you mentioned,\n> > > > the upgrade won't succeed.\n> > >\n> > > What if new slots are added while the old instance is started in the middle of\n> > > pg_upgrade, *after* the various checks are done?\n> > >\n> >\n> > They won't be copied but I think that won't be any different than\n> > other objects like tables. Anyway, I have another idea which is to not\n> > allow creating slots during binary upgrade unless one specifically\n> > requests it by having an API like binary_upgrade_allow_slot_create()\n> > similar to existing APIs binary_upgrade_*.\n> >\n>\n> Sawada-San, Julien, and others, do you have any thoughts on the above point?\n\nIIUC during the old cluster running in the middle of pg_upgrade it\ndoesn't accept TCP connections. I'm not sure we need to worry about\nthe case where someone in the same server attempts to create\nreplication slots during the upgrade. The same is true for other\nobjects, as Amit mentioned.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 10 Aug 2023 16:30:40 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 12:52 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 10, 2023 at 6:46 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Wed, Aug 9, 2023 at 1:15 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Aug 9, 2023 at 8:01 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > I feel it would be a good idea to provide such a tool for users to\n> > > avoid getting errors during upgrade but I think the upgrade code still\n> > > needs to ensure that there are no WAL records between\n> > > confirm_flush_lsn and SHUTDOWN_CHECKPOINT than required. Or, do you\n> > > want to say that we don't do any verification check during the upgrade\n> > > and let the data loss happens if the user didn't ensure that by\n> > > running such a tool?\n> >\n> > I meant that if we can check the slot state file while the old cluster\n> > stops, we can ensure there are no WAL records between slot's\n> > confirmed_fluhs_lsn (in the state file) and the latest checkpoint (in\n> > the control file).\n> >\n>\n> Are you suggesting doing this before we start the old cluster or after\n> we stop the old cluster? I was thinking about the pros and cons of\n> doing this check when the server is 'on' (along with other upgrade\n> checks something like the patch is doing now) versus when the server\n> is 'off'. I think the advantage of doing it when the server is 'off'\n> (after check_and_dump_old_cluster()) is that it will be ensured that\n> there is no extra WAL that could be generated during the upgrade and\n> has not been verified against confirmed_flush_lsn location. But OTOH,\n> to retrieve slot information when the server is 'off', we need a\n> separate utility or probably a functionality for the same in\n> pg_upgrade and also some WAL reading stuff which sounds to me like a\n> larger change that may not be warranted here. I think anyway the extra\n> WAL (if any got generated during the upgrade) won't be required after\n> the upgrade so not convinced to make such a check while the server is\n> 'off'. Are there reasons which make it better to do this while the old\n> cluster is 'off'?\n\nWhat I imagined is that we do this check before\ncheck_and_dump_old_cluster() while the server is 'off'. Reading the\nslot state file would be simple and I guess we would not need a tool\nor cli program for that. We need to expose RepliactionSlotOnDisk,\nthough. After reading the control file and the slots' state files we\ncheck if slot's confirmed_flush_lsn matches the latest checkpoint LSN\nin the control file (BTW maybe we can get slot name and plugin name\nhere instead of using pg_dump?). Extra WAL records could be generated\nonly after this check, so we wouldn't need to worry about that for\nslots for logical replication. As for non-logical replication slots,\nwe would need some WAL reading stuff, but I'm not sure we need it for\nthe first commit. Or another idea would be to allow users to mark\nreplication slots \"upgradable\" so that pg_upgrade skips the\nconfirmed_flush_lsn check.\n\nBTW this check would not be able to support live-check but I think\nit's not a problem as this check with a running server will never be\nable to pass.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 10 Aug 2023 22:37:04 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\nBased on recent discussions, I updated the patch set. I did not reply one by one\r\nbecause there are many posts, but thank you for giving many suggestion!\r\n\r\nFollowings shows what I changed.\r\n\r\n1.\r\nThis feature is now enabled by default. Instead \"--exclude-logical-replication-slots\"\r\nwas added. (Per suggestions like [1])\r\n\r\n2.\r\nPg_upgrade raises ERROR when some slots are 'WALAVAIL_REMOVED'. (Per discussion[2])\r\n\r\n3.\r\nSlots which are 'WALAVAIL_UNRESERVED' are dumped and restored. (Per consideration[3])\r\n\r\n4.\r\nCombination --logical-replication-slots-only and other --only options was\r\nprohibit again. (Per suggestion[4]) Currently --data-only and --schema-only\r\ncould not be used together, so I followed the same style. Additionally, it's not\r\neasy for user to predict the behavior if specifying many --only command.\r\n\r\n5. \r\nFixed some bugs related with combinations of options. E.g., v18 did not allow to\r\nuse \"--create\", but now it could use same time. This was because information\r\nof role did not get from node while doing slot dump.\r\n\r\n6.\r\nThe ordering of patches was changed. The patch \"Always persist to disk...\"\r\nbecame 0001. (Per suggestion [4])\r\n\r\n7.\r\nFunctions for checking were changed (per [5]). Currently WALs between\r\nconfirmed_lsn and current location is scanned and confirmed. The requirements\r\nare little hacky:\r\n\r\n* The first record after the confirmed_lsn must be SHUTDOWN_CHECKPOINT\r\n* Other records till current position must be either RUNNING_XACT,\r\n CHECKPOINT_ONLINE or XLOG_FPI_FOR_HINT.\r\n\r\nIn the checking function (validate_wal_record_types_after), WALs are read\r\nrepeatedly and confirmed its type. v18 required to change the version number\r\nfor pg_walinspect, it is not needed anymore.\r\n\r\n\r\n[1]: https://www.postgresql.org/message-id/ad83b9f2-ced3-c51c-342a-cc281ff562fc%40postgresql.org\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1%2B8btsYhNQvw6QJ4iTw1wFhkFXXABT%3DED1eHFvtekRanQ%40mail.gmail.com\r\n[3]: https://www.postgresql.org/message-id/TYAPR01MB5866FD3F7992A46D0457F0E6F50BA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[4]: https://www.postgresql.org/message-id/CAA4eK1%2BCD82Kssy%2BiqpETPKYUh9AmNORF%2B3iGfNXgxKxqL3T6g%40mail.gmail.com\r\n[5]: https://www.postgresql.org/message-id/CAD21AoC4D4wYTcLM8T-rAv%3DpO5kS6ffcVD1e7h4eFERT4%2BfwQQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 10 Aug 2023 15:02:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
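A rough SQL illustration of the record-type rule described in item 7 of the message above. This is only a sketch of the idea, not the patch's validate_wal_record_types_after() code: it assumes the pg_walinspect extension with its pg_get_wal_records_info() function and record_type output column, and it only checks that nothing unexpected appears after a logical slot's confirmed_flush_lsn, without verifying that the first record is the shutdown checkpoint.

```sql
-- Illustrative sketch only; assumes: CREATE EXTENSION pg_walinspect;
-- List WAL records after each logical slot's confirmed_flush_lsn whose type
-- falls outside the small set expected around shutdown / during the upgrade.
SELECT s.slot_name, w.start_lsn, w.record_type
FROM pg_replication_slots AS s,
     LATERAL pg_get_wal_records_info(s.confirmed_flush_lsn,
                                     pg_current_wal_insert_lsn()) AS w
WHERE s.slot_type = 'logical'
  AND w.record_type NOT IN ('CHECKPOINT_SHUTDOWN', 'CHECKPOINT_ONLINE',
                            'RUNNING_XACTS', 'FPI_FOR_HINT');
```

An empty result would mean every logical slot has already consumed all meaningful WAL.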
{
"msg_contents": "On Thu, Aug 10, 2023 at 10:37:04PM +0900, Masahiko Sawada wrote:\n> On Thu, Aug 10, 2023 at 12:52 PM Amit Kapila <[email protected]> wrote:\n> > Are you suggesting doing this before we start the old cluster or after\n> > we stop the old cluster? I was thinking about the pros and cons of\n> > doing this check when the server is 'on' (along with other upgrade\n> > checks something like the patch is doing now) versus when the server\n> > is 'off'. I think the advantage of doing it when the server is 'off'\n> > (after check_and_dump_old_cluster()) is that it will be ensured that\n> > there is no extra WAL that could be generated during the upgrade and\n> > has not been verified against confirmed_flush_lsn location. But OTOH,\n> > to retrieve slot information when the server is 'off', we need a\n> > separate utility or probably a functionality for the same in\n> > pg_upgrade and also some WAL reading stuff which sounds to me like a\n> > larger change that may not be warranted here. I think anyway the extra\n> > WAL (if any got generated during the upgrade) won't be required after\n> > the upgrade so not convinced to make such a check while the server is\n> > 'off'. Are there reasons which make it better to do this while the old\n> > cluster is 'off'?\n> \n> What I imagined is that we do this check before\n> check_and_dump_old_cluster() while the server is 'off'. Reading the\n> slot state file would be simple and I guess we would not need a tool\n> or cli program for that.\n\nAgreed.\n\n> BTW this check would not be able to support live-check but I think\n> it's not a problem as this check with a running server will never be\n> able to pass.\n\nAgreed.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Thu, 10 Aug 2023 21:45:38 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 10, 2023 at 04:30:40PM +0900, Masahiko Sawada wrote:\n> On Thu, Aug 10, 2023 at 2:27 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Sawada-San, Julien, and others, do you have any thoughts on the above point?\n>\n> IIUC during the old cluster running in the middle of pg_upgrade it\n> doesn't accept TCP connections. I'm not sure we need to worry about\n> the case where someone in the same server attempts to create\n> replication slots during the upgrade.\n\nAFAICS this is only true for non-Windows platform, so we would still need some\nextra safeguards on Windows. Having those on all platforms will probably be\nsimpler and won't hurt otherwise.\n\n> The same is true for other objects, as Amit mentioned.\n\nI disagree. As I mentioned before any module registered in\nshared_preload_libraries can spawn background workers which can perform any\nactivity. There were previous reports of corruption because of multi-xact\nbeing generated by such bgworkers during pg_upgrade, I'm pretty sure that there\nare some modules that create objects (automatic partitioning tools for\ninstance). It's also unclear to me what would happen if some writes are\nperformed by such module at various points of the pg_upgrade process. Couldn't\nthat lead to either data loss or broken slot (as it couldn't stream changes\nfrom older major version)?\n\n\n",
"msg_date": "Fri, 11 Aug 2023 13:13:48 +0800",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 7:07 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Aug 10, 2023 at 12:52 PM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > Are you suggesting doing this before we start the old cluster or after\n> > we stop the old cluster? I was thinking about the pros and cons of\n> > doing this check when the server is 'on' (along with other upgrade\n> > checks something like the patch is doing now) versus when the server\n> > is 'off'. I think the advantage of doing it when the server is 'off'\n> > (after check_and_dump_old_cluster()) is that it will be ensured that\n> > there is no extra WAL that could be generated during the upgrade and\n> > has not been verified against confirmed_flush_lsn location. But OTOH,\n> > to retrieve slot information when the server is 'off', we need a\n> > separate utility or probably a functionality for the same in\n> > pg_upgrade and also some WAL reading stuff which sounds to me like a\n> > larger change that may not be warranted here. I think anyway the extra\n> > WAL (if any got generated during the upgrade) won't be required after\n> > the upgrade so not convinced to make such a check while the server is\n> > 'off'. Are there reasons which make it better to do this while the old\n> > cluster is 'off'?\n>\n> What I imagined is that we do this check before\n> check_and_dump_old_cluster() while the server is 'off'. Reading the\n> slot state file would be simple and I guess we would not need a tool\n> or cli program for that. We need to expose RepliactionSlotOnDisk,\n> though.\n>\n\nWon't that require a lot of version-specific checks as across versions\nthe file format could be different? For the case of the control file,\nwe use version-specific pg_controldata (for the old cluster, the\ncorresponding version's pg_controldata) utility to read the old\nversion control file. I thought we need something similar here if we\nwant to do what you are suggesting.\n\n>\n> After reading the control file and the slots' state files we\n> check if slot's confirmed_flush_lsn matches the latest checkpoint LSN\n> in the control file (BTW maybe we can get slot name and plugin name\n> here instead of using pg_dump?).\n\nBut isn't the advantage of doing via pg_dump (in binary_mode) that we\nallow some outside core in-place upgrade tool to also use it if\nrequired? If we don't think that would be required then we can\nprobably use the info we retrieve it in pg_upgrade.\n\n>\n> the first commit. Or another idea would be to allow users to mark\n> replication slots \"upgradable\" so that pg_upgrade skips the\n> confirmed_flush_lsn check.\n>\n\nI guess for that we need to ask users to ensure that confirm_flush_lsn\nis up-to-date and then provide some slot-level API to mark the slots\nwith the required status. If so, that sounds a bit complicated for\nusers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 11 Aug 2023 10:46:31 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 10:43 AM Julien Rouhaud <[email protected]> wrote:\n>\n> On Thu, Aug 10, 2023 at 04:30:40PM +0900, Masahiko Sawada wrote:\n> > On Thu, Aug 10, 2023 at 2:27 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > Sawada-San, Julien, and others, do you have any thoughts on the above point?\n> >\n> > IIUC during the old cluster running in the middle of pg_upgrade it\n> > doesn't accept TCP connections. I'm not sure we need to worry about\n> > the case where someone in the same server attempts to create\n> > replication slots during the upgrade.\n>\n> AFAICS this is only true for non-Windows platform, so we would still need some\n> extra safeguards on Windows. Having those on all platforms will probably be\n> simpler and won't hurt otherwise.\n>\n> > The same is true for other objects, as Amit mentioned.\n>\n> I disagree. As I mentioned before any module registered in\n> shared_preload_libraries can spawn background workers which can perform any\n> activity. There were previous reports of corruption because of multi-xact\n> being generated by such bgworkers during pg_upgrade, I'm pretty sure that there\n> are some modules that create objects (automatic partitioning tools for\n> instance). It's also unclear to me what would happen if some writes are\n> performed by such module at various points of the pg_upgrade process. Couldn't\n> that lead to either data loss or broken slot (as it couldn't stream changes\n> from older major version)?\n>\n\nIt won't be any bad than what can happen to tables. If we know that\nsuch bgworkers can cause corruption if they do writes during the\nupgrade, I don't think it is the job of this patch to prevent the\nrelated scenarios. We can probably disallow the creation of new slots\nduring the binary upgrade but that also I am not sure. I guess it\nwould be better to document such hazards as a first step and then\nprobably write a patch to prevent WAL writes or something along those\nlines.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 11 Aug 2023 11:18:09 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 11:18:09AM +0530, Amit Kapila wrote:\n> On Fri, Aug 11, 2023 at 10:43 AM Julien Rouhaud <[email protected]> wrote:\n> > I disagree. As I mentioned before any module registered in\n> > shared_preload_libraries can spawn background workers which can perform any\n> > activity. There were previous reports of corruption because of multi-xact\n> > being generated by such bgworkers during pg_upgrade, I'm pretty sure that there\n> > are some modules that create objects (automatic partitioning tools for\n> > instance). It's also unclear to me what would happen if some writes are\n> > performed by such module at various points of the pg_upgrade process. Couldn't\n> > that lead to either data loss or broken slot (as it couldn't stream changes\n> > from older major version)?\n> \n> It won't be any bad than what can happen to tables. If we know that\n> such bgworkers can cause corruption if they do writes during the\n> upgrade, I don't think it is the job of this patch to prevent the\n> related scenarios. We can probably disallow the creation of new slots\n> during the binary upgrade but that also I am not sure. I guess it\n> would be better to document such hazards as a first step and then\n> probably write a patch to prevent WAL writes or something along those\n> lines.\n\nYes, if users are connecting to the clusters during pg_upgrade, we have\nmany more problems than slots.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 11 Aug 2023 14:03:47 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 10:46:31AM +0530, Amit Kapila wrote:\n> On Thu, Aug 10, 2023 at 7:07 PM Masahiko Sawada <[email protected]> wrote:\n> > What I imagined is that we do this check before\n> > check_and_dump_old_cluster() while the server is 'off'. Reading the\n> > slot state file would be simple and I guess we would not need a tool\n> > or cli program for that. We need to expose RepliactionSlotOnDisk,\n> > though.\n> \n> Won't that require a lot of version-specific checks as across versions\n> the file format could be different? For the case of the control file,\n> we use version-specific pg_controldata (for the old cluster, the\n> corresponding version's pg_controldata) utility to read the old\n> version control file. I thought we need something similar here if we\n> want to do what you are suggesting.\n\nYou mean the slot file format? We will need that complexity somewhere,\nso why not in pg_upgrade?\n\n> > After reading the control file and the slots' state files we\n> > check if slot's confirmed_flush_lsn matches the latest checkpoint LSN\n> > in the control file (BTW maybe we can get slot name and plugin name\n> > here instead of using pg_dump?).\n> \n> But isn't the advantage of doing via pg_dump (in binary_mode) that we\n> allow some outside core in-place upgrade tool to also use it if\n> required? If we don't think that would be required then we can\n> probably use the info we retrieve it in pg_upgrade.\n\nYou mean the code reading the slot file? I don't see the point of\nadding user complexity to enable some hypothetical external usage.\n\n> > the first commit. Or another idea would be to allow users to mark\n> > replication slots \"upgradable\" so that pg_upgrade skips the\n> > confirmed_flush_lsn check.\n> \n> I guess for that we need to ask users to ensure that confirm_flush_lsn\n> is up-to-date and then provide some slot-level API to mark the slots\n> with the required status. If so, that sounds a bit complicated for\n> users.\n\nAgreed, not worth it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 11 Aug 2023 14:08:39 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 11, 2023 at 11:38 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Fri, Aug 11, 2023 at 10:46:31AM +0530, Amit Kapila wrote:\n> > On Thu, Aug 10, 2023 at 7:07 PM Masahiko Sawada <[email protected]> wrote:\n> > > What I imagined is that we do this check before\n> > > check_and_dump_old_cluster() while the server is 'off'. Reading the\n> > > slot state file would be simple and I guess we would not need a tool\n> > > or cli program for that. We need to expose RepliactionSlotOnDisk,\n> > > though.\n> >\n> > Won't that require a lot of version-specific checks as across versions\n> > the file format could be different? For the case of the control file,\n> > we use version-specific pg_controldata (for the old cluster, the\n> > corresponding version's pg_controldata) utility to read the old\n> > version control file. I thought we need something similar here if we\n> > want to do what you are suggesting.\n>\n> You mean the slot file format?\n\nYes.\n\n>\n> We will need that complexity somewhere,\n> so why not in pg_upgrade?\n>\n\nI don't think we need the complexity of version-specific checks if we\ndo what we do in get_control_data(). Basically, invoke\nversion-specific pg_replslotdata to get version-specific slot\ninformation. There has been a proposal for a tool like that [1]. Do\nyou have something better in mind? If so, can you please explain the\nsame a bit more?\n\n> > > After reading the control file and the slots' state files we\n> > > check if slot's confirmed_flush_lsn matches the latest checkpoint LSN\n> > > in the control file (BTW maybe we can get slot name and plugin name\n> > > here instead of using pg_dump?).\n> >\n> > But isn't the advantage of doing via pg_dump (in binary_mode) that we\n> > allow some outside core in-place upgrade tool to also use it if\n> > required? If we don't think that would be required then we can\n> > probably use the info we retrieve it in pg_upgrade.\n>\n> You mean the code reading the slot file? I don't see the point of\n> adding user complexity to enable some hypothetical external usage.\n>\n\nIt is not just that we need a slot reading facility but rather mimic\nsomething like pg_get_replication_slots() where we have to know the\nwalstate (WALAVAIL_REMOVED, etc.) as well. I am not against it but am\nnot sure that we do it for any other object in the upgrade. Can you\nplease point me out if we have any such prior usage? Even if we don't\ndo it today, we can start doing now if that makes sense but it appears\nto me that we are accessing contents of data-dir/WAL by invoking some\nother utilities like pg_controldata, pg_resetwal, so something similar\nwould make sense here. Actually, what we do here also somewhat depends\non what we decide for the other point we are discussing above in the\nemail.\n\n[1] - https://www.postgresql.org/message-id/flat/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 12 Aug 2023 11:50:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sat, Aug 12, 2023 at 11:50:36AM +0530, Amit Kapila wrote:\n> > We will need that complexity somewhere,\n> > so why not in pg_upgrade?\n> >\n> \n> I don't think we need the complexity of version-specific checks if we\n> do what we do in get_control_data(). Basically, invoke\n> version-specific pg_replslotdata to get version-specific slot\n> information. There has been a proposal for a tool like that [1]. Do\n> you have something better in mind? If so, can you please explain the\n> same a bit more?\n\nYes, if you want to break it out into a separate tool and then have\npg_upgrade call/parse it like it calls/parses pg_controldata, that seems\nfine.\n\n> > > > After reading the control file and the slots' state files we\n> > > > check if slot's confirmed_flush_lsn matches the latest checkpoint LSN\n> > > > in the control file (BTW maybe we can get slot name and plugin name\n> > > > here instead of using pg_dump?).\n> > >\n> > > But isn't the advantage of doing via pg_dump (in binary_mode) that we\n> > > allow some outside core in-place upgrade tool to also use it if\n> > > required? If we don't think that would be required then we can\n> > > probably use the info we retrieve it in pg_upgrade.\n> >\n> > You mean the code reading the slot file? I don't see the point of\n> > adding user complexity to enable some hypothetical external usage.\n> \n> It is not just that we need a slot reading facility but rather mimic\n> something like pg_get_replication_slots() where we have to know the\n> walstate (WALAVAIL_REMOVED, etc.) as well. I am not against it but am\n> not sure that we do it for any other object in the upgrade. Can you\n> please point me out if we have any such prior usage? Even if we don't\n> do it today, we can start doing now if that makes sense but it appears\n> to me that we are accessing contents of data-dir/WAL by invoking some\n> other utilities like pg_controldata, pg_resetwal, so something similar\n> would make sense here. Actually, what we do here also somewhat depends\n> on what we decide for the other point we are discussing above in the\n> email.\n\nYes, if there is value in having that information available via the\ncommand-line tool, it makes sense to add it.\n\nLet me add that developers have complained how pg_upgrade scrapes the\noutput pg_controldata rather than reading the file, and we are basically\ndo that some more with this. However, I think that is an appropriate\napproach.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Sat, 12 Aug 2023 11:30:55 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sat, Aug 12, 2023, 15:20 Amit Kapila <[email protected]> wrote:\n\n> On Fri, Aug 11, 2023 at 11:38 PM Bruce Momjian <[email protected]> wrote:\n> >\n> > On Fri, Aug 11, 2023 at 10:46:31AM +0530, Amit Kapila wrote:\n> > > On Thu, Aug 10, 2023 at 7:07 PM Masahiko Sawada <[email protected]>\n> wrote:\n> > > > What I imagined is that we do this check before\n> > > > check_and_dump_old_cluster() while the server is 'off'. Reading the\n> > > > slot state file would be simple and I guess we would not need a tool\n> > > > or cli program for that. We need to expose RepliactionSlotOnDisk,\n> > > > though.\n> > >\n> > > Won't that require a lot of version-specific checks as across versions\n> > > the file format could be different? For the case of the control file,\n> > > we use version-specific pg_controldata (for the old cluster, the\n> > > corresponding version's pg_controldata) utility to read the old\n> > > version control file. I thought we need something similar here if we\n> > > want to do what you are suggesting.\n> >\n> > You mean the slot file format?\n>\n> Yes.\n>\n> >\n> > We will need that complexity somewhere,\n> > so why not in pg_upgrade?\n> >\n>\n> I don't think we need the complexity of version-specific checks if we\n> do what we do in get_control_data(). Basically, invoke\n> version-specific pg_replslotdata to get version-specific slot\n> information. There has been a proposal for a tool like that [1]. Do\n> you have something better in mind? If so, can you please explain the\n> same a bit more?\n>\n\nYeah, we need something like pg_replslotdata. If there are other useful\nusecases for this tool, it would be good to have it. But I'm not sure other\nthan pg_upgrade usecase.\n\nAnother idea is (which might have already discussed thoguh) that we check\nif the latest shutdown checkpoint LSN in the control file matches the\nconfirmed_flush_lsn in pg_replication_slots view. That way, we can ensure\nthat the slot has consumed all WAL records before the last shutdown. We\ndon't need to worry about WAL records generated after starting the old\ncluster during the upgrade, at least for logical replication slots.\n\nRegards,\n\nOn Sat, Aug 12, 2023, 15:20 Amit Kapila <[email protected]> wrote:On Fri, Aug 11, 2023 at 11:38 PM Bruce Momjian <[email protected]> wrote:\n>\n> On Fri, Aug 11, 2023 at 10:46:31AM +0530, Amit Kapila wrote:\n> > On Thu, Aug 10, 2023 at 7:07 PM Masahiko Sawada <[email protected]> wrote:\n> > > What I imagined is that we do this check before\n> > > check_and_dump_old_cluster() while the server is 'off'. Reading the\n> > > slot state file would be simple and I guess we would not need a tool\n> > > or cli program for that. We need to expose RepliactionSlotOnDisk,\n> > > though.\n> >\n> > Won't that require a lot of version-specific checks as across versions\n> > the file format could be different? For the case of the control file,\n> > we use version-specific pg_controldata (for the old cluster, the\n> > corresponding version's pg_controldata) utility to read the old\n> > version control file. I thought we need something similar here if we\n> > want to do what you are suggesting.\n>\n> You mean the slot file format?\n\nYes.\n\n>\n> We will need that complexity somewhere,\n> so why not in pg_upgrade?\n>\n\nI don't think we need the complexity of version-specific checks if we\ndo what we do in get_control_data(). Basically, invoke\nversion-specific pg_replslotdata to get version-specific slot\ninformation. 
There has been a proposal for a tool like that [1]. Do\nyou have something better in mind? If so, can you please explain the\nsame a bit more?Yeah, we need something like pg_replslotdata. If there are other useful usecases for this tool, it would be good to have it. But I'm not sure other than pg_upgrade usecase.Another idea is (which might have already discussed thoguh) that we check if the latest shutdown checkpoint LSN in the control file matches the confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that the slot has consumed all WAL records before the last shutdown. We don't need to worry about WAL records generated after starting the old cluster during the upgrade, at least for logical replication slots.Regards,",
"msg_date": "Mon, 14 Aug 2023 11:27:20 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
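A minimal sketch of the comparison proposed in the message above: check each logical slot's confirmed_flush_lsn against the latest checkpoint LSN. This is only an illustration, assuming it is run on the old cluster right after a clean shutdown and restart and before any further checkpoint; it relies on the existing pg_replication_slots view and pg_control_checkpoint() function rather than reading pg_control or the slot state files directly.

```sql
-- Illustrative sketch only: logical slots whose confirmed_flush_lsn differs
-- from the latest checkpoint LSN have not consumed all WAL written before
-- the shutdown checkpoint and would fail such an upgrade check.
SELECT s.slot_name, s.confirmed_flush_lsn, c.checkpoint_lsn
FROM pg_replication_slots AS s,
     pg_control_checkpoint() AS c
WHERE s.slot_type = 'logical'
  AND s.confirmed_flush_lsn <> c.checkpoint_lsn;
```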
{
"msg_contents": "On Mon, Aug 14, 2023 at 7:57 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Sat, Aug 12, 2023, 15:20 Amit Kapila <[email protected]> wrote:\n>>\n>> I don't think we need the complexity of version-specific checks if we\n>> do what we do in get_control_data(). Basically, invoke\n>> version-specific pg_replslotdata to get version-specific slot\n>> information. There has been a proposal for a tool like that [1]. Do\n>> you have something better in mind? If so, can you please explain the\n>> same a bit more?\n>\n>\n> Yeah, we need something like pg_replslotdata. If there are other useful usecases for this tool, it would be good to have it. But I'm not sure other than pg_upgrade usecase.\n>\n> Another idea is (which might have already discussed thoguh) that we check if the latest shutdown checkpoint LSN in the control file matches the confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that the slot has consumed all WAL records before the last shutdown. We don't need to worry about WAL records generated after starting the old cluster during the upgrade, at least for logical replication slots.\n>\n\nRight, this is somewhat closer to what Patch is already doing. But\nremember in this case we need to remember and use the latest\ncheckpoint from the control file before the old cluster is started\nbecause otherwise the latest checkpoint location could be even updated\nduring the upgrade. So, instead of reading from WAL, we need to change\nso that we rely on the control file's latest LSN. I would prefer this\nidea than to invent a new API/tool like pg_replslotdata.\n\nThe other point you and Bruce seem to be favoring is that instead of\ndumping/restoring slots via pg_dump, we remember the required\ninformation of slots retrieved during their validation in pg_upgrade\nitself and use that to create the slots in the new cluster. Though I\nam not aware of doing similar treatment for other objects we restore\nin this case it seems reasonable especially because slots are not\nstored in the catalog and we anyway already need to retrieve the\nrequired information to validate them, so trying to again retrieve it\nvia pg_dump doesn't seem useful unless I am missing something. Does\nthis match your understanding?\n\nYet another thing I am trying to consider is whether we can allow to\nupgrade slots from 16 or 15 to later versions. As of now, the patch\nhas the following check:\ngetLogicalReplicationSlots()\n{\n...\n+ /* Check whether we should dump or not */\n+ if (fout->remoteVersion < 170000)\n+ return;\n...\n}\n\nIf we decide to use the existing view pg_replication_slots then can we\nconsider upgrading slots from the prior version to 17? Now, if we want\nto invent any new API similar to pg_replslotdata then we can't do this\nbecause it won't exist in prior versions but OTOH using existing view\npg_replication_slots can allow us to fetch slot info from older\nversions as well. So, I think it is worth considering.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Aug 2023 10:37:05 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 8:32 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Based on recent discussions, I updated the patch set. I did not reply one by one\n> because there are many posts, but thank you for giving many suggestion!\n>\n> Followings shows what I changed.\n>\n> 1.\n> This feature is now enabled by default. Instead \"--exclude-logical-replication-slots\"\n> was added. (Per suggestions like [1])\n>\n\nAFAICS, we don't have any concrete agreement on such an option but my\nvote is to not have such an option as we don't have any similar option\nfor any other object. I understand that it could be convenient for\nsome use cases where some of the logical slots are not yet caught up\nw.r.t WAL and users want to upgrade without the slots but not sure if\nthat is really the case. Does anyone else have an opinion on this\npoint?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Aug 2023 10:51:45 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 14, 2023 at 2:07 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Aug 14, 2023 at 7:57 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Sat, Aug 12, 2023, 15:20 Amit Kapila <[email protected]> wrote:\n> >>\n> >> I don't think we need the complexity of version-specific checks if we\n> >> do what we do in get_control_data(). Basically, invoke\n> >> version-specific pg_replslotdata to get version-specific slot\n> >> information. There has been a proposal for a tool like that [1]. Do\n> >> you have something better in mind? If so, can you please explain the\n> >> same a bit more?\n> >\n> >\n> > Yeah, we need something like pg_replslotdata. If there are other useful usecases for this tool, it would be good to have it. But I'm not sure other than pg_upgrade usecase.\n> >\n> > Another idea is (which might have already discussed thoguh) that we check if the latest shutdown checkpoint LSN in the control file matches the confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that the slot has consumed all WAL records before the last shutdown. We don't need to worry about WAL records generated after starting the old cluster during the upgrade, at least for logical replication slots.\n> >\n>\n> Right, this is somewhat closer to what Patch is already doing. But\n> remember in this case we need to remember and use the latest\n> checkpoint from the control file before the old cluster is started\n> because otherwise the latest checkpoint location could be even updated\n> during the upgrade. So, instead of reading from WAL, we need to change\n> so that we rely on the control file's latest LSN.\n\nYes, I was thinking the same idea.\n\nBut it works for only replication slots for logical replication. Do we\nwant to check if no meaningful WAL records are generated after the\nlatest shutdown checkpoint, for manually created slots (or non-logical\nreplication slots)? If so, we would need to have something reading WAL\nrecords in the end.\n\n> I would prefer this\n> idea than to invent a new API/tool like pg_replslotdata.\n\n+1\n\n>\n> The other point you and Bruce seem to be favoring is that instead of\n> dumping/restoring slots via pg_dump, we remember the required\n> information of slots retrieved during their validation in pg_upgrade\n> itself and use that to create the slots in the new cluster. Though I\n> am not aware of doing similar treatment for other objects we restore\n> in this case it seems reasonable especially because slots are not\n> stored in the catalog and we anyway already need to retrieve the\n> required information to validate them, so trying to again retrieve it\n> via pg_dump doesn't seem useful unless I am missing something. Does\n> this match your understanding?\n\nIf there are use cases for --logical-replication-slots-only option\nother than pg_upgrade, it would be good to have it in pg_dump. I was\njust not sure of other use cases.\n\n>\n> Yet another thing I am trying to consider is whether we can allow to\n> upgrade slots from 16 or 15 to later versions. As of now, the patch\n> has the following check:\n> getLogicalReplicationSlots()\n> {\n> ...\n> + /* Check whether we should dump or not */\n> + if (fout->remoteVersion < 170000)\n> + return;\n> ...\n> }\n>\n> If we decide to use the existing view pg_replication_slots then can we\n> consider upgrading slots from the prior version to 17? 
Now, if we want\n> to invent any new API similar to pg_replslotdata then we can't do this\n> because it won't exist in prior versions but OTOH using existing view\n> pg_replication_slots can allow us to fetch slot info from older\n> versions as well. So, I think it is worth considering.\n\nI think that without 0001 patch the replication slots will not be able\nto pass the confirmed_flush_lsn check.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 15 Aug 2023 11:21:15 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 7:51 AM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Aug 14, 2023 at 2:07 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Aug 14, 2023 at 7:57 AM Masahiko Sawada <[email protected]> wrote:\n> > > Another idea is (which might have already discussed thoguh) that we check if the latest shutdown checkpoint LSN in the control file matches the confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that the slot has consumed all WAL records before the last shutdown. We don't need to worry about WAL records generated after starting the old cluster during the upgrade, at least for logical replication slots.\n> > >\n> >\n> > Right, this is somewhat closer to what Patch is already doing. But\n> > remember in this case we need to remember and use the latest\n> > checkpoint from the control file before the old cluster is started\n> > because otherwise the latest checkpoint location could be even updated\n> > during the upgrade. So, instead of reading from WAL, we need to change\n> > so that we rely on the control file's latest LSN.\n>\n> Yes, I was thinking the same idea.\n>\n> But it works for only replication slots for logical replication. Do we\n> want to check if no meaningful WAL records are generated after the\n> latest shutdown checkpoint, for manually created slots (or non-logical\n> replication slots)? If so, we would need to have something reading WAL\n> records in the end.\n>\n\nThis feature only targets logical replication slots. I don't see a\nreason to be different for manually created logical replication slots.\nIs there something particular that you think we could be missing?\n\n> > I would prefer this\n> > idea than to invent a new API/tool like pg_replslotdata.\n>\n> +1\n>\n> >\n> > The other point you and Bruce seem to be favoring is that instead of\n> > dumping/restoring slots via pg_dump, we remember the required\n> > information of slots retrieved during their validation in pg_upgrade\n> > itself and use that to create the slots in the new cluster. Though I\n> > am not aware of doing similar treatment for other objects we restore\n> > in this case it seems reasonable especially because slots are not\n> > stored in the catalog and we anyway already need to retrieve the\n> > required information to validate them, so trying to again retrieve it\n> > via pg_dump doesn't seem useful unless I am missing something. Does\n> > this match your understanding?\n>\n> If there are use cases for --logical-replication-slots-only option\n> other than pg_upgrade, it would be good to have it in pg_dump. I was\n> just not sure of other use cases.\n>\n\nIt was primarily for upgrade purposes only. So, as we can't see a good\nreason to go via pg_dump let's do it in upgrade unless someone thinks\notherwise.\n\n> >\n> > Yet another thing I am trying to consider is whether we can allow to\n> > upgrade slots from 16 or 15 to later versions. As of now, the patch\n> > has the following check:\n> > getLogicalReplicationSlots()\n> > {\n> > ...\n> > + /* Check whether we should dump or not */\n> > + if (fout->remoteVersion < 170000)\n> > + return;\n> > ...\n> > }\n> >\n> > If we decide to use the existing view pg_replication_slots then can we\n> > consider upgrading slots from the prior version to 17? 
Now, if we want\n> > to invent any new API similar to pg_replslotdata then we can't do this\n> > because it won't exist in prior versions but OTOH using existing view\n> > pg_replication_slots can allow us to fetch slot info from older\n> > versions as well. So, I think it is worth considering.\n>\n> I think that without 0001 patch the replication slots will not be able\n> to pass the confirmed_flush_lsn check.\n>\n\nRight, but we can think of backpatching the same. Anyway, we can do\nthat as a separate work by starting a new thread to see if there is a\nbroader agreement for backpatching such a change. For now, we can\nfocus on >=v17.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 15 Aug 2023 08:36:11 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tuesday, August 15, 2023 11:06 AM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Tue, Aug 15, 2023 at 7:51 AM Masahiko Sawada <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Mon, Aug 14, 2023 at 2:07 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Mon, Aug 14, 2023 at 7:57 AM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> > > > Another idea is (which might have already discussed thoguh) that we\r\n> check if the latest shutdown checkpoint LSN in the control file matches the\r\n> confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that\r\n> the slot has consumed all WAL records before the last shutdown. We don't\r\n> need to worry about WAL records generated after starting the old cluster\r\n> during the upgrade, at least for logical replication slots.\r\n> > > >\r\n> > >\r\n> > > Right, this is somewhat closer to what Patch is already doing. But\r\n> > > remember in this case we need to remember and use the latest\r\n> > > checkpoint from the control file before the old cluster is started\r\n> > > because otherwise the latest checkpoint location could be even\r\n> > > updated during the upgrade. So, instead of reading from WAL, we need\r\n> > > to change so that we rely on the control file's latest LSN.\r\n> >\r\n> > Yes, I was thinking the same idea.\r\n> >\r\n> > But it works for only replication slots for logical replication. Do we\r\n> > want to check if no meaningful WAL records are generated after the\r\n> > latest shutdown checkpoint, for manually created slots (or non-logical\r\n> > replication slots)? If so, we would need to have something reading WAL\r\n> > records in the end.\r\n> >\r\n> \r\n> > > I would prefer this\r\n> > > idea than to invent a new API/tool like pg_replslotdata.\r\n> >\r\n> > +1\r\n\r\nChanged the check to compare the latest checkpoint lsn from pg_controldata\r\nwith the confirmed_flush_lsn in pg_replication_slots view.\r\n\r\n> >\r\n> > >\r\n> > > The other point you and Bruce seem to be favoring is that instead of\r\n> > > dumping/restoring slots via pg_dump, we remember the required\r\n> > > information of slots retrieved during their validation in pg_upgrade\r\n> > > itself and use that to create the slots in the new cluster. Though I\r\n> > > am not aware of doing similar treatment for other objects we restore\r\n> > > in this case it seems reasonable especially because slots are not\r\n> > > stored in the catalog and we anyway already need to retrieve the\r\n> > > required information to validate them, so trying to again retrieve\r\n> > > it via pg_dump doesn't seem useful unless I am missing something.\r\n> > > Does this match your understanding?\r\n> >\r\n> > If there are use cases for --logical-replication-slots-only option\r\n> > other than pg_upgrade, it would be good to have it in pg_dump. I was\r\n> > just not sure of other use cases.\r\n> >\r\n> \r\n> It was primarily for upgrade purposes only. So, as we can't see a good reason to\r\n> go via pg_dump let's do it in upgrade unless someone thinks otherwise.\r\n\r\nRemoved the new option in pg_dump and modified the pg_upgrade\r\ndirectly use the slot info to restore the slot in new cluster.\r\n\r\n> \r\n> > >\r\n> > > Yet another thing I am trying to consider is whether we can allow to\r\n> > > upgrade slots from 16 or 15 to later versions. 
As of now, the patch\r\n> > > has the following check:\r\n> > > getLogicalReplicationSlots()\r\n> > > {\r\n> > > ...\r\n> > > + /* Check whether we should dump or not */ if (fout->remoteVersion\r\n> > > + < 170000) return;\r\n> > > ...\r\n> > > }\r\n> > >\r\n> > > If we decide to use the existing view pg_replication_slots then can\r\n> > > we consider upgrading slots from the prior version to 17? Now, if we\r\n> > > want to invent any new API similar to pg_replslotdata then we can't\r\n> > > do this because it won't exist in prior versions but OTOH using\r\n> > > existing view pg_replication_slots can allow us to fetch slot info\r\n> > > from older versions as well. So, I think it is worth considering.\r\n> >\r\n> > I think that without 0001 patch the replication slots will not be able\r\n> > to pass the confirmed_flush_lsn check.\r\n> >\r\n> \r\n> Right, but we can think of backpatching the same. Anyway, we can do that as a\r\n> separate work by starting a new thread to see if there is a broader agreement\r\n> for backpatching such a change. For now, we can focus on >=v17.\r\n> \r\n\r\nHere is the new version patch which addressed above points.\r\nThe new version patch also removes the --exclude-logical-replication-slots\r\noption due to recent comment. \r\nThanks Kuroda-san for addressing most of the points. \r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Tue, 15 Aug 2023 04:13:49 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\r\n\r\nThanks for posting the patch! I want to open a question to gather opinions from others.\r\n\r\n> > It was primarily for upgrade purposes only. So, as we can't see a good reason to\r\n> > go via pg_dump let's do it in upgrade unless someone thinks otherwise.\r\n> \r\n> Removed the new option in pg_dump and modified the pg_upgrade\r\n> directly use the slot info to restore the slot in new cluster.\r\n\r\nIn this version, creations of logical slots are serialized, whereas old ones were\r\nparallelised per db. Do you it should be parallelized again? I have tested locally\r\nand felt harmless. Also, this approch allows to log the executed SQLs.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 16 Aug 2023 03:07:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> > > It was primarily for upgrade purposes only. So, as we can't see a good reason\r\n> to\r\n> > > go via pg_dump let's do it in upgrade unless someone thinks otherwise.\r\n> >\r\n> > Removed the new option in pg_dump and modified the pg_upgrade\r\n> > directly use the slot info to restore the slot in new cluster.\r\n> \r\n> In this version, creations of logical slots are serialized, whereas old ones were\r\n> parallelised per db. Do you it should be parallelized again? I have tested locally\r\n> and felt harmless. Also, this approch allows to log the executed SQLs.\r\n\r\nI updated the patch to allow parallel executions. Workers are launched per slots,\r\neach one connects to the new node via psql and executes pg_create_logical_replication_slot().\r\nMoreover, following points were changed for 0002.\r\n\r\n* Ensured to log executed SQLs for creating slots.\r\n* Fixed an issue that 'unreserved' slots could not be upgrade. This change was \r\n not expected one. Related discussion was [1].\r\n* Added checks for output plugin libraries. pg_upgrade ensures that plugins\r\n referred by old slots were installed to the new executable directory. \r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866FD3F7992A46D0457F0E6F50BA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 16 Aug 2023 10:25:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wednesday, August 16, 2023 6:25 PM Kuroda, Hayato/黒田 隼人 wrote:\r\n> \r\n> Dear hackers,\r\n> \r\n> > > > It was primarily for upgrade purposes only. So, as we can't see a\r\n> > > > good reason\r\n> > to\r\n> > > > go via pg_dump let's do it in upgrade unless someone thinks otherwise.\r\n> > >\r\n> > > Removed the new option in pg_dump and modified the pg_upgrade\r\n> > > directly use the slot info to restore the slot in new cluster.\r\n> >\r\n> > In this version, creations of logical slots are serialized, whereas\r\n> > old ones were parallelised per db. Do you it should be parallelized\r\n> > again? I have tested locally and felt harmless. Also, this approch allows to log\r\n> the executed SQLs.\r\n> \r\n> I updated the patch to allow parallel executions. Workers are launched per\r\n> slots, each one connects to the new node via psql and executes\r\n> pg_create_logical_replication_slot().\r\n> Moreover, following points were changed for 0002.\r\n> \r\n> * Ensured to log executed SQLs for creating slots.\r\n> * Fixed an issue that 'unreserved' slots could not be upgrade. This change was\r\n> not expected one. Related discussion was [1].\r\n> * Added checks for output plugin libraries. pg_upgrade ensures that plugins\r\n> referred by old slots were installed to the new executable directory.\r\n\r\n\r\nThanks for updating the patch ! Here are few comments:\r\n\r\n+static void\r\n+create_logical_replication_slots(void)\r\n...\r\n+\t\tquery = createPQExpBuffer();\r\n+\t\tescaped = createPQExpBuffer();\r\n+\t\tconn = connectToServer(&new_cluster, old_db->db_name);\r\n\r\nSince the connection here is not used anymore, so I think we can remove it.\r\n\r\n2.\r\n\r\n+static void\r\n+create_logical_replication_slots(void)\r\n...\r\n+\t/* update new_cluster info again */\r\n+\tget_logical_slot_infos(&new_cluster);\r\n+}\r\n\r\nDo we need to get new slots again after restoring ?\r\n\r\n3.\r\n\r\n+\tsnprintf(query, sizeof(query),\r\n+\t\t\t \"SELECT slot_name, plugin, two_phase \"\r\n+\t\t\t \"FROM pg_catalog.pg_replication_slots \"\r\n+\t\t\t \"WHERE database = current_database() AND temporary = false \"\r\n+\t\t\t \"AND wal_status <> 'lost';\");\r\n+\r\n+\tres = executeQueryOrDie(conn, \"%s\", query);\r\n+\r\n\r\nInstead of building the query in a new variable, can we directly put the SQL in executeQueryOrDie()\r\ne.g.\r\nexecuteQueryOrDie(conn, \"SELECT slot_name, plugin, two_phase ...\");\r\n\r\n\r\n4.\r\n+int\tnum_slots_on_old_cluster;\r\n\r\nInstead of a new global variable, would it be better to record this in the cluster info ?\r\n\r\n\r\n5.\r\n\r\n \t\tchar\t\tsql_file_name[MAXPGPATH],\r\n \t\t\t\t\tlog_file_name[MAXPGPATH];\r\n+\r\n \t\tDbInfo\t *old_db = &old_cluster.dbarr.dbs[dbnum];\r\n\r\nThere is an extra change here.\r\n\r\n6.\r\n+\tfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n..\r\n+\t\t/* reap all children */\r\n+\t\twhile (reap_child(true) == true)\r\n+\t\t\t;\r\n+\t}\r\n\r\nMaybe we can move the \"while (reap_child(true) == true)\" out of the for() loop ?\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Wed, 16 Aug 2023 11:21:30 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 3:55 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > > It was primarily for upgrade purposes only. So, as we can't see a good reason\n> > to\n> > > > go via pg_dump let's do it in upgrade unless someone thinks otherwise.\n> > >\n> > > Removed the new option in pg_dump and modified the pg_upgrade\n> > > directly use the slot info to restore the slot in new cluster.\n> >\n> > In this version, creations of logical slots are serialized, whereas old ones were\n> > parallelised per db. Do you it should be parallelized again? I have tested locally\n> > and felt harmless. Also, this approch allows to log the executed SQLs.\n>\n> I updated the patch to allow parallel executions. Workers are launched per slots,\n> each one connects to the new node via psql and executes pg_create_logical_replication_slot().\n>\n\nWill it be beneficial for slots? Invoking a separate process each time\ncould be more costlier than slot creation. The other thing is during\nslot creation, the snapbuild waits for parallel transactions to finish\nso that can also hurt the patch. I think we can test it by having 50,\n100, or 500 slots on the old cluster and see if doing parallel\nexecution for the creation of those on the new cluster has any benefit\nover serial execution.\n\n> Moreover, following points were changed for 0002.\n>\n> * Ensured to log executed SQLs for creating slots.\n> * Fixed an issue that 'unreserved' slots could not be upgrade. This change was\n> not expected one. Related discussion was [1].\n> * Added checks for output plugin libraries. pg_upgrade ensures that plugins\n> referred by old slots were installed to the new executable directory.\n>\n\nI think this is a good idea but did you test it with out-of-core\nplugins, if so, can you please share the results? Also, let's update\nthis information in docs as well.\n\nFew minor comments\n1. Why the patch updates the slots info at the end of\ncreate_logical_replication_slots()? Can you please update the comments\nfor the same?\n\n2.\n@@ -36,6 +36,7 @@ generate_old_dump(void)\n {\n char sql_file_name[MAXPGPATH],\n log_file_name[MAXPGPATH];\n+\n DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\n\nSpurious line change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Aug 2023 11:13:03 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 4:51 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> 4.\n> +int num_slots_on_old_cluster;\n>\n> Instead of a new global variable, would it be better to record this in the cluster info ?\n>\n\nI was thinking whether we can go a step ahead and remove this variable\naltogether. In old cluster handling, we can get and check together at\nthe same place and for the new cluster, if we have a function that\nreturns slot_count by traversing old clusterinfo that should be\nsufficient. If you have other better ideas to eliminate this variable\nthat is also fine. I think this will make the patch bit clean w.r.t\nthis new variable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Aug 2023 11:48:23 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some review comments for the first 2 patches.\n\n(There are a couple of overlaps with what Hou-san already wrote review\ncomments about)\n\nFor patch v21-0001...\n\n======\n1. SaveSlotToPath\n\n- /* and don't do anything if there's nothing to write */\n- if (!was_dirty)\n+ /*\n+ * and don't do anything if there's nothing to write, unless it's this is\n+ * called for a logical slot during a shutdown checkpoint, as we want to\n+ * persist the confirmed_flush_lsn in that case, even if that's the only\n+ * modification.\n+ */\n+ if (!was_dirty && (SlotIsPhysical(slot) || !is_shutdown))\n return;\n\nThe condition seems to be coded in a slightly awkward way when\ncompared to how the comment was worded.\n\nHow about:\nif (!was_dirty && !(SlotIsLogical(slot) && is_shutdown))\n\n\n//////////\n\nFor patch v21-0002...\n\n======\nCommit Message\n\n1.\n\nFor pg_upgrade, it query the logical replication slots information from the old\ncluter and restores the slots using the pg_create_logical_replication_slots()\nstatements. Note that we need to separate the timing of restoring replication\nslots and other objects. Replication slots, in particular, should not be\nrestored before executing the pg_resetwal command because it will remove WALs\nthat are required by the slots.\n\n~\n\nRevisit this paragraph. There are lots of typos etc.\n\n1a.\n\"For pg_upgrade\". I think this wording is a hangover from back when\nthe patch was split into two parts for pg_dump and pg_upgrade, but now\nit seems strange.\n\n~\n1b.\n/cluter/cluster/\n\n~\n1c\n/because it/because pg_resetwal/\n\n======\nsrc/sgml/ref/pgupgrade.sgml\n\n2.\n\n+ <step>\n+ <title>Prepare for publisher upgrades</title>\n+\n+ <para>\n+ <application>pg_upgrade</application> try to dump and restore logical\n+ replication slots. This helps avoid the need for manually defining the\n+ same replication slot on the new publisher.\n+ </para>\n+\n\n2a.\n/try/attempts to/ ??\n\n~\n2b.\nIs \"dump\" the right word here? I didn't see dumping happening in the\npatch anymore.\n\n~~~\n\n3.\n\n+ <para>\n+ Before you start upgrading the publisher node, ensure that the\n+ subscription is temporarily disabled. After the upgrade is complete,\n+ execute the\n+ <link linkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION ... DISABLE</command></link>\n+ command to update the connection string, and then re-enable the\n+ subscription.\n+ </para>\n\n3a.\nThat link made no sense in this context.\n\nDon't you mean to say:\n<command>ALTER SUBSCRIPTION ... CONNECTION ...</command>\n\n~\n\n3b.\nHmm. I wonder now did you *also* mean to describe how to disable? For example:\n\nBefore you start upgrading the publisher node, ensure that the\nsubscription is temporarily disabled, by executing\n<link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\nDISABLE</command></link>.\n\n~~~\n\n4.\n\n+\n+ <para>\n+ Upgrading slots has some settings. At first, all the slots must not be in\n+ <literal>lost</literal>, and they must have consumed all the WALs on old\n+ node. Furthermore, new node must have larger\n+ <link linkend=\"guc-max-replication-slots\"><varname>max_replication_slots</varname></link>\n+ than existing slots on old node, and\n+ <link linkend=\"guc-wal-level\"><varname>wal_level</varname></link> must be\n+ <literal>logical</literal>. 
<application>pg_upgrade</application> will\n+ run error if something wrong.\n+ </para>\n+ </step>\n+\n\n4a.\n\"At first, all the slots must not be in lost\"\n\nApart from being strangely worded, I was not familiar with what it\nmeant to say \"must not be in lost\". Will this be meaningful to the\nuser?\n\nIMO this should have more description, e.g. including mentioning the\n\"wal_status\" attribute with the appropriate link to\nhttps://www.postgresql.org/docs/current/view-pg-replication-slots.html\n\n~\n\n4b.\nBEFORE\nUpgrading slots has some settings. ...\n<application>pg_upgrade</application> will run error if something\nwrong.\n\nSUGGESTION\nThere are some prerequisites for <application>pg_upgrade</application>\nto be able to upgrade the replication slots. If these are not met an\nerror will be reported.\n\n~\n\n4c.\nWondered if this list of prerequisites might be better presented as an\nSGML list.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n5.\n extern char *output_files[];\n+extern int num_slots_on_old_cluster;\n\n~\n\nIMO something feels not quite right about having this counter floating\naround as a global variable.\n\nShouldn't this instead be a field member of the old_cluster. That\nseems to be the normal way to hold the cluster-wise info.\n\n~~~\n\n6. check_new_cluster_is_empty\n\n RelInfoArr *rel_arr = &new_cluster.dbarr.dbs[dbnum].rel_arr;\n+ DbInfo *pDbInfo = &new_cluster.dbarr.dbs[dbnum];\n+ LogicalSlotInfoArr *slot_arr = &pDbInfo->slot_arr;\n\nIIRC I previously suggested adding this 'pDbInfo' variable because\nthere are several places that can make use of it.\n\nYou are using it only in the NEW code, but did not replace the\nexisting other code to make use of it:\npg_fatal(\"New cluster database \\\"%s\\\" is not empty: found relation \\\"%s.%s\\\"\",\nnew_cluster.dbarr.dbs[dbnum].db_name,\n\n~~~\n\n7. check_for_logical_replication_slots\n\n+\n+/*\n+ * Verify the parameter settings necessary for creating logical replication\n+ * slots.\n+ */\n+static void\n+check_for_logical_replication_slots(ClusterInfo *new_cluster)\n+{\n+ PGresult *res;\n+ PGconn *conn = connectToServer(new_cluster, \"template1\");\n+ int max_replication_slots;\n+ char *wal_level;\n+\n+ /* logical replication slots can be dumped since PG17. */\n+ if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1600)\n+ return;\n+\n+ prep_status(\"Checking parameter settings for logical replication slots\");\n+\n+ res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\n+ max_replication_slots = atoi(PQgetvalue(res, 0, 0));\n+\n+ if (max_replication_slots == 0)\n+ pg_fatal(\"max_replication_slots must be greater than 0\");\n+ else if (num_slots_on_old_cluster > max_replication_slots)\n+ pg_fatal(\"max_replication_slots must be greater than existing logical \"\n+ \"replication slots on old node.\");\n+\n+ PQclear(res);\n+\n+ res = executeQueryOrDie(conn, \"SHOW wal_level;\");\n+ wal_level = PQgetvalue(res, 0, 0);\n+\n+ if (strcmp(wal_level, \"logical\") != 0)\n+ pg_fatal(\"wal_level must be \\\"logical\\\", but is set to \\\"%s\\\"\",\n+ wal_level);\n+\n+ PQclear(res);\n+\n+ PQfinish(conn);\n+\n+ check_ok();\n\n~\n\n7a.\n+check_for_logical_replication_slots(ClusterInfo *new_cluster)\n\nIMO it is bad practice to name this argument 'new_cluster'. You will\nend up shadowing the global variable of the same name. It seems in\nother similar code where &new_cluster is passed as a parameter the\nfunction arg there is called just 'cluster'.\n\n~\n\n7b.\n\"/* logical replication slots can be dumped since PG17. 
*/\"\n\nIs \"dumped\" the correct word to be used here? Where is the \"dump\"?\n\n~\n\n7c.\n\n+ if (max_replication_slots == 0)\n+ pg_fatal(\"max_replication_slots must be greater than 0\");\n+ else if (num_slots_on_old_cluster > max_replication_slots)\n+ pg_fatal(\"max_replication_slots must be greater than existing logical \"\n+ \"replication slots on old node.\");\n\nWhy is the 1st condition here even needed? Isn't it sufficient just to\nhave that 2nd condition to check max_replication_slot is big enough?\n\n======\n\n8. src/bin/pg_upgrade/dump.c\n\n {\n char sql_file_name[MAXPGPATH],\n log_file_name[MAXPGPATH];\n+\n DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\n~\n\nUnnecessary whitespace change.\n\n======\nsrc/bin/pg_upgrade/function.c\n\n9. get_loadable_libraries -- GENERAL\n\n@@ -46,7 +46,8 @@ library_name_compare(const void *p1, const void *p2)\n /*\n * get_loadable_libraries()\n *\n- * Fetch the names of all old libraries containing C-language functions.\n+ * Fetch the names of all old libraries containing C-language functions, and\n+ * output plugins used by existing logical replication slots.\n * We will later check that they all exist in the new installation.\n */\n void\n@@ -66,14 +67,21 @@ get_loadable_libraries(void)\n PGconn *conn = connectToServer(&old_cluster, active_db->db_name);\n\n /*\n- * Fetch all libraries containing non-built-in C functions in this DB.\n+ * Fetch all libraries containing non-built-in C functions and\n+ * output plugins in this DB.\n */\n ress[dbnum] = executeQueryOrDie(conn,\n \"SELECT DISTINCT probin \"\n \"FROM pg_catalog.pg_proc \"\n \"WHERE prolang = %u AND \"\n \"probin IS NOT NULL AND \"\n- \"oid >= %u;\",\n+ \"oid >= %u \"\n+ \"UNION \"\n+ \"SELECT DISTINCT plugin \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE wal_status <> 'lost' AND \"\n+ \"database = current_database() AND \"\n+ \"temporary IS FALSE;\",\n ClanguageId,\n FirstNormalObjectId);\n totaltups += PQntuples(ress[dbnum]);\n\n~\n\nMaybe it is OK, but it somehow seems like the new logic has been\njammed into the get_loadable_libraries() function for coding\nconvenience. For example, all the names (function names, variable\nnames, structure field names) are referring to \"libraries\", so the\nplugin seems a bit out of place.\n\n~~~\n\n10. get_loadable_libraries\n\n/* Fetch all library names, removing duplicates within each DB */\nfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n~\n\nThis code comment still refers only to library names.\n\n~~~\n10. get_loadable_libraries\n\n+ \"UNION \"\n+ \"SELECT DISTINCT plugin \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE wal_status <> 'lost' AND \"\n+ \"database = current_database() AND \"\n+ \"temporary IS FALSE;\",\n\nIMO this SQL might be more readable if it uses an alias (like 'rs')\nfor the catalog. Then rs.wal_status, rs.database, rs.temporary etc.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n11. get_logical_slot_infos_per_db\n\n+ snprintf(query, sizeof(query),\n+ \"SELECT slot_name, plugin, two_phase \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE database = current_database() AND temporary = false \"\n+ \"AND wal_status <> 'lost';\");\n\nThere was similar SQL in get_loadable_libraries() but there you wrote:\n\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE wal_status <> 'lost' AND \"\n+ \"database = current_database() AND \"\n+ \"temporary IS FALSE;\",\n\nThe WHERE condition order and case are all slightly different. 
IMO it\nwould be better for both SQL fragments to be exactly the same.\n\n~~~\n\n12. get_logical_slot_infos\n\n+int\n+get_logical_slot_infos(ClusterInfo *cluster)\n+{\n+ int dbnum;\n+ int slotnum = 0;\n+\n\nI think 'slotnum' is not a good name. In other nearby code (e.g.\nprint_slot_infos) 'slotnum' is used to mean the index of each slot,\nbut here it means the total number of slots. How about a name like\n'slot_count' or 'nslots' something where the name is more meaningful?\n\n~~~\n\n13. free_db_and_rel_infos\n\n+\n+ /*\n+ * Logical replication slots must not exist on the new cluster before\n+ * doing create_logical_replication_slots().\n+ */\n+ Assert(db_arr->dbs[dbnum].slot_arr.slots == NULL);\n\nIsn't it more natural to do: Assert(db_arr->dbs[dbnum].slot_arr.nslots == 0);\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.c\n\n14. create_logical_replication_slots\n\n+create_logical_replication_slots(void)\n+{\n+ int dbnum;\n+ int slotnum;\n\nThe 'slotnum' can be declared at a lower scope than this to be closer\nto where it is actually used.\n\n~~~\n\n15. create_logical_replication_slots\n\n+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n+ {\n+ DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\n+ LogicalSlotInfoArr *slot_arr = &old_db->slot_arr;\n+ PQExpBuffer query,\n+ escaped;\n+ PGconn *conn;\n+ char log_file_name[MAXPGPATH];\n+\n+ /* Quick exit if there are no slots */\n+ if (!slot_arr->nslots)\n+ continue;\n\nThe comment is misleading. There is no exiting. Maybe better to say\nsomething like \"Skip this DB if there are no slots\".\n\n~~~\n\n16. create_logical_replication_slots\n\n+ appendPQExpBuffer(query, \"SELECT\npg_catalog.pg_create_logical_replication_slot(\");\n+ appendStringLiteral(query, slot_arr->slots[slotnum].slotname,\n+ slot_arr->encoding, slot_arr->std_strings);\n+ appendPQExpBuffer(query, \", \");\n+ appendStringLiteral(query, slot_arr->slots[slotnum].plugin,\n+ slot_arr->encoding, slot_arr->std_strings);\n+ appendPQExpBuffer(query, \", false, %s);\",\n+ slot_arr->slots[slotnum].two_phase ? \"true\" : \"false\");\n\nI noticed that the function comment for appendStringLiteral() says:\n\"We need it in situations where we do not have a PGconn available.\nWhere we do, appendStringLiteralConn is a better choice.\".\n\nBut in this code, we *do* have PGconn available. So, shouldn't we be\nfollowing the advice of the appendStringLiteral() function comment and\nuse the other API instead?\n\n~~~\n\n17. create_logical_replication_slots\n\n+ /*\n+ * The string must be escaped to shell-style, because there is a\n+ * possibility that output plugin name contains quotes. The output\n+ * string would be sandwiched by the single quotes, so it does not have\n+ * to be wrapped by any quotes when it is passed to\n+ * parallel_exec_prog().\n+ */\n+ appendShellString(escaped, query->data);\n\n/sandwiched by/enclosed by/ ???\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n18. LogicalSlotInfo\n\n+/*\n+ * Structure to store logical replication slot information\n+ */\n+typedef struct\n+{\n+ char *slotname; /* slot name */\n+ char *plugin; /* plugin */\n+ bool two_phase; /* Can the slot decode 2PC? */\n+} LogicalSlotInfo;\n\nLooks a bit strange when the only last field comment is uppercase but\nthe others are not. Maybe lowercase everything like for other nearby\nstructs.\n\n~~~\n\n19. LogicalSlotInfoArr\n\n+\n+typedef struct\n+{\n+ int nslots;\n+ LogicalSlotInfo *slots;\n+ int encoding;\n+ bool std_strings;\n+} LogicalSlotInfoArr;\n+\n\nThe meaning of those fields is not always obvious. 
IMO they can all be\ncommented on.\n\n======\n.../pg_upgrade/t/003_logical_replication_slots.pl\n\n20.\n\n# Cause a failure at the start of pg_upgrade because wal_level is replica\n\n~\n\nI wondered if it would be clearer if you had to explicitly set the\nnew_node to \"replica\" initially, instead of leaving it default.\n\n~~~\n\n21.\n\n# Cause a failure at the start of pg_upgrade because max_replication_slots is 0\n\n~\n\nThis related to my earlier code comment in this post -- I didn't\nunderstand the need to specially test for 0. IIUC, we really are\ninterested only to know if there are *sufficient*\nmax_replication_slots.\n\n~~~\n\n22.\n\n'run of pg_upgrade of old node with small max_replication_slots');\n\n~\n\nSUGGESTION\nrun of pg_upgrade where the new node has insufficient max_replication_slots\n\n~~~\n\n23.\n\n# Preparations for the subsequent test. max_replication_slots is set to\n# appropriate value\n$new_node->append_conf('postgresql.conf', \"max_replication_slots = 10\");\n\n# Remove an unnecessary slot and consume WALs\n$old_node->start;\n$old_node->safe_psql(\n'postgres', qq[\nSELECT pg_drop_replication_slot('test_slot1');\nSELECT count(*) FROM pg_logical_slot_get_changes('test_slot2', NULL, NULL)\n]);\n$old_node->stop;\n\n~\n\nSome of that preparation seems unnecessary. I think the new node\nmax_replication_slots is 1 already, so if you are going to remove one\nof test_slot1 here then there is only ONE slot left, right? So the\nmax_replication_slots on the new node should be OK now. Not only will\nthere be less test code needed here, but you will be testing the\nboundary condition of max_replication_slots (which is probably a good\nthing to do).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 17 Aug 2023 18:39:41 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > I updated the patch to allow parallel executions. Workers are launched per slots,\r\n> > each one connects to the new node via psql and executes\r\n> pg_create_logical_replication_slot().\r\n> >\r\n> \r\n> Will it be beneficial for slots? Invoking a separate process each time\r\n> could be more costlier than slot creation. The other thing is during\r\n> slot creation, the snapbuild waits for parallel transactions to finish\r\n> so that can also hurt the patch. I think we can test it by having 50,\r\n> 100, or 500 slots on the old cluster and see if doing parallel\r\n> execution for the creation of those on the new cluster has any benefit\r\n> over serial execution.\r\n\r\nIndeed. I have tested based on the comment and found that serial execution was\r\nfaster. PSA graphs and tables. The x-axis shows the number of upgraded slots,\r\ny-axis shows the execution time. The parallelism of pg_upgrade (-j) was also\r\nvaried during the test.\r\n\r\nI've planned to revert the change in upcoming versions.\r\n\r\n# compared source code\r\n\r\nFor parallel execution case, the v21 patch set was used.\r\nFor serial execution case, logics in create_logical_replication_slots() are changed,\r\nwhich is basically same as v20 (I can share if needed).\r\n\r\nMoreover, in both cases, debug logs for measuring time were added.\r\n\r\n# method\r\n\r\nPSA the script. Some given number of slots are created and then pg_upgrade was executed.\r\n\r\n# consideration\r\n\r\n* In any conditions, the serial execution was faster than parallel. Maybe the\r\n launching process was more costly than I expected.\r\n* Another reason I thougth was that in case of serial execution, the connection\r\n to new node was established only once. Parallel case, however, workers must\r\n establish connections every time. IIUC this requires long duration.\r\n* (very trivial) Number of workers were not affected in serial execution. This\r\n means the coding seems right.\r\n\r\n> > * Added checks for output plugin libraries. pg_upgrade ensures that plugins\r\n> > referred by old slots were installed to the new executable directory.\r\n> >\r\n> \r\n> I think this is a good idea but did you test it with out-of-core\r\n> plugins, if so, can you please share the results? Also, let's update\r\n> this information in docs as well.\r\n\r\nI have not used other plugins, but forcibly renamed the shared object file.\r\nI would test by plugins like wal2json[1] if more cases are needed.\r\n\r\n1. created logical replication slots on old node\r\n SELECT * FROM pg_create_logical_replication_slot('test', 'test_decoding')\r\n2. stopped the old nde\r\n3. forcibly renamed the so file. I used following script:\r\n sudo mv /path/to/test_decoding.so /path/to//test\\\"_decoding.so\r\n4. executed pg_upgrade and failed. Outputs what I got were:\r\n\r\n```\r\nChecking for presence of required libraries fatal\r\n\r\nYour installation references loadable libraries that are missing from the\r\nnew installation. You can add these libraries to the new installation,\r\nor remove the functions using them from the old installation. 
A list of\r\nproblem libraries is in the file:\r\n data_N3/pg_upgrade_output.d/20230817T100926.979/loadable_libraries.txt\r\nFailure, exiting\r\n```\r\n\r\nAnd contents of loadable_libraries.txt were below:\r\n\r\n```\r\ncould not load library \"test_decoding\": ERROR: could not access file \"test_decoding\": No such file or directory\r\nIn database: postgres\r\n```\r\n\r\n[1]: https://github.com/eulerto/wal2json\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 17 Aug 2023 10:18:42 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 12:06 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Aug 15, 2023 at 7:51 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Aug 14, 2023 at 2:07 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Mon, Aug 14, 2023 at 7:57 AM Masahiko Sawada <[email protected]> wrote:\n> > > > Another idea is (which might have already discussed thoguh) that we check if the latest shutdown checkpoint LSN in the control file matches the confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that the slot has consumed all WAL records before the last shutdown. We don't need to worry about WAL records generated after starting the old cluster during the upgrade, at least for logical replication slots.\n> > > >\n> > >\n> > > Right, this is somewhat closer to what Patch is already doing. But\n> > > remember in this case we need to remember and use the latest\n> > > checkpoint from the control file before the old cluster is started\n> > > because otherwise the latest checkpoint location could be even updated\n> > > during the upgrade. So, instead of reading from WAL, we need to change\n> > > so that we rely on the control file's latest LSN.\n> >\n> > Yes, I was thinking the same idea.\n> >\n> > But it works for only replication slots for logical replication. Do we\n> > want to check if no meaningful WAL records are generated after the\n> > latest shutdown checkpoint, for manually created slots (or non-logical\n> > replication slots)? If so, we would need to have something reading WAL\n> > records in the end.\n> >\n>\n> This feature only targets logical replication slots. I don't see a\n> reason to be different for manually created logical replication slots.\n> Is there something particular that you think we could be missing?\n\nSorry I was not clear. I meant the logical replication slots that are\n*not* used by logical replication, i.e., are created manually and used\nby third party tools that periodically consume decoded changes. As we\ndiscussed before, these slots will never be able to pass that\nconfirmed_flush_lsn check. After some thoughts, one thing we might\nneed to consider is that in practice, the upgrade project is performed\nduring the maintenance window and has a backup plan that revert the\nupgrade process, in case something bad happens. If we require the\nusers to drop such logical replication slots, they cannot resume to\nuse the old cluster in that case, since they would need to create new\nslots, missing some changes. Other checks in pg_upgrade seem to be\ncompatibility checks that would eventually be required for the upgrade\nanyway. Do we need to consider this case? For example, we do that\nconfirmed_flush_lsn check for only the slots with pgoutput plugin.\n\n> > >\n> > > Yet another thing I am trying to consider is whether we can allow to\n> > > upgrade slots from 16 or 15 to later versions. As of now, the patch\n> > > has the following check:\n> > > getLogicalReplicationSlots()\n> > > {\n> > > ...\n> > > + /* Check whether we should dump or not */\n> > > + if (fout->remoteVersion < 170000)\n> > > + return;\n> > > ...\n> > > }\n> > >\n> > > If we decide to use the existing view pg_replication_slots then can we\n> > > consider upgrading slots from the prior version to 17? 
Now, if we want\n> > > to invent any new API similar to pg_replslotdata then we can't do this\n> > > because it won't exist in prior versions but OTOH using existing view\n> > > pg_replication_slots can allow us to fetch slot info from older\n> > > versions as well. So, I think it is worth considering.\n> >\n> > I think that without 0001 patch the replication slots will not be able\n> > to pass the confirmed_flush_lsn check.\n> >\n>\n> Right, but we can think of backpatching the same. Anyway, we can do\n> that as a separate work by starting a new thread to see if there is a\n> broader agreement for backpatching such a change. For now, we can\n> focus on >=v17.\n>\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 17 Aug 2023 21:36:30 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 6:07 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Tue, Aug 15, 2023 at 12:06 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Aug 15, 2023 at 7:51 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > On Mon, Aug 14, 2023 at 2:07 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > On Mon, Aug 14, 2023 at 7:57 AM Masahiko Sawada <[email protected]> wrote:\n> > > > > Another idea is (which might have already discussed thoguh) that we check if the latest shutdown checkpoint LSN in the control file matches the confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that the slot has consumed all WAL records before the last shutdown. We don't need to worry about WAL records generated after starting the old cluster during the upgrade, at least for logical replication slots.\n> > > > >\n> > > >\n> > > > Right, this is somewhat closer to what Patch is already doing. But\n> > > > remember in this case we need to remember and use the latest\n> > > > checkpoint from the control file before the old cluster is started\n> > > > because otherwise the latest checkpoint location could be even updated\n> > > > during the upgrade. So, instead of reading from WAL, we need to change\n> > > > so that we rely on the control file's latest LSN.\n> > >\n> > > Yes, I was thinking the same idea.\n> > >\n> > > But it works for only replication slots for logical replication. Do we\n> > > want to check if no meaningful WAL records are generated after the\n> > > latest shutdown checkpoint, for manually created slots (or non-logical\n> > > replication slots)? If so, we would need to have something reading WAL\n> > > records in the end.\n> > >\n> >\n> > This feature only targets logical replication slots. I don't see a\n> > reason to be different for manually created logical replication slots.\n> > Is there something particular that you think we could be missing?\n>\n> Sorry I was not clear. I meant the logical replication slots that are\n> *not* used by logical replication, i.e., are created manually and used\n> by third party tools that periodically consume decoded changes. As we\n> discussed before, these slots will never be able to pass that\n> confirmed_flush_lsn check.\n>\n\nI think normally one would have a background process to periodically\nconsume changes. Won't one can use the walsender infrastructure for\ntheir plugins to consume changes probably by using replication\nprotocol? Also, I feel it is the plugin author's responsibility to\nconsume changes or advance slot to the required position before\nshutdown.\n\n> After some thoughts, one thing we might\n> need to consider is that in practice, the upgrade project is performed\n> during the maintenance window and has a backup plan that revert the\n> upgrade process, in case something bad happens. If we require the\n> users to drop such logical replication slots, they cannot resume to\n> use the old cluster in that case, since they would need to create new\n> slots, missing some changes.\n>\n\nCan't one keep the backup before removing slots?\n\n> Other checks in pg_upgrade seem to be\n> compatibility checks that would eventually be required for the upgrade\n> anyway. Do we need to consider this case? For example, we do that\n> confirmed_flush_lsn check for only the slots with pgoutput plugin.\n>\n\nI think one is allowed to use pgoutput plugin even for manually\ncreated slots. So, such a check may not work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Aug 2023 19:01:02 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 3:48 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > * Added checks for output plugin libraries. pg_upgrade ensures that plugins\n> > > referred by old slots were installed to the new executable directory.\n> > >\n> >\n> > I think this is a good idea but did you test it with out-of-core\n> > plugins, if so, can you please share the results? Also, let's update\n> > this information in docs as well.\n>\n> I have not used other plugins, but forcibly renamed the shared object file.\n> I would test by plugins like wal2json[1] if more cases are needed.\n>\n> 1. created logical replication slots on old node\n> SELECT * FROM pg_create_logical_replication_slot('test', 'test_decoding')\n> 2. stopped the old nde\n> 3. forcibly renamed the so file. I used following script:\n> sudo mv /path/to/test_decoding.so /path/to//test\\\"_decoding.so\n> 4. executed pg_upgrade and failed. Outputs what I got were:\n>\n> ```\n> Checking for presence of required libraries fatal\n>\n\nYour test sounds reasonable but there is no harm in testing wal2json\nor some other plugin just to mimic the actual production scenario.\nAdditionally, it would give us better coverage for the patch by\ntesting out-of-core plugins for some other tests as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Aug 2023 19:05:02 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 2:10 PM Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for the first 2 patches.\n>\n>\n> 3.\n>\n> + <para>\n> + Before you start upgrading the publisher node, ensure that the\n> + subscription is temporarily disabled. After the upgrade is complete,\n> + execute the\n> + <link linkend=\"sql-altersubscription\"><command>ALTER\n> SUBSCRIPTION ... DISABLE</command></link>\n> + command to update the connection string, and then re-enable the\n> + subscription.\n> + </para>\n>\n> 3a.\n> That link made no sense in this context.\n>\n> Don't you mean to say:\n> <command>ALTER SUBSCRIPTION ... CONNECTION ...</command>\n>\n\nI think the command is correct here but the wording should mention\nabout disabling the subscription.\n\n>\n> /*\n> - * Fetch all libraries containing non-built-in C functions in this DB.\n> + * Fetch all libraries containing non-built-in C functions and\n> + * output plugins in this DB.\n> */\n> ress[dbnum] = executeQueryOrDie(conn,\n> \"SELECT DISTINCT probin \"\n> \"FROM pg_catalog.pg_proc \"\n> \"WHERE prolang = %u AND \"\n> \"probin IS NOT NULL AND \"\n> - \"oid >= %u;\",\n> + \"oid >= %u \"\n> + \"UNION \"\n> + \"SELECT DISTINCT plugin \"\n> + \"FROM pg_catalog.pg_replication_slots \"\n> + \"WHERE wal_status <> 'lost' AND \"\n> + \"database = current_database() AND \"\n> + \"temporary IS FALSE;\",\n> ClanguageId,\n> FirstNormalObjectId);\n> totaltups += PQntuples(ress[dbnum]);\n>\n> ~\n>\n> Maybe it is OK, but it somehow seems like the new logic has been\n> jammed into the get_loadable_libraries() function for coding\n> convenience. For example, all the names (function names, variable\n> names, structure field names) are referring to \"libraries\", so the\n> plugin seems a bit out of place.\n>\n\nBut the same name library (as plugin) should exist for the upgrade of\nslots. I feel doing it separately could either lead to a redundant\ncode or a different way to achieve the same thing. Do you envision any\nproblem which we are not seeing?\n\n> ~~~\n> 10. get_loadable_libraries\n>\n> + \"UNION \"\n> + \"SELECT DISTINCT plugin \"\n> + \"FROM pg_catalog.pg_replication_slots \"\n> + \"WHERE wal_status <> 'lost' AND \"\n> + \"database = current_database() AND \"\n> + \"temporary IS FALSE;\",\n>\n> IMO this SQL might be more readable if it uses an alias (like 'rs')\n> for the catalog. Then rs.wal_status, rs.database, rs.temporary etc.\n>\n\nThen it will become inconsistent with the existing query which doesn't\nuse any alias. So, I think we should either change the existing query\nto use an alias or not use it at all as the patch does. I would prefer\nlater.\n\n>\n> 16. create_logical_replication_slots\n>\n> + appendPQExpBuffer(query, \"SELECT\n> pg_catalog.pg_create_logical_replication_slot(\");\n> + appendStringLiteral(query, slot_arr->slots[slotnum].slotname,\n> + slot_arr->encoding, slot_arr->std_strings);\n> + appendPQExpBuffer(query, \", \");\n> + appendStringLiteral(query, slot_arr->slots[slotnum].plugin,\n> + slot_arr->encoding, slot_arr->std_strings);\n> + appendPQExpBuffer(query, \", false, %s);\",\n> + slot_arr->slots[slotnum].two_phase ? \"true\" : \"false\");\n>\n> I noticed that the function comment for appendStringLiteral() says:\n> \"We need it in situations where we do not have a PGconn available.\n> Where we do, appendStringLiteralConn is a better choice.\".\n>\n> But in this code, we *do* have PGconn available. 
So, shouldn't we be\n> following the advice of the appendStringLiteral() function comment and\n> use the other API instead?\n>\n\nI think that will avoid maintaining encoding and std_strings in the\nslot's array. So, this sounds like a good idea to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 18 Aug 2023 08:17:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > I have not used other plugins, but forcibly renamed the shared object file.\r\n> > I would test by plugins like wal2json[1] if more cases are needed.\r\n> >\r\n> > 1. created logical replication slots on old node\r\n> > SELECT * FROM pg_create_logical_replication_slot('test', 'test_decoding')\r\n> > 2. stopped the old nde\r\n> > 3. forcibly renamed the so file. I used following script:\r\n> > sudo mv /path/to/test_decoding.so /path/to//test\\\"_decoding.so\r\n> > 4. executed pg_upgrade and failed. Outputs what I got were:\r\n> >\r\n> > ```\r\n> > Checking for presence of required libraries fatal\r\n> >\r\n> \r\n> Your test sounds reasonable but there is no harm in testing wal2json\r\n> or some other plugin just to mimic the actual production scenario.\r\n> Additionally, it would give us better coverage for the patch by\r\n> testing out-of-core plugins for some other tests as well.\r\n\r\nI've tested by using wal2json, decoder_raw[1], and my small decoder. The results were\r\nthe same: pg_upgrade correctly raised an ERROR. Following demo shows the case for wal2json.\r\n\r\nIn this test, the plugin was installed only on the old node and a slot was created.\r\nBelow shows the created slot:\r\n\r\n```\r\n(Old)=# SELECT slot_name, plugin FROM pg_replication_slots\r\nslot_name | plugin \r\n-----------+----------\r\n test | wal2json\r\n(1 row)\r\n```\r\n\r\nAnd I confirmed that the plugin worked well via pg_logical_slot_get_changes()\r\n(This was needed to move forward the confirmed_flush_lsn)\r\n\r\n```\r\n(Old)=# INSERT INTO foo VALUES (1)\r\nINSERT 0 1\r\n(Old)=# SELECT * FROM pg_logical_slot_get_changes('test', NULL, NULL);\r\n lsn | xid | data \r\n \r\n----------+-----+-------------------------------------------------------------------------------------------------------------\r\n---------------------\r\n 0/63C8A8 | 731 | {\"change\":[{\"kind\":\"insert\",\"schema\":\"public\",\"table\":\"foo\",\"columnnames\":[\"id\"],\"columntypes\":[\"integer\"],\"\r\ncolumnvalues\":[1]}]}\r\n(1 row)\r\n```\r\n\r\nThen the pg_upgrade was executed but failed, same as the previous example.\r\n\r\n```\r\nChecking for presence of required libraries fatal\r\n\r\nYour installation references loadable libraries that are missing from the\r\nnew installation. You can add these libraries to the new installation,\r\nor remove the functions using them from the old installation. A list of\r\nproblem libraries is in the file:\r\n data_N3/pg_upgrade_output.d/20230818T030006.675/loadable_libraries.txt\r\nFailure, exiting\r\n```\r\n\r\nIn the loadable_libraries.txt, it mentioned that wal2json was not installed to new directory.\r\n\r\n```\r\ncould not load library \"wal2json\": ERROR: could not access file \"wal2json\": No such file or directory\r\nIn database: postgres\r\n```\r\n\r\nNote that upgrade was done if the plugin was installed to new binary too.\r\n\r\nAcknowledgement: Thank you Michael and Euler for creating great plugins!\r\n\r\n[1]: https://github.com/michaelpq/pg_plugins\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 18 Aug 2023 03:12:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some review comments for the patch v21-0003\n\n======\nCommit message\n\n1.\npg_upgrade fails if the old node has slots which status is 'lost' or they do not\nconsume all WAL records. These are needed for prevent the data loss.\n\n~\n\nMaybe some minor brush-up like:\n\nSUGGESTION\nIn order to prevent data loss, pg_upgrade will fail if the old node\nhas slots with the status 'lost', or with unconsumed WAL records.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n2. check_for_confirmed_flush_lsn\n\n+ /* Check that all logical slots are not in 'lost' state. */\n+ res = executeQueryOrDie(conn,\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE temporary = false AND wal_status = 'lost';\");\n+\n+ ntups = PQntuples(res);\n+ i_slotname = PQfnumber(res, \"slot_name\");\n+\n+ for (i = 0; i < ntups; i++)\n+ {\n+ is_error = true;\n+\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" is obsolete.\",\n+ PQgetvalue(res, i, i_slotname));\n+ }\n+\n+ PQclear(res);\n+\n+ if (is_error)\n+ pg_fatal(\"logical replication slots not to be in 'lost' state.\");\n+\n\n2a. (GENERAL)\nThe above code for checking lost state seems out of place in this\nfunction which is meant for checking confirmed flush lsn.\n\nMaybe you jammed both kinds of logic into one function to save on the\nextra PGconn or something but IMO two separate functions would be\nbetter. e.g.\n- check_for_lost_slots\n- check_for_confirmed_flush_lsn\n\n~\n\n2b.\n+ /* Check that all logical slots are not in 'lost' state. */\n\nSUGGESTION\n/* Check there are no logical replication slots with a 'lost' state. */\n\n~\n\n2c.\n+ res = executeQueryOrDie(conn,\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE temporary = false AND wal_status = 'lost';\");\n\nThis SQL fragment is very much like others in previous patches. Be\nsure to make all the cases and clauses consistent with all those\nsimilar SQL fragments.\n\n~\n\n2d.\n+ is_error = true;\n\nThat doesn't need to be in the loop. Better to just say:\nis_error = (ntups > 0);\n\n~\n\n2e.\nThere is a mix of terms in the WARNING and in the pg_fatal -- e.g.\n\"obsolete\" versus \"lost\". Is it OK?\n\n~\n\n2f.\n+ pg_fatal(\"logical replication slots not to be in 'lost' state.\");\n\nEnglish? And maybe it should be much more verbose...\n\n\"Upgrade of this installation is not allowed because one or more\nlogical replication slots with a state of 'lost' were detected.\"\n\n~~~\n\n3. check_for_confirmed_flush_lsn\n\n+ /*\n+ * Check that all logical replication slots have reached the latest\n+ * checkpoint position (SHUTDOWN_CHECKPOINT record). 
This checks cannot be\n+ * done in case of live_check because the server has not been written the\n+ * SHUTDOWN_CHECKPOINT record yet.\n+ */\n+ if (!live_check)\n+ {\n+ res = executeQueryOrDie(conn,\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE confirmed_flush_lsn != '%X/%X' AND temporary = false;\",\n+ old_cluster.controldata.chkpnt_latest_upper,\n+ old_cluster.controldata.chkpnt_latest_lower);\n+\n+ ntups = PQntuples(res);\n+ i_slotname = PQfnumber(res, \"slot_name\");\n+\n+ for (i = 0; i < ntups; i++)\n+ {\n+ is_error = true;\n+\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" has not consumed WALs yet\",\n+ PQgetvalue(res, i, i_slotname));\n+ }\n+\n+ PQclear(res);\n+ PQfinish(conn);\n+\n+ if (is_error)\n+ pg_fatal(\"All logical replication slots consumed all the WALs.\");\n\n~\n\n3a.\n/This checks/This check/\n\n~\n\n3b.\nI don't think the separation of\nchkpnt_latest_upper/chkpnt_latest_lower is needed like this. AFAIK\nthere is an LSN_FORMAT_ARGS(lsn) macro designed for handling exactly\nthis kind of parameter substitution.\n\n~\n\n3c.\n+ is_error = true;\n\nThat doesn't need to be in the loop. Better to just say:\nis_error = (ntups > 0);\n\n~\n\n3d.\n+ pg_fatal(\"All logical replication slots consumed all the WALs.\");\n\nThe message seems backward. shouldn't it say something like:\n\"Upgrade of this installation is not allowed because one or more\nlogical replication slots still have unconsumed WAL records.\"\n\n======\nsrc/bin/pg_upgrade/controldata.c\n\n4. get_control_data\n\n+ /*\n+ * Upper and lower part of LSN must be read and stored\n+ * separately because it is reported as %X/%X format.\n+ */\n+ cluster->controldata.chkpnt_latest_upper =\n+ strtoul(p, &slash, 16);\n+ cluster->controldata.chkpnt_latest_lower =\n+ strtoul(++slash, NULL, 16);\n\nI felt that this field separation code is maybe not necessary. Please\nrefer to other review comments in this post.\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n5. ControlData\n\n+\n+ uint32 chkpnt_latest_upper;\n+ uint32 chkpnt_latest_lower;\n } ControlData;\n\n~\n\nActually, I did not recognise the reason why this cannot be stored\nproperly as a single XLogRecPtr field. Please see other review\ncomments in this post.\n\n======\n.../t/003_logical_replication_slots.pl\n\n6. GENERAL\n\nMany of the changes to this file are just renaming the\n'old_node'/'new_node' to 'old_publisher'/'new_publisher'.\n\nThis seems a basic change not really associated with this patch 0003.\nTo reduce the code churn, this change should be moved into the earlier\npatch where this test file (003_logical_replication_slots.pl) was\nfirst introduced,\n\n~~~\n\n7.\n\n# Cause a failure at the start of pg_upgrade because slot do not finish\n# consuming all the WALs\n\n~\n\nCan you give a more detailed explanation in the comment of how this\ntest case achieves what it says?\n\n======\nsrc/test/regress/sql/misc_functions.sql\n\n8.\n@@ -236,4 +236,4 @@ SELECT * FROM pg_split_walfile_name('invalid');\n SELECT segment_number > 0 AS ok_segment_number, timeline_id\n FROM pg_split_walfile_name('000000010000000100000000');\n SELECT segment_number > 0 AS ok_segment_number, timeline_id\n- FROM pg_split_walfile_name('ffffffFF00000001000000af');\n+ FROM pg_split_walfile_name('ffffffFF00000001000000af');\n\\ No newline at end of file\n\n~\n\nWhat is this change for? It looks like maybe some accidental\nwhitespace change happened.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 18 Aug 2023 13:51:38 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 12:47 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 17, 2023 at 2:10 PM Peter Smith <[email protected]> wrote:\n> >\n> > Here are some review comments for the first 2 patches.\n> >\n> > /*\n> > - * Fetch all libraries containing non-built-in C functions in this DB.\n> > + * Fetch all libraries containing non-built-in C functions and\n> > + * output plugins in this DB.\n> > */\n> > ress[dbnum] = executeQueryOrDie(conn,\n> > \"SELECT DISTINCT probin \"\n> > \"FROM pg_catalog.pg_proc \"\n> > \"WHERE prolang = %u AND \"\n> > \"probin IS NOT NULL AND \"\n> > - \"oid >= %u;\",\n> > + \"oid >= %u \"\n> > + \"UNION \"\n> > + \"SELECT DISTINCT plugin \"\n> > + \"FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE wal_status <> 'lost' AND \"\n> > + \"database = current_database() AND \"\n> > + \"temporary IS FALSE;\",\n> > ClanguageId,\n> > FirstNormalObjectId);\n> > totaltups += PQntuples(ress[dbnum]);\n> >\n> > ~\n> >\n> > Maybe it is OK, but it somehow seems like the new logic has been\n> > jammed into the get_loadable_libraries() function for coding\n> > convenience. For example, all the names (function names, variable\n> > names, structure field names) are referring to \"libraries\", so the\n> > plugin seems a bit out of place.\n> >\n>\n> But the same name library (as plugin) should exist for the upgrade of\n> slots. I feel doing it separately could either lead to a redundant\n> code or a different way to achieve the same thing. Do you envision any\n> problem which we are not seeing?\n>\n\nNo problem. I'd misunderstood that the \"plugin\" referred to here is a\nshared object file (aka library) name, so it does belong here after\nall. I think the new comments could be made more clear about this\npoint though.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 18 Aug 2023 17:02:36 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> I was thinking whether we can go a step ahead and remove this variable\r\n> altogether. In old cluster handling, we can get and check together at\r\n> the same place and for the new cluster, if we have a function that\r\n> returns slot_count by traversing old clusterinfo that should be\r\n> sufficient. If you have other better ideas to eliminate this variable\r\n> that is also fine. I think this will make the patch bit clean w.r.t\r\n> this new variable.\r\n\r\nSeems better, removed the variable. Also, the timing of checks were changed\r\nto the end of get_logical_slot_infos(). The check whether we are in live_check\r\nare moved to the function, so the argument was removed again.\r\n\r\nThe whole of changes can be checked in upcoming e-mail.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 18 Aug 2023 13:32:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\r\n\r\nThank you for reviewing!\r\n\r\n> +static void\r\n> +create_logical_replication_slots(void)\r\n> ...\r\n> +\t\tquery = createPQExpBuffer();\r\n> +\t\tescaped = createPQExpBuffer();\r\n> +\t\tconn = connectToServer(&new_cluster, old_db->db_name);\r\n> \r\n> Since the connection here is not used anymore, so I think we can remove it.\r\n\r\nPer discussion [1], pg_upgrade must use connection again. So I kept it.\r\n\r\n> 2.\r\n> \r\n> +static void\r\n> +create_logical_replication_slots(void)\r\n> ...\r\n> +\t/* update new_cluster info again */\r\n> +\tget_logical_slot_infos(&new_cluster);\r\n> +}\r\n> \r\n> Do we need to get new slots again after restoring ?\r\n\r\nI checked again and thought that it was not needed, removed.\r\nSimilar function, create_new_objects(), was updated the information at the end.\r\nThis was needed because the information was used to compare objects between\r\nold and new cluster, in transfer_all_new_tablespaces(). In terms of logical replication\r\nslots, however, such comparison was not done. No functions use updated information.\r\n\r\n> 3.\r\n> \r\n> +\tsnprintf(query, sizeof(query),\r\n> +\t\t\t \"SELECT slot_name, plugin, two_phase \"\r\n> +\t\t\t \"FROM pg_catalog.pg_replication_slots \"\r\n> +\t\t\t \"WHERE database = current_database() AND\r\n> temporary = false \"\r\n> +\t\t\t \"AND wal_status <> 'lost';\");\r\n> +\r\n> +\tres = executeQueryOrDie(conn, \"%s\", query);\r\n> +\r\n> \r\n> Instead of building the query in a new variable, can we directly put the SQL in\r\n> executeQueryOrDie()\r\n> e.g.\r\n> executeQueryOrDie(conn, \"SELECT slot_name, plugin, two_phase ...\");\r\n\r\nRight, fixed.\r\n\r\n> 4.\r\n> +int\tnum_slots_on_old_cluster;\r\n> \r\n> Instead of a new global variable, would it be better to record this in the cluster\r\n> info ?\r\n\r\nPer suggestion [2], the variable was removed.\r\n\r\n> 5.\r\n> \r\n> \t\tchar\t\tsql_file_name[MAXPGPATH],\r\n> \t\t\t\t\tlog_file_name[MAXPGPATH];\r\n> +\r\n> \t\tDbInfo\t *old_db = &old_cluster.dbarr.dbs[dbnum];\r\n> \r\n> There is an extra change here.\r\n\r\nRemoved.\r\n\r\n> 6.\r\n> +\tfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> ..\r\n> +\t\t/* reap all children */\r\n> +\t\twhile (reap_child(true) == true)\r\n> +\t\t\t;\r\n> +\t}\r\n> \r\n> Maybe we can move the \"while (reap_child(true) == true)\" out of the for() loop ?\r\n\r\nPer discussion [1], I stopped to do in parallel. So this part was not needed anymore.\r\n\r\nThe patch would be available in upcoming posts.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB58701DAEE5E61B07AC84ADBBF51AA%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB5866691219B9CB280B709600F51BA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 18 Aug 2023 13:34:00 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> \r\n> Few minor comments\r\n> 1. Why the patch updates the slots info at the end of\r\n> create_logical_replication_slots()? Can you please update the comments\r\n> for the same?\r\n\r\nI checked and agreed that it was not needed. More detail, please see [1].\r\n\r\n> 2.\r\n> @@ -36,6 +36,7 @@ generate_old_dump(void)\r\n> {\r\n> char sql_file_name[MAXPGPATH],\r\n> log_file_name[MAXPGPATH];\r\n> +\r\n> DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\r\n> \r\n> Spurious line change.\r\n>\r\n\r\nRemoved.\r\n\r\nNext patch set would be available in upcoming posts.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866F384AC62E12E9638BEC1F51BA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 18 Aug 2023 13:35:06 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing!\r\n\r\n> For patch v21-0001...\r\n> \r\n> ======\r\n> 1. SaveSlotToPath\r\n> \r\n> - /* and don't do anything if there's nothing to write */\r\n> - if (!was_dirty)\r\n> + /*\r\n> + * and don't do anything if there's nothing to write, unless it's this is\r\n> + * called for a logical slot during a shutdown checkpoint, as we want to\r\n> + * persist the confirmed_flush_lsn in that case, even if that's the only\r\n> + * modification.\r\n> + */\r\n> + if (!was_dirty && (SlotIsPhysical(slot) || !is_shutdown))\r\n> return;\r\n> \r\n> The condition seems to be coded in a slightly awkward way when\r\n> compared to how the comment was worded.\r\n> \r\n> How about:\r\n> if (!was_dirty && !(SlotIsLogical(slot) && is_shutdown))\r\n\r\nChanged.\r\n\r\n> For patch v21-0002...\r\n> \r\n> ======\r\n> Commit Message\r\n> \r\n> 1.\r\n> \r\n> For pg_upgrade, it query the logical replication slots information from the old\r\n> cluter and restores the slots using the pg_create_logical_replication_slots()\r\n> statements. Note that we need to separate the timing of restoring replication\r\n> slots and other objects. Replication slots, in particular, should not be\r\n> restored before executing the pg_resetwal command because it will remove WALs\r\n> that are required by the slots.\r\n> \r\n> ~\r\n> \r\n> Revisit this paragraph. There are lots of typos etc.\r\n\r\nMaybe I sent the patch before finalizing the commit message. Sorry for that.\r\nI reworded the part. Grammarly says OK the new part.\r\n\r\n> 1a.\r\n> \"For pg_upgrade\". I think this wording is a hangover from back when\r\n> the patch was split into two parts for pg_dump and pg_upgrade, but now\r\n> it seems strange.\r\n\r\nYeah, so removed the word.\r\n\r\n> 1b.\r\n> /cluter/cluster/\r\n\r\nChanged.\r\n\r\n> 1c\r\n> /because it/because pg_resetwal/\r\n\r\nChanged.\r\n\r\n> src/sgml/ref/pgupgrade.sgml\r\n> \r\n> 2.\r\n> \r\n> + <step>\r\n> + <title>Prepare for publisher upgrades</title>\r\n> +\r\n> + <para>\r\n> + <application>pg_upgrade</application> try to dump and restore logical\r\n> + replication slots. This helps avoid the need for manually defining the\r\n> + same replication slot on the new publisher.\r\n> + </para>\r\n> +\r\n> \r\n> 2a.\r\n> /try/attempts to/ ??\r\n\r\nChanged.\r\n\r\n> 2b.\r\n> Is \"dump\" the right word here? I didn't see dumping happening in the\r\n> patch anymore.\r\n\r\nI replaced \"dump and restore\" to \" migrate\". How do you think?\r\n\r\n> 3.\r\n> \r\n> + <para>\r\n> + Before you start upgrading the publisher node, ensure that the\r\n> + subscription is temporarily disabled. After the upgrade is complete,\r\n> + execute the\r\n> + <link linkend=\"sql-altersubscription\"><command>ALTER\r\n> SUBSCRIPTION ... DISABLE</command></link>\r\n> + command to update the connection string, and then re-enable the\r\n> + subscription.\r\n> + </para>\r\n> \r\n> 3a.\r\n> That link made no sense in this context.\r\n> \r\n> Don't you mean to say:\r\n> <command>ALTER SUBSCRIPTION ... CONNECTION ...</command>\r\n> 3b.\r\n> Hmm. I wonder now did you *also* mean to describe how to disable? 
For example:\r\n> \r\n> Before you start upgrading the publisher node, ensure that the\r\n> subscription is temporarily disabled, by executing\r\n> <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\r\n> DISABLE</command></link>.\r\n\r\nI wondered which statement should be referred, and finally did incompletely.\r\nBoth of ALTER SUBSCRIPTION statements was cited, and a link was added to\r\nDISABLE clause. Is it OK?\r\n\r\n> 4.\r\n> \r\n> +\r\n> + <para>\r\n> + Upgrading slots has some settings. At first, all the slots must not be in\r\n> + <literal>lost</literal>, and they must have consumed all the WALs on old\r\n> + node. Furthermore, new node must have larger\r\n> + <link\r\n> linkend=\"guc-max-replication-slots\"><varname>max_replication_slots</varna\r\n> me></link>\r\n> + than existing slots on old node, and\r\n> + <link linkend=\"guc-wal-level\"><varname>wal_level</varname></link>\r\n> must be\r\n> + <literal>logical</literal>. <application>pg_upgrade</application> will\r\n> + run error if something wrong.\r\n> + </para>\r\n> + </step>\r\n> +\r\n> \r\n> 4a.\r\n> \"At first, all the slots must not be in lost\"\r\n> \r\n> Apart from being strangely worded, I was not familiar with what it\r\n> meant to say \"must not be in lost\". Will this be meaningful to the\r\n> user?\r\n> \r\n> IMO this should have more description, e.g. including mentioning the\r\n> \"wal_status\" attribute with the appropriate link to\r\n> https://www.postgresql.org/docs/current/view-pg-replication-slots.html\r\n\r\nAdded the reference.\r\n\r\n> 4b.\r\n> BEFORE\r\n> Upgrading slots has some settings. ...\r\n> <application>pg_upgrade</application> will run error if something\r\n> wrong.\r\n> \r\n> SUGGESTION\r\n> There are some prerequisites for <application>pg_upgrade</application>\r\n> to be able to upgrade the replication slots. If these are not met an\r\n> error will be reported.\r\n\r\nChanged.\r\n\r\n> 4c.\r\n> Wondered if this list of prerequisites might be better presented as an\r\n> SGML list.\r\n\r\nChanged to <itemizedlist> style.\r\n\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 5.\r\n> extern char *output_files[];\r\n> +extern int num_slots_on_old_cluster;\r\n> \r\n> ~\r\n> \r\n> IMO something feels not quite right about having this counter floating\r\n> around as a global variable.\r\n> \r\n> Shouldn't this instead be a field member of the old_cluster. That\r\n> seems to be the normal way to hold the cluster-wise info.\r\n\r\nPer comment from Amit, the variable was removed.\r\n\r\n> 6. check_new_cluster_is_empty\r\n> \r\n> RelInfoArr *rel_arr = &new_cluster.dbarr.dbs[dbnum].rel_arr;\r\n> + DbInfo *pDbInfo = &new_cluster.dbarr.dbs[dbnum];\r\n> + LogicalSlotInfoArr *slot_arr = &pDbInfo->slot_arr;\r\n> \r\n> IIRC I previously suggested adding this 'pDbInfo' variable because\r\n> there are several places that can make use of it.\r\n> \r\n> You are using it only in the NEW code, but did not replace the\r\n> existing other code to make use of it:\r\n> pg_fatal(\"New cluster database \\\"%s\\\" is not empty: found relation \\\"%s.%s\\\"\",\r\n> new_cluster.dbarr.dbs[dbnum].db_name,\r\n\r\nRight, switched to use it. Additionally, it was also used for definition of rel_arr.\r\n\r\n> 7. 
check_for_logical_replication_slots\r\n> \r\n> +\r\n> +/*\r\n> + * Verify the parameter settings necessary for creating logical replication\r\n> + * slots.\r\n> + */\r\n> +static void\r\n> +check_for_logical_replication_slots(ClusterInfo *new_cluster)\r\n> +{\r\n> + PGresult *res;\r\n> + PGconn *conn = connectToServer(new_cluster, \"template1\");\r\n> + int max_replication_slots;\r\n> + char *wal_level;\r\n> +\r\n> + /* logical replication slots can be dumped since PG17. */\r\n> + if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1600)\r\n> + return;\r\n> +\r\n> + prep_status(\"Checking parameter settings for logical replication slots\");\r\n> +\r\n> + res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\r\n> + max_replication_slots = atoi(PQgetvalue(res, 0, 0));\r\n> +\r\n> + if (max_replication_slots == 0)\r\n> + pg_fatal(\"max_replication_slots must be greater than 0\");\r\n> + else if (num_slots_on_old_cluster > max_replication_slots)\r\n> + pg_fatal(\"max_replication_slots must be greater than existing logical \"\r\n> + \"replication slots on old node.\");\r\n> +\r\n> + PQclear(res);\r\n> +\r\n> + res = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n> + wal_level = PQgetvalue(res, 0, 0);\r\n> +\r\n> + if (strcmp(wal_level, \"logical\") != 0)\r\n> + pg_fatal(\"wal_level must be \\\"logical\\\", but is set to \\\"%s\\\"\",\r\n> + wal_level);\r\n> +\r\n> + PQclear(res);\r\n> +\r\n> + PQfinish(conn);\r\n> +\r\n> + check_ok();\r\n> \r\n> ~\r\n> \r\n> 7a.\r\n> +check_for_logical_replication_slots(ClusterInfo *new_cluster)\r\n> \r\n> IMO it is bad practice to name this argument 'new_cluster'. You will\r\n> end up shadowing the global variable of the same name. It seems in\r\n> other similar code where &new_cluster is passed as a parameter the\r\n> function arg there is called just 'cluster'.\r\n\r\nHmm, but check_for_new_tablespace_dir() has an argument 'new_cluster',\r\nAFAICS, the check function only called for new cluster has an argument \"new_cluster\",\r\nwhereas the function called for both or old cluster has \"cluster\". Am I missing\r\nsomething, or anyway it should be fixed? Currently I kept it.\r\n\r\n> 7b.\r\n> \"/* logical replication slots can be dumped since PG17. */\"\r\n> \r\n> Is \"dumped\" the correct word to be used here? Where is the \"dump\"?\r\n\r\nChanged to \"migrated\"\r\n\r\n> 7c.\r\n> \r\n> + if (max_replication_slots == 0)\r\n> + pg_fatal(\"max_replication_slots must be greater than 0\");\r\n> + else if (num_slots_on_old_cluster > max_replication_slots)\r\n> + pg_fatal(\"max_replication_slots must be greater than existing logical \"\r\n> + \"replication slots on old node.\");\r\n> \r\n> Why is the 1st condition here even needed? Isn't it sufficient just to\r\n> have that 2nd condition to check max_replication_slot is big enough?\r\n\r\nYeah, sufficient. This is a garbage of previous changes. Fixed.\r\n\r\n> 8. src/bin/pg_upgrade/dump.c\r\n> \r\n> {\r\n> char sql_file_name[MAXPGPATH],\r\n> log_file_name[MAXPGPATH];\r\n> +\r\n> DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\r\n> ~\r\n\r\nRemoved.\r\n\r\n> ======\r\n> src/bin/pg_upgrade/function.c\r\n> \r\n> 9. 
get_loadable_libraries -- GENERAL\r\n> \r\n> @@ -46,7 +46,8 @@ library_name_compare(const void *p1, const void *p2)\r\n> /*\r\n> * get_loadable_libraries()\r\n> *\r\n> - * Fetch the names of all old libraries containing C-language functions.\r\n> + * Fetch the names of all old libraries containing C-language functions, and\r\n> + * output plugins used by existing logical replication slots.\r\n> * We will later check that they all exist in the new installation.\r\n> */\r\n> void\r\n> @@ -66,14 +67,21 @@ get_loadable_libraries(void)\r\n> PGconn *conn = connectToServer(&old_cluster, active_db->db_name);\r\n> \r\n> /*\r\n> - * Fetch all libraries containing non-built-in C functions in this DB.\r\n> + * Fetch all libraries containing non-built-in C functions and\r\n> + * output plugins in this DB.\r\n> */\r\n> ress[dbnum] = executeQueryOrDie(conn,\r\n> \"SELECT DISTINCT probin \"\r\n> \"FROM pg_catalog.pg_proc \"\r\n> \"WHERE prolang = %u AND \"\r\n> \"probin IS NOT NULL AND \"\r\n> - \"oid >= %u;\",\r\n> + \"oid >= %u \"\r\n> + \"UNION \"\r\n> + \"SELECT DISTINCT plugin \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE wal_status <> 'lost' AND \"\r\n> + \"database = current_database() AND \"\r\n> + \"temporary IS FALSE;\",\r\n> ClanguageId,\r\n> FirstNormalObjectId);\r\n> totaltups += PQntuples(ress[dbnum]);\r\n> \r\n> ~\r\n> \r\n> Maybe it is OK, but it somehow seems like the new logic has been\r\n> jammed into the get_loadable_libraries() function for coding\r\n> convenience. For example, all the names (function names, variable\r\n> names, structure field names) are referring to \"libraries\", so the\r\n> plugin seems a bit out of place.\r\n\r\nPer discussion with Amit and you [1], I kept the style. Comments atop and in\r\nthe function was changed instead.\r\n\r\n> 10. get_loadable_libraries\r\n> \r\n> /* Fetch all library names, removing duplicates within each DB */\r\n> for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> ~\r\n> \r\n> This code comment still refers only to library names.\r\n\r\nI think this is right, because output plugins are also the library.\r\n\r\n> 10. get_loadable_libraries\r\n> \r\n> + \"UNION \"\r\n> + \"SELECT DISTINCT plugin \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE wal_status <> 'lost' AND \"\r\n> + \"database = current_database() AND \"\r\n> + \"temporary IS FALSE;\",\r\n> \r\n> IMO this SQL might be more readable if it uses an alias (like 'rs')\r\n> for the catalog. Then rs.wal_status, rs.database, rs.temporary etc.\r\n\r\nPer discussion with Amit and you [1], this comment was ignored.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 11. get_logical_slot_infos_per_db\r\n> \r\n> + snprintf(query, sizeof(query),\r\n> + \"SELECT slot_name, plugin, two_phase \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE database = current_database() AND temporary = false \"\r\n> + \"AND wal_status <> 'lost';\");\r\n> \r\n> There was similar SQL in get_loadable_libraries() but there you wrote:\r\n> \r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE wal_status <> 'lost' AND \"\r\n> + \"database = current_database() AND \"\r\n> + \"temporary IS FALSE;\",\r\n> \r\n> The WHERE condition order and case are all slightly different. IMO it\r\n> would be better for both SQL fragments to be exactly the same.\r\n\r\nUnified to later one.\r\n\r\n> 12. 
get_logical_slot_infos\r\n> \r\n> +int\r\n> +get_logical_slot_infos(ClusterInfo *cluster)\r\n> +{\r\n> + int dbnum;\r\n> + int slotnum = 0;\r\n> +\r\n> \r\n> I think 'slotnum' is not a good name. In other nearby code (e.g.\r\n> print_slot_infos) 'slotnum' is used to mean the index of each slot,\r\n> but here it means the total number of slots. How about a name like\r\n> 'slot_count' or 'nslots' something where the name is more meaningful?\r\n\r\nChanged to slot_count.\r\n\r\n> 13. free_db_and_rel_infos\r\n> \r\n> +\r\n> + /*\r\n> + * Logical replication slots must not exist on the new cluster before\r\n> + * doing create_logical_replication_slots().\r\n> + */\r\n> + Assert(db_arr->dbs[dbnum].slot_arr.slots == NULL);\r\n> \r\n> Isn't it more natural to do: Assert(db_arr->dbs[dbnum].slot_arr.nslots == 0);\r\n\r\nChanged.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.c\r\n> \r\n> 14. create_logical_replication_slots\r\n> \r\n> +create_logical_replication_slots(void)\r\n> +{\r\n> + int dbnum;\r\n> + int slotnum;\r\n> \r\n> The 'slotnum' can be declared at a lower scope than this to be closer\r\n> to where it is actually used.\r\n\r\nMoved.\r\n\r\n> 15. create_logical_replication_slots\r\n> \r\n> + for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> + {\r\n> + DbInfo *old_db = &old_cluster.dbarr.dbs[dbnum];\r\n> + LogicalSlotInfoArr *slot_arr = &old_db->slot_arr;\r\n> + PQExpBuffer query,\r\n> + escaped;\r\n> + PGconn *conn;\r\n> + char log_file_name[MAXPGPATH];\r\n> +\r\n> + /* Quick exit if there are no slots */\r\n> + if (!slot_arr->nslots)\r\n> + continue;\r\n> \r\n> The comment is misleading. There is no exiting. Maybe better to say\r\n> something like \"Skip this DB if there are no slots\".\r\n\r\nChanged.\r\n\r\n> 16. create_logical_replication_slots\r\n> \r\n> + appendPQExpBuffer(query, \"SELECT\r\n> pg_catalog.pg_create_logical_replication_slot(\");\r\n> + appendStringLiteral(query, slot_arr->slots[slotnum].slotname,\r\n> + slot_arr->encoding, slot_arr->std_strings);\r\n> + appendPQExpBuffer(query, \", \");\r\n> + appendStringLiteral(query, slot_arr->slots[slotnum].plugin,\r\n> + slot_arr->encoding, slot_arr->std_strings);\r\n> + appendPQExpBuffer(query, \", false, %s);\",\r\n> + slot_arr->slots[slotnum].two_phase ? \"true\" : \"false\");\r\n> \r\n> I noticed that the function comment for appendStringLiteral() says:\r\n> \"We need it in situations where we do not have a PGconn available.\r\n> Where we do, appendStringLiteralConn is a better choice.\".\r\n> \r\n> But in this code, we *do* have PGconn available. So, shouldn't we be\r\n> following the advice of the appendStringLiteral() function comment and\r\n> use the other API instead?\r\n\r\nChanged to use appendStringLiteralConn.\r\n\r\n> 17. create_logical_replication_slots\r\n> \r\n> + /*\r\n> + * The string must be escaped to shell-style, because there is a\r\n> + * possibility that output plugin name contains quotes. The output\r\n> + * string would be sandwiched by the single quotes, so it does not have\r\n> + * to be wrapped by any quotes when it is passed to\r\n> + * parallel_exec_prog().\r\n> + */\r\n> + appendShellString(escaped, query->data);\r\n> \r\n> /sandwiched by/enclosed by/ ???\r\n\r\nThis part was no longer needed because we do not bypass strings to the\r\nshell. The initial motivation of the change was to execute in parallel, and\r\nthe string was escaped with shell-style and pass to psql -c option for the\r\npurpose. But I found that it shows huge performance degradation, so reverted\r\nthe change. 
See my report [2].\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 18. LogicalSlotInfo\r\n> \r\n> +/*\r\n> + * Structure to store logical replication slot information\r\n> + */\r\n> +typedef struct\r\n> +{\r\n> + char *slotname; /* slot name */\r\n> + char *plugin; /* plugin */\r\n> + bool two_phase; /* Can the slot decode 2PC? */\r\n> +} LogicalSlotInfo;\r\n> \r\n> Looks a bit strange when the only last field comment is uppercase but\r\n> the others are not. Maybe lowercase everything like for other nearby\r\n> structs.\r\n\r\nChanged.\r\n\r\n> 19. LogicalSlotInfoArr\r\n> \r\n> +\r\n> +typedef struct\r\n> +{\r\n> + int nslots;\r\n> + LogicalSlotInfo *slots;\r\n> + int encoding;\r\n> + bool std_strings;\r\n> +} LogicalSlotInfoArr;\r\n> +\r\n> \r\n> The meaning of those fields is not always obvious. IMO they can all be\r\n> commented on.\r\n\r\nAdded. Note that encoding and std_strings were removed because it was\r\nused by appendStringLiteral().\r\n\r\n> .../pg_upgrade/t/003_logical_replication_slots.pl\r\n> \r\n> 20.\r\n> \r\n> # Cause a failure at the start of pg_upgrade because wal_level is replica\r\n> \r\n> ~\r\n> \r\n> I wondered if it would be clearer if you had to explicitly set the\r\n> new_node to \"replica\" initially, instead of leaving it default.\r\n\r\nChanged.\r\n\r\n> 21.\r\n> \r\n> # Cause a failure at the start of pg_upgrade because max_replication_slots is 0\r\n> \r\n> ~\r\n> \r\n> This related to my earlier code comment in this post -- I didn't\r\n> understand the need to specially test for 0. IIUC, we really are\r\n> interested only to know if there are *sufficient*\r\n> max_replication_slots.\r\n\r\nAgreed, removed.\r\n\r\n> 22.\r\n> \r\n> 'run of pg_upgrade of old node with small max_replication_slots');\r\n> \r\n> ~\r\n> \r\n> SUGGESTION\r\n> run of pg_upgrade where the new node has insufficient max_replication_slots\r\n\r\nChanged.\r\n\r\n> 23.\r\n> \r\n> # Preparations for the subsequent test. max_replication_slots is set to\r\n> # appropriate value\r\n> $new_node->append_conf('postgresql.conf', \"max_replication_slots = 10\");\r\n> \r\n> # Remove an unnecessary slot and consume WALs\r\n> $old_node->start;\r\n> $old_node->safe_psql(\r\n> 'postgres', qq[\r\n> SELECT pg_drop_replication_slot('test_slot1');\r\n> SELECT count(*) FROM pg_logical_slot_get_changes('test_slot2', NULL, NULL)\r\n> ]);\r\n> $old_node->stop;\r\n> \r\n> ~\r\n> \r\n> Some of that preparation seems unnecessary. I think the new node\r\n> max_replication_slots is 1 already, so if you are going to remove one\r\n> of test_slot1 here then there is only ONE slot left, right? So the\r\n> max_replication_slots on the new node should be OK now. Not only will\r\n> there be less test code needed here, but you will be testing the\r\n> boundary condition of max_replication_slots (which is probably a good\r\n> thing to do).\r\n\r\nRemoved.\r\n\r\nNext version would be available in the upcoming post.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1LhEwxQmK2ZepYTYDOKp6F8JCFbiBcw5EoQFbs-CjmY7Q%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/TYCPR01MB58701DAEE5E61B07AC84ADBBF51AA%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n\r\n",
"msg_date": "Fri, 18 Aug 2023 13:43:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nPSA new version patch set.\r\n\r\n> Here are some review comments for the patch v21-0003\r\n> \r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> pg_upgrade fails if the old node has slots which status is 'lost' or they do not\r\n> consume all WAL records. These are needed for prevent the data loss.\r\n> \r\n> ~\r\n> \r\n> Maybe some minor brush-up like:\r\n> \r\n> SUGGESTION\r\n> In order to prevent data loss, pg_upgrade will fail if the old node\r\n> has slots with the status 'lost', or with unconsumed WAL records.\r\n\r\nImproved.\r\n\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 2. check_for_confirmed_flush_lsn\r\n> \r\n> + /* Check that all logical slots are not in 'lost' state. */\r\n> + res = executeQueryOrDie(conn,\r\n> + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE temporary = false AND wal_status = 'lost';\");\r\n> +\r\n> + ntups = PQntuples(res);\r\n> + i_slotname = PQfnumber(res, \"slot_name\");\r\n> +\r\n> + for (i = 0; i < ntups; i++)\r\n> + {\r\n> + is_error = true;\r\n> +\r\n> + pg_log(PG_WARNING,\r\n> + \"\\nWARNING: logical replication slot \\\"%s\\\" is obsolete.\",\r\n> + PQgetvalue(res, i, i_slotname));\r\n> + }\r\n> +\r\n> + PQclear(res);\r\n> +\r\n> + if (is_error)\r\n> + pg_fatal(\"logical replication slots not to be in 'lost' state.\");\r\n> +\r\n> \r\n> 2a. (GENERAL)\r\n> The above code for checking lost state seems out of place in this\r\n> function which is meant for checking confirmed flush lsn.\r\n> \r\n> Maybe you jammed both kinds of logic into one function to save on the\r\n> extra PGconn or something but IMO two separate functions would be\r\n> better. e.g.\r\n> - check_for_lost_slots\r\n> - check_for_confirmed_flush_lsn\r\n\r\nSeparated into check_for_lost_slots and check_for_confirmed_flush_lsn.\r\n\r\n> 2b.\r\n> + /* Check that all logical slots are not in 'lost' state. */\r\n> \r\n> SUGGESTION\r\n> /* Check there are no logical replication slots with a 'lost' state. */\r\n\r\nChanged.\r\n\r\n> 2c.\r\n> + res = executeQueryOrDie(conn,\r\n> + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE temporary = false AND wal_status = 'lost';\");\r\n> \r\n> This SQL fragment is very much like others in previous patches. Be\r\n> sure to make all the cases and clauses consistent with all those\r\n> similar SQL fragments.\r\n\r\nUnified the order. Note that they could not be the completely the same.\r\n\r\n> 2d.\r\n> + is_error = true;\r\n> \r\n> That doesn't need to be in the loop. Better to just say:\r\n> is_error = (ntups > 0);\r\n\r\nRemoved the variable.\r\n\r\n> 2e.\r\n> There is a mix of terms in the WARNING and in the pg_fatal -- e.g.\r\n> \"obsolete\" versus \"lost\". Is it OK?\r\n\r\nUnified to 'lost'.\r\n\r\n> 2f.\r\n> + pg_fatal(\"logical replication slots not to be in 'lost' state.\");\r\n> \r\n> English? And maybe it should be much more verbose...\r\n> \r\n> \"Upgrade of this installation is not allowed because one or more\r\n> logical replication slots with a state of 'lost' were detected.\"\r\n\r\nI checked other pg_fatal() and the statement like \"Upgrade of this installation is not allowed\"\r\ncould not be found. So I used later part.\r\n\r\n> 3. check_for_confirmed_flush_lsn\r\n> \r\n> + /*\r\n> + * Check that all logical replication slots have reached the latest\r\n> + * checkpoint position (SHUTDOWN_CHECKPOINT record). 
This checks cannot\r\n> be\r\n> + * done in case of live_check because the server has not been written the\r\n> + * SHUTDOWN_CHECKPOINT record yet.\r\n> + */\r\n> + if (!live_check)\r\n> + {\r\n> + res = executeQueryOrDie(conn,\r\n> + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE confirmed_flush_lsn != '%X/%X' AND temporary = false;\",\r\n> + old_cluster.controldata.chkpnt_latest_upper,\r\n> + old_cluster.controldata.chkpnt_latest_lower);\r\n> +\r\n> + ntups = PQntuples(res);\r\n> + i_slotname = PQfnumber(res, \"slot_name\");\r\n> +\r\n> + for (i = 0; i < ntups; i++)\r\n> + {\r\n> + is_error = true;\r\n> +\r\n> + pg_log(PG_WARNING,\r\n> + \"\\nWARNING: logical replication slot \\\"%s\\\" has not consumed WALs yet\",\r\n> + PQgetvalue(res, i, i_slotname));\r\n> + }\r\n> +\r\n> + PQclear(res);\r\n> + PQfinish(conn);\r\n> +\r\n> + if (is_error)\r\n> + pg_fatal(\"All logical replication slots consumed all the WALs.\");\r\n> \r\n> ~\r\n> \r\n> 3a.\r\n> /This checks/This check/\r\n\r\nThe comment was no longer needed, because the caller checks live_check variable.\r\nMore detail, please see my another post [1].\r\n\r\n> 3b.\r\n> I don't think the separation of\r\n> chkpnt_latest_upper/chkpnt_latest_lower is needed like this. AFAIK\r\n> there is an LSN_FORMAT_ARGS(lsn) macro designed for handling exactly\r\n> this kind of parameter substitution.\r\n\r\nFixed to use the macro.\r\n\r\nPreviously I considered that the header \"access/xlogdefs.h\" could not be included\r\nfrom pg_upgrade, and it was the reason why I did not use. But it seemed my\r\nmisunderstanding - I could include the file.\r\n\r\n> 3c.\r\n> + is_error = true;\r\n> \r\n> That doesn't need to be in the loop. Better to just say:\r\n> is_error = (ntups > 0);\r\n\r\nRemoved.\r\n\r\n> 3d.\r\n> + pg_fatal(\"All logical replication slots consumed all the WALs.\");\r\n> \r\n> The message seems backward. shouldn't it say something like:\r\n> \"Upgrade of this installation is not allowed because one or more\r\n> logical replication slots still have unconsumed WAL records.\"\r\n\r\nI used only later part, see above reply.\r\n\r\n> src/bin/pg_upgrade/controldata.c\r\n> \r\n> 4. get_control_data\r\n> \r\n> + /*\r\n> + * Upper and lower part of LSN must be read and stored\r\n> + * separately because it is reported as %X/%X format.\r\n> + */\r\n> + cluster->controldata.chkpnt_latest_upper =\r\n> + strtoul(p, &slash, 16);\r\n> + cluster->controldata.chkpnt_latest_lower =\r\n> + strtoul(++slash, NULL, 16);\r\n> \r\n> I felt that this field separation code is maybe not necessary. Please\r\n> refer to other review comments in this post.\r\n\r\nHmm. I thought they must be read separately even if we stored as XLogRecPtr (uint64).\r\nThis is because the pg_controldata reports the LSN as %X/%X style. Am I missing something?\r\n\r\n```\r\n$ pg_controldata -D data_N1/ | grep \"Latest checkpoint location\"\r\nLatest checkpoint location: 0/153C8D0\r\n```\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 5. ControlData\r\n> \r\n> +\r\n> + uint32 chkpnt_latest_upper;\r\n> + uint32 chkpnt_latest_lower;\r\n> } ControlData;\r\n> \r\n> ~\r\n> \r\n> Actually, I did not recognise the reason why this cannot be stored\r\n> properly as a single XLogRecPtr field. Please see other review\r\n> comments in this post.\r\n\r\nChanged to use XLogRecPtr. See above comment.\r\n\r\n> .../t/003_logical_replication_slots.pl\r\n> \r\n> 6. 
GENERAL\r\n> \r\n> Many of the changes to this file are just renaming the\r\n> 'old_node'/'new_node' to 'old_publisher'/'new_publisher'.\r\n> \r\n> This seems a basic change not really associated with this patch 0003.\r\n> To reduce the code churn, this change should be moved into the earlier\r\n> patch where this test file (003_logical_replication_slots.pl) was\r\n> first introduced,\r\n\r\nMoved these renaming to 0002.\r\n\r\n> 7.\r\n> \r\n> # Cause a failure at the start of pg_upgrade because slot do not finish\r\n> # consuming all the WALs\r\n> \r\n> ~\r\n> \r\n> Can you give a more detailed explanation in the comment of how this\r\n> test case achieves what it says?\r\n\r\nSlightly reworded above and this comment. How do you think?\r\n\r\n> src/test/regress/sql/misc_functions.sql\r\n> \r\n> 8.\r\n> @@ -236,4 +236,4 @@ SELECT * FROM pg_split_walfile_name('invalid');\r\n> SELECT segment_number > 0 AS ok_segment_number, timeline_id\r\n> FROM pg_split_walfile_name('000000010000000100000000');\r\n> SELECT segment_number > 0 AS ok_segment_number, timeline_id\r\n> - FROM pg_split_walfile_name('ffffffFF00000001000000af');\r\n> + FROM pg_split_walfile_name('ffffffFF00000001000000af');\r\n> \\ No newline at end of file\r\n> \r\n> ~\r\n> \r\n> What is this change for? It looks like maybe some accidental\r\n> whitespace change happened.\r\n\r\nIt was unexpected, removed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866691219B9CB280B709600F51BA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 18 Aug 2023 13:51:36 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 7:21 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n\nFew comments on new patches:\n1.\n+ <link linkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION ... DISABLE</command></link>.\n+ After the upgrade is complete, execute the\n+ <command>ALTER SUBSCRIPTION ... CONNECTION</command> command to update the\n+ connection string, and then re-enable the subscription.\n\nWhy does one need to update the connection string?\n\n2.\n+ /*\n+ * Checking for logical slots must be done before\n+ * check_new_cluster_is_empty() because the slot_arr attribute of the\n+ * new_cluster will be checked in that function.\n+ */\n+ if (count_logical_slots(&old_cluster))\n+ {\n+ get_logical_slot_infos(&new_cluster, false);\n+ check_for_logical_replication_slots(&new_cluster);\n+ }\n+\n check_new_cluster_is_empty();\n\nCan't we simplify this checking by simply querying\npg_replication_slots for any usable slot something similar to what we\nare doing in check_for_prepared_transactions()? We can add this check\nin the function check_for_logical_replication_slots(). Also, do we\nneed a count function, or instead can we have a simple function like\nis_logical_slot_present() where we return even if there is one slot\npresent?\n\nApart from this, (a) I have made a few changes (changed comments) in\npatch 0001 as shared in the email [1]; (b) some modifications in the\ndocs as you can see in the attached. Please include those changes in\nthe next version if you think they are okay.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JzJagMmb_E8D4au%3DGYQkxox0AfNBm1FbP7sy7t4YWXPQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 19 Aug 2023 15:39:20 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 10:31 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Aug 17, 2023 at 6:07 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Tue, Aug 15, 2023 at 12:06 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Tue, Aug 15, 2023 at 7:51 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > On Mon, Aug 14, 2023 at 2:07 PM Amit Kapila <[email protected]> wrote:\n> > > > >\n> > > > > On Mon, Aug 14, 2023 at 7:57 AM Masahiko Sawada <[email protected]> wrote:\n> > > > > > Another idea is (which might have already discussed thoguh) that we check if the latest shutdown checkpoint LSN in the control file matches the confirmed_flush_lsn in pg_replication_slots view. That way, we can ensure that the slot has consumed all WAL records before the last shutdown. We don't need to worry about WAL records generated after starting the old cluster during the upgrade, at least for logical replication slots.\n> > > > > >\n> > > > >\n> > > > > Right, this is somewhat closer to what Patch is already doing. But\n> > > > > remember in this case we need to remember and use the latest\n> > > > > checkpoint from the control file before the old cluster is started\n> > > > > because otherwise the latest checkpoint location could be even updated\n> > > > > during the upgrade. So, instead of reading from WAL, we need to change\n> > > > > so that we rely on the control file's latest LSN.\n> > > >\n> > > > Yes, I was thinking the same idea.\n> > > >\n> > > > But it works for only replication slots for logical replication. Do we\n> > > > want to check if no meaningful WAL records are generated after the\n> > > > latest shutdown checkpoint, for manually created slots (or non-logical\n> > > > replication slots)? If so, we would need to have something reading WAL\n> > > > records in the end.\n> > > >\n> > >\n> > > This feature only targets logical replication slots. I don't see a\n> > > reason to be different for manually created logical replication slots.\n> > > Is there something particular that you think we could be missing?\n> >\n> > Sorry I was not clear. I meant the logical replication slots that are\n> > *not* used by logical replication, i.e., are created manually and used\n> > by third party tools that periodically consume decoded changes. As we\n> > discussed before, these slots will never be able to pass that\n> > confirmed_flush_lsn check.\n> >\n>\n> I think normally one would have a background process to periodically\n> consume changes. Won't one can use the walsender infrastructure for\n> their plugins to consume changes probably by using replication\n> protocol?\n\nNot sure.\n\n> Also, I feel it is the plugin author's responsibility to\n> consume changes or advance slot to the required position before\n> shutdown.\n\nHow does the plugin author ensure that the slot consumes all WAL\nrecords including shutdown_checkpoint before shutdown?\n\n>\n> > After some thoughts, one thing we might\n> > need to consider is that in practice, the upgrade project is performed\n> > during the maintenance window and has a backup plan that revert the\n> > upgrade process, in case something bad happens. 
If we require the\n> > users to drop such logical replication slots, they cannot resume to\n> > use the old cluster in that case, since they would need to create new\n> > slots, missing some changes.\n> >\n>\n> Can't one keep the backup before removing slots?\n\nYes, but restoring the backup could take time.\n\n>\n> > Other checks in pg_upgrade seem to be\n> > compatibility checks that would eventually be required for the upgrade\n> > anyway. Do we need to consider this case? For example, we do that\n> > confirmed_flush_lsn check for only the slots with pgoutput plugin.\n> >\n>\n> I think one is allowed to use pgoutput plugin even for manually\n> created slots. So, such a check may not work.\n\nRight, but I thought it's a very rare case.\n\nSince the slot's confirmed_flush_lsn check is not a compatibility\ncheck, unlike the existing checks, I wonder if we can make it optional.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 20 Aug 2023 22:18:42 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 10:51 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> PSA new version patch set.\n>\n\nI've looked at the v22 patch set, and here are some comments:\n\n0001:\n\nDo we need regression tests to make sure that the slot's\nconfirmed_flush_lsn matches the LSN of the latest shutdown_checkpoint\nrecord?\n\n0002:\n\n+ <step>\n+ <title>Prepare for publisher upgrades</title>\n+\n\nShould this step be done before \"8. Stop both servers\" as it might\nrequire to disable subscriptions and to drop 'lost' replication slots?\n\nWhy is there no explanation about the slots' confirmed_flush_lsn check\nas prerequisites?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 00:20:45 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sun, Aug 20, 2023 at 6:49 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Aug 17, 2023 at 10:31 PM Amit Kapila <[email protected]> wrote:\n> >\n> > >\n> > > Sorry I was not clear. I meant the logical replication slots that are\n> > > *not* used by logical replication, i.e., are created manually and used\n> > > by third party tools that periodically consume decoded changes. As we\n> > > discussed before, these slots will never be able to pass that\n> > > confirmed_flush_lsn check.\n> > >\n> >\n> > I think normally one would have a background process to periodically\n> > consume changes. Won't one can use the walsender infrastructure for\n> > their plugins to consume changes probably by using replication\n> > protocol?\n>\n> Not sure.\n>\n\nI think one can use Streaming Replication Protocol to achieve it [1].\n\n> > Also, I feel it is the plugin author's responsibility to\n> > consume changes or advance slot to the required position before\n> > shutdown.\n>\n> How does the plugin author ensure that the slot consumes all WAL\n> records including shutdown_checkpoint before shutdown?\n>\n\nBy using \"Streaming Replication Protocol\" so that walsender can take\ncare of it. If not, I think users should drop such slots before the\nupgrade because anyway, they won't be usable after the upgrade.\n\n> >\n> > > After some thoughts, one thing we might\n> > > need to consider is that in practice, the upgrade project is performed\n> > > during the maintenance window and has a backup plan that revert the\n> > > upgrade process, in case something bad happens. If we require the\n> > > users to drop such logical replication slots, they cannot resume to\n> > > use the old cluster in that case, since they would need to create new\n> > > slots, missing some changes.\n> > >\n> >\n> > Can't one keep the backup before removing slots?\n>\n> Yes, but restoring the back could take time.\n>\n> >\n> > > Other checks in pg_upgrade seem to be\n> > > compatibility checks that would eventually be required for the upgrade\n> > > anyway. Do we need to consider this case? For example, we do that\n> > > confirmed_flush_lsn check for only the slots with pgoutput plugin.\n> > >\n> >\n> > I think one is allowed to use pgoutput plugin even for manually\n> > created slots. So, such a check may not work.\n>\n> Right, but I thought it's a very rare case.\n>\n\nOkay, but not sure that we can ignore it.\n\n> Since the slot's flushed_confirmed_lsn check is not a compatibility\n> check unlike the existing check, I wonder if we can make it optional.\n>\n\nThere are arguments both ways. Initially, the patch proposed to make\nthem optional by having an option like\n--include-logical-replication-slots but Jonathan raised a point that\nit will be more work for users and should be the default. Then we also\ndiscussed having an option like --exclude-logical-replication-slots\nbut as we don't have any other similar option, it doesn't seem natural\nto add such an option. Also, I am afraid, if there is no user of such\nan option, it won't be worth it. BTW, how would you like to see it as\nan optional (via --include or via --exclude switch)?\n\nPersonally, I am okay to make it optional if we have a broader\nconsensus. 
My preference would be to have an --exclude kind of option.\nHow about first getting the main patch reviewed and committed, then\nbased on consensus, we can decide whether to make it optional and if\nso, what is the preferred way?\n\n[1] - https://www.postgresql.org/docs/current/protocol-replication.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Aug 2023 08:50:32 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some review comments for v22-0002\n\n======\nCommit Message\n\n1.\nThis commit allows nodes with logical replication slots to be upgraded. While\nreading information from the old cluster, a list of logical replication slots is\nnewly extracted. At the later part of upgrading, pg_upgrade revisits the list\nand restores slots by using the pg_create_logical_replication_slots() on the new\nclushter.\n\n~\n\n1a\n/is newly extracted/is fetched/\n\n~\n\n1b.\n/using the pg_create_logical_replication_slots()/executing\npg_create_logical_replication_slots()/\n\n~\n\n1c.\n/clushter/cluster/\n\n~~~\n\n2.\nNote that it must be done after the final pg_resetwal command during the upgrade\nbecause pg_resetwal will remove WALs that are required by the slots. Due to the\nrestriction, the timing of restoring replication slots is different from other\nobjects.\n\n~\n\n2a.\n/it must/slot restoration/\n\n~\n\n2b.\n/the restriction/this restriction/\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n3.\n+ <para>\n+ <application>pg_upgrade</application> attempts to migrate logical\n+ replication slots. This helps avoid the need for manually defining the\n+ same replication slot on the new publisher.\n+ </para>\n\n/same replication slot/same replication slots/\n\n~~~\n\n4.\n+ <para>\n+ Before you start upgrading the publisher cluster, ensure that the\n+ subscription is temporarily disabled, by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION ... DISABLE</command></link>.\n+ After the upgrade is complete, execute the\n+ <command>ALTER SUBSCRIPTION ... CONNECTION</command> command to update the\n+ connection string, and then re-enable the subscription.\n+ </para>\n\nOn the rendered page, it looks a bit strange that DISABLE has a link\nbut COMMENTION does not have a link.\n\n~~~\n\n5.\n+ <para>\n+ There are some prerequisites for <application>pg_upgrade</application> to\n+ be able to upgrade the replication slots. If these are not met an error\n+ will be reported.\n+ </para>\n+\n+ <itemizedlist>\n\n+1 to use all the itemizedlist changes that Amit suggested [1] in his\nattachment.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n6.\n+static void check_for_logical_replication_slots(ClusterInfo *new_cluster);\n\nIMO the arg name should not shadow a global with the same name. See\nother review comment for this function signature.\n\n~~~\n\n7.\n+ /* Extract a list of logical replication slots */\n+ get_logical_slot_infos(&old_cluster, live_check);\n\nBut 'live_check' is never used?\n\n~~~\n\n8. check_for_logical_replication_slots\n+\n+/*\n+ * Verify the parameter settings necessary for creating logical replication\n+ * slots.\n+ */\n+static void\n+check_for_logical_replication_slots(ClusterInfo *new_cluster)\n\nIMO the arg name should not shadow a global with the same name. If\nthis is never going to be called with any param other than\n&new_cluster then probably it is better not even to pass have that\nargument at all. Just refer to the global new_cluster inside the\nfunction.\n\nYou can't say that 'check_for_new_tablespace_dir' does it already so\nit must be OK -- I think that the existing function has the same issue\nand it also ought to be fixed to avoid shadowing!\n\n~~~\n\n9. check_for_logical_replication_slots\n\n+ /* logical replication slots can be migrated since PG17. 
*/\n+ if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1600)\n+ return;\n\nIMO the code matches the comment better if you say < 1700 instead of <= 1600.\n\n======\nsrc/bin/pg_upgrade/function.c\n\n10. get_loadable_libraries\n /*\n- * Fetch all libraries containing non-built-in C functions in this DB.\n+ * Fetch all libraries containing non-built-in C functions or referred\n+ * by logical replication slots in this DB.\n */\n ress[dbnum] = executeQueryOrDie(conn,\n~\n\n/referred by/referred to by/\n\n======\nsrc/bin/pg_upgrade/info.c\n\n11.\n+/*\n+ * get_logical_slot_infos()\n+ *\n+ * Higher level routine to generate LogicalSlotInfoArr for all databases.\n+ */\n+void\n+get_logical_slot_infos(ClusterInfo *cluster, bool live_check)\n+{\n+ int dbnum;\n+ int slot_count = 0;\n+\n+ if (cluster == &old_cluster)\n+ pg_log(PG_VERBOSE, \"\\nsource databases:\");\n+ else\n+ pg_log(PG_VERBOSE, \"\\ntarget databases:\");\n+\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ {\n+ DbInfo *pDbInfo = &cluster->dbarr.dbs[dbnum];\n+\n+ get_logical_slot_infos_per_db(cluster, pDbInfo);\n+ slot_count += pDbInfo->slot_arr.nslots;\n+\n+ if (log_opts.verbose)\n+ {\n+ pg_log(PG_VERBOSE, \"Database: \\\"%s\\\"\", pDbInfo->db_name);\n+ print_slot_infos(&pDbInfo->slot_arr);\n+ }\n+ }\n+}\n+\n\n11a.\nNow the variable 'slot_count' is no longer being returned so it seems redundant.\n\n~\n\n11b.\nWhat is the 'live_check' parameter for? Nobody is using it.\n\n~~~\n\n12. count_logical_slots\n\n+int\n+count_logical_slots(ClusterInfo *cluster)\n+{\n+ int dbnum;\n+ int slotnum = 0;\n+\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ slotnum += cluster->dbarr.dbs[dbnum].slot_arr.nslots;\n+\n+ return slotnum;\n+}\n\nIMO this variable should be called something like 'slot_count'. This\nis the same review comment also made in a previous review. (See [2]\ncomment#12).\n\n~~~\n\n13. print_slot_infos\n\n+\n+static void\n+print_slot_infos(LogicalSlotInfoArr *slot_arr)\n+{\n+ int slotnum;\n+\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\n+ slot_arr->slots[slotnum].slotname,\n+ slot_arr->slots[slotnum].plugin,\n+ slot_arr->slots[slotnum].two_phase);\n+}\n\nIt might be nicer to introduce a variable, instead of all those array\ndereferences:\n\nLogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\n\n~~~\n\n14.\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ {\n+ /*\n+ * Constructs query for creating logical replication slots.\n+ *\n+ * XXX: For simplification, pg_create_logical_replication_slot() is\n+ * used. Is it sufficient?\n+ */\n+ appendPQExpBuffer(query, \"SELECT\npg_catalog.pg_create_logical_replication_slot(\");\n+ appendStringLiteralConn(query, slot_arr->slots[slotnum].slotname,\n+ conn);\n+ appendPQExpBuffer(query, \", \");\n+ appendStringLiteralConn(query, slot_arr->slots[slotnum].plugin,\n+ conn);\n+ appendPQExpBuffer(query, \", false, %s);\",\n+ slot_arr->slots[slotnum].two_phase ? \"true\" : \"false\");\n+\n+ PQclear(executeQueryOrDie(conn, \"%s\", query->data));\n+\n+ resetPQExpBuffer(query);\n+ }\n+\n+ PQfinish(conn);\n+\n+ destroyPQExpBuffer(query);\n+ }\n+\n+ end_progress_output();\n+ check_ok();\n\n14a\nSimilar to the previous comment (#13). 
It might be nicer to introduce\na variable, instead of all those array dereferences:\n\nLogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\n~\n\n14b.\nIt was not clear to me why this command is not being built using\nexecuteQueryOrDie directly instead of using the query buffer. Is there\nsome reason?\n\n~\n\n14c.\nI think it would be cleaner to have a separate res variable like you\nused elsewhere:\nres = executeQueryOrDie(...)\n\ninstead of doing PQclear(executeQueryOrDie(conn, \"%s\", query->data));\nin one line\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.\n\n15.\n+void get_logical_slot_infos(ClusterInfo *cluster, bool live_check);\n\nI didn't see a reason for that 'live_check' parameter.\n\n======\n.../pg_upgrade/t/003_logical_replication_slots.pl\n\n16.\nIMO this would be much easier to read if there were BIG comments\nbetween the actual TEST parts\n\nFor example\n\n# ------------------------------\n# TEST: Confirm pg_upgrade fails is new node wal_level is not 'logical'\n<preparation>\n<test>\n<cleanup>\n\n# ------------------------------\n# TEST: Confirm pg_upgrade fails max_replication_slots on new node is too low\n<preparation>\n<test>\n<cleanup>\n\n# ------------------------------\n# TEST: Successful upgrade\n<preparation>\n<test>\n<cleanup>\n\n~~~\n\n17.\n+# Cause a failure at the start of pg_upgrade because wal_level is replica\n+command_fails(\n+ [\n+ 'pg_upgrade', '--no-sync',\n+ '-d', $old_publisher->data_dir,\n+ '-D', $new_publisher->data_dir,\n+ '-b', $bindir,\n+ '-B', $bindir,\n+ '-s', $new_publisher->host,\n+ '-p', $old_publisher->port,\n+ '-P', $new_publisher->port,\n+ $mode,\n+ ],\n+ 'run of pg_upgrade of old node with wrong wal_level');\n+ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\n+ \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\n\nThe message is ambiguous\n\nBEFORE\n'run of pg_upgrade of old node with wrong wal_level'\n\nSUGGESTION\n'run of pg_upgrade where the new node has the wrong wal_level'\n\n~~~\n\n18.\n+# Create an unnecessary slot on old node\n+$old_publisher->start;\n+$old_publisher->safe_psql(\n+ 'postgres', qq[\n+ SELECT pg_create_logical_replication_slot('test_slot2',\n'test_decoding', false, true);\n+]);\n+\n+$old_publisher->stop;\n+\n+# Preparations for the subsequent test. max_replication_slots is set to\n+# smaller than existing slots on old node\n+$new_publisher->append_conf('postgresql.conf', \"wal_level = 'logical'\");\n+$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\n\n\nIMO the comment is misleading. It is not an \"unnecessary slot\", it is\njust a 2nd slot. And this is all part of the preparation for the next\ntest so it should be under the other comment.\n\nFor example SUGGESTION changes like this:\n\n# Preparations for the subsequent test.\n# 1. Create an unnecessary slot on the old node\n$old_publisher->start;\n$old_publisher->safe_psql(\n'postgres', qq[\nSELECT pg_create_logical_replication_slot('test_slot2',\n'test_decoding', false, true);\n]);\n$old_publisher->stop;\n# 2. max_replication_slots is set to smaller than the number of slots\n(2) present on the old node\n$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\n# 3. 
new node wal_level is set correctly\n$new_publisher->append_conf('postgresql.conf', \"wal_level = 'logical'\");\n\n~~~\n\n19.\n+# Remove an unnecessary slot and consume WAL records\n+$old_publisher->start;\n+$old_publisher->safe_psql(\n+ 'postgres', qq[\n+ SELECT pg_drop_replication_slot('test_slot2');\n+ SELECT count(*) FROM pg_logical_slot_get_changes('test_slot1', NULL, NULL)\n+]);\n+$old_publisher->stop;\n+\n\nThis comment should say more like:\n\n# Preparations for the subsequent test.\n\n~~~\n\n20.\n+# Actual run, pg_upgrade_output.d is removed at the end\n\nThis comment should mention that \"successful upgrade is expected\"\nbecause all the other prerequisites are now satisfied.\n\n~~~\n\n21.\n+$new_publisher->start;\n+my $result = $new_publisher->safe_psql('postgres',\n+ \"SELECT slot_name, two_phase FROM pg_replication_slots\");\n+is($result, qq(test_slot1|t), 'check the slot exists on new node');\n\nShould there be a matching new_pulisher->stop;?\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1%2BdT2g8gmerguNd_TA%3DXMnm00nLzuEJ_Sddw6Pj-bvKVQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/TYAPR01MB586604802ABE42E11866762FF51BA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 21 Aug 2023 17:16:46 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Monday, August 21, 2023 11:21 AM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Sun, Aug 20, 2023 at 6:49 PM Masahiko Sawada\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Thu, Aug 17, 2023 at 10:31 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > >\r\n> > > > Sorry I was not clear. I meant the logical replication slots that\r\n> > > > are\r\n> > > > *not* used by logical replication, i.e., are created manually and\r\n> > > > used by third party tools that periodically consume decoded\r\n> > > > changes. As we discussed before, these slots will never be able to\r\n> > > > pass that confirmed_flush_lsn check.\r\n> > > >\r\n> > >\r\n> > > I think normally one would have a background process to periodically\r\n> > > consume changes. Won't one can use the walsender infrastructure for\r\n> > > their plugins to consume changes probably by using replication\r\n> > > protocol?\r\n> >\r\n> > Not sure.\r\n> >\r\n> \r\n> I think one can use Streaming Replication Protocol to achieve it [1].\r\n> \r\n> > > Also, I feel it is the plugin author's responsibility to consume\r\n> > > changes or advance slot to the required position before shutdown.\r\n> >\r\n> > How does the plugin author ensure that the slot consumes all WAL\r\n> > records including shutdown_checkpoint before shutdown?\r\n> >\r\n> \r\n> By using \"Streaming Replication Protocol\" so that walsender can take care of it.\r\n> If not, I think users should drop such slots before the upgrade because anyway,\r\n> they won't be usable after the upgrade.\r\n\r\nYes, I think pglogical is one example which start a bgworker(apply worker) on client to\r\nconsume changes which also uses Streaming Replication Protocol IIRC. And\r\npg_recvlogical is another example which connects to walsender and consume changes.\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n\r\n",
"msg_date": "Mon, 21 Aug 2023 09:45:44 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Friday, August 18, 2023 9:52 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Dear Peter,\r\n> \r\n> PSA new version patch set.\r\n\r\nThanks for updating the patch!\r\nHere are few comments about 0003 patch.\r\n\r\n1.\r\n\r\n+check_for_lost_slots(ClusterInfo *cluster)\r\n+{\r\n+\tint\t\t\ti,\r\n+\t\t\t\tntups,\r\n+\t\t\t\ti_slotname;\r\n+\tPGresult *res;\r\n+\tDbInfo\t *active_db = &cluster->dbarr.dbs[0];\r\n+\tPGconn\t *conn = connectToServer(cluster, active_db->db_name);\r\n+ \r\n+\t/* logical slots can be migrated since PG17. */\r\n+\tif (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n+\t\treturn;\r\n\r\nI think we should build connection after this check, otherwise the connection\r\nmay be left open after returning.\r\n\r\n\r\n2.\r\n+check_for_confirmed_flush_lsn(ClusterInfo *cluster)\r\n+{\r\n+\tint\t\t\ti,\r\n+\t\t\t\tntups,\r\n+\t\t\t\ti_slotname;\r\n+\tPGresult *res;\r\n+\tDbInfo\t *active_db = &cluster->dbarr.dbs[0];\r\n+\tPGconn\t *conn = connectToServer(cluster, active_db->db_name);\r\n+\r\n+\t/* logical slots can be migrated since PG17. */\r\n+\tif (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n+\t\treturn;\r\n\r\nSame as above.\r\n\r\n3.\r\n+\t\t\t\tif (GET_MAJOR_VERSION(cluster->major_version) >= 17)\r\n+\t\t\t\t{\r\n\r\nI think you mean 1700 here.\r\n\r\n\r\n4.\r\n+\t\t\t\t\tp = strpbrk(p, \"01234567890ABCDEF\");\r\n+\r\n+\t\t\t\t\t/*\r\n+\t\t\t\t\t * Upper and lower part of LSN must be read separately\r\n+\t\t\t\t\t * because it is reported as %X/%X format.\r\n+\t\t\t\t\t */\r\n+\t\t\t\t\tupper_lsn = strtoul(p, &slash, 16);\r\n+\t\t\t\t\tlower_lsn = strtoul(++slash, NULL, 16);\r\n\r\nMaybe we'd better add a sanity check after strpbrk like \"if (p == NULL ||\r\nstrlen(p) <= 1)\" to be consistent with other similar code.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Mon, 21 Aug 2023 10:12:03 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for giving comments! PSA new version patch set.\r\n\r\n> 1.\r\n> + <link linkend=\"sql-altersubscription\"><command>ALTER\r\n> SUBSCRIPTION ... DISABLE</command></link>.\r\n> + After the upgrade is complete, execute the\r\n> + <command>ALTER SUBSCRIPTION ... CONNECTION</command>\r\n> command to update the\r\n> + connection string, and then re-enable the subscription.\r\n> \r\n> Why does one need to update the connection string?\r\n\r\nI wrote like that because the old and new port number can be different. But you\r\nare partially right - it is not always needed. Updated to clarify that.\r\n\r\n> 2.\r\n> + /*\r\n> + * Checking for logical slots must be done before\r\n> + * check_new_cluster_is_empty() because the slot_arr attribute of the\r\n> + * new_cluster will be checked in that function.\r\n> + */\r\n> + if (count_logical_slots(&old_cluster))\r\n> + {\r\n> + get_logical_slot_infos(&new_cluster, false);\r\n> + check_for_logical_replication_slots(&new_cluster);\r\n> + }\r\n> +\r\n> check_new_cluster_is_empty();\r\n> \r\n> Can't we simplify this checking by simply querying\r\n> pg_replication_slots for any usable slot something similar to what we\r\n> are doing in check_for_prepared_transactions()? We can add this check\r\n> in the function check_for_logical_replication_slots().\r\n\r\nSome checks were included to check_for_logical_replication_slots(), and\r\nget_logical_slot_infos() for new_cluster was removed as you said.\r\n\r\nBut get_logical_slot_infos() cannot be removed completely, because the old\r\ncluster has already been shut down when the new cluster is checked. We must\r\nstore the information of old cluster on the memory.\r\n\r\nNote that the existence of slots are now checked in any cases because such slots\r\ncould not be used after the upgrade.\r\n\r\ncheck_new_cluster_is_empty() is no longer checks logical slots, so all changes for\r\nthis function was reverted.\r\n\r\n> Also, do we\r\n> need a count function, or instead can we have a simple function like\r\n> is_logical_slot_present() where we return even if there is one slot\r\n> \r\n\r\nI think this is still needed, because max_replication_slots and the number\r\nof existing replication slots must be compared.\r\n\r\nOf course we can add another simple function like\r\nis_logical_slot_present_on_old_cluster() and use in main(), but not sure defining\r\nsome similar functions are good.\r\n\r\n> Apart from this, (a) I have made a few changes (changed comments) in\r\n> patch 0001 as shared in the email [1]; (b) some modifications in the\r\n> docs as you can see in the attached. Please include those changes in\r\n> the next version if you think they are okay.\r\n\r\nI checked and your modification seems nice. \r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 21 Aug 2023 13:02:30 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\nThank you for reviewing! New patch set can be available in [1].\r\n\r\n> \r\n> 0001:\r\n> \r\n> Do we need regression tests to make sure that the slot's\r\n> confirmed_flush_lsn matches the LSN of the latest shutdown_checkpoint\r\n> record?\r\n\r\nAdded. I wondered the location of the test, but put on\r\ntest_decoding/t/002_always_persist.pl.\r\n\r\n> 0002:\r\n> \r\n> + <step>\r\n> + <title>Prepare for publisher upgrades</title>\r\n> +\r\n> \r\n> Should this step be done before \"8. Stop both servers\" as it might\r\n> require to disable subscriptions and to drop 'lost' replication slots?\r\n\r\nRight, moved.\r\n\r\n> Why is there no explanation about the slots' confirmed_flush_lsn check\r\n> as prerequisites?\r\n\r\nAdded.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB5870B5C0FE0C61CD04CBD719F51EA%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 21 Aug 2023 13:04:06 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! The patch can be available in [1].\r\n\r\n> Commit Message\r\n> \r\n> 1.\r\n> This commit allows nodes with logical replication slots to be upgraded. While\r\n> reading information from the old cluster, a list of logical replication slots is\r\n> newly extracted. At the later part of upgrading, pg_upgrade revisits the list\r\n> and restores slots by using the pg_create_logical_replication_slots() on the new\r\n> clushter.\r\n> \r\n> ~\r\n> \r\n> 1a\r\n> /is newly extracted/is fetched/\r\n\r\nFixed.\r\n\r\n> 1b.\r\n> /using the pg_create_logical_replication_slots()/executing\r\n> pg_create_logical_replication_slots()/\r\n\r\nFixed.\r\n\r\n> 1c.\r\n> /clushter/cluster/\r\n\r\nFixed.\r\n\r\n> 2.\r\n> Note that it must be done after the final pg_resetwal command during the upgrade\r\n> because pg_resetwal will remove WALs that are required by the slots. Due to the\r\n> restriction, the timing of restoring replication slots is different from other\r\n> objects.\r\n> \r\n> ~\r\n> \r\n> 2a.\r\n> /it must/slot restoration/\r\n\r\nYou meant to say s/it must/slot restoration must/, right? Fixed.\r\n\r\n> 2b.\r\n> /the restriction/this restriction/\r\n> \r\n> ======\r\n> doc/src/sgml/ref/pgupgrade.sgml\r\n> \r\n> 3.\r\n> + <para>\r\n> + <application>pg_upgrade</application> attempts to migrate logical\r\n> + replication slots. This helps avoid the need for manually defining the\r\n> + same replication slot on the new publisher.\r\n> + </para>\r\n> \r\n> /same replication slot/same replication slots/\r\n\r\nFixed.\r\n\r\n> 4.\r\n> + <para>\r\n> + Before you start upgrading the publisher cluster, ensure that the\r\n> + subscription is temporarily disabled, by executing\r\n> + <link linkend=\"sql-altersubscription\"><command>ALTER\r\n> SUBSCRIPTION ... DISABLE</command></link>.\r\n> + After the upgrade is complete, execute the\r\n> + <command>ALTER SUBSCRIPTION ... CONNECTION</command>\r\n> command to update the\r\n> + connection string, and then re-enable the subscription.\r\n> + </para>\r\n> \r\n> On the rendered page, it looks a bit strange that DISABLE has a link\r\n> but COMMENTION does not have a link.\r\n\r\nAdded.\r\n\r\n> 5.\r\n> + <para>\r\n> + There are some prerequisites for\r\n> <application>pg_upgrade</application> to\r\n> + be able to upgrade the replication slots. If these are not met an error\r\n> + will be reported.\r\n> + </para>\r\n> +\r\n> + <itemizedlist>\r\n> \r\n> +1 to use all the itemizedlist changes that Amit suggested [1] in his\r\n> attachment.\r\n\r\nYeah, I agreed it is nice. Applied.\r\n\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 6.\r\n> +static void check_for_logical_replication_slots(ClusterInfo *new_cluster);\r\n> \r\n> IMO the arg name should not shadow a global with the same name. See\r\n> other review comment for this function signature.\r\n\r\nOK, fixed.\r\n\r\n> 7.\r\n> + /* Extract a list of logical replication slots */\r\n> + get_logical_slot_infos(&old_cluster, live_check);\r\n> \r\n> But 'live_check' is never used?\r\n\r\nIt is needed for 0003, moved.\r\n\r\n> 8. check_for_logical_replication_slots\r\n> +\r\n> +/*\r\n> + * Verify the parameter settings necessary for creating logical replication\r\n> + * slots.\r\n> + */\r\n> +static void\r\n> +check_for_logical_replication_slots(ClusterInfo *new_cluster)\r\n> \r\n> IMO the arg name should not shadow a global with the same name. 
If\r\n> this is never going to be called with any param other than\r\n> &new_cluster then probably it is better not even to pass have that\r\n> argument at all. Just refer to the global new_cluster inside the\r\n> function.\r\n> \r\n> You can't say that 'check_for_new_tablespace_dir' does it already so\r\n> it must be OK -- I think that the existing function has the same issue\r\n> and it also ought to be fixed to avoid shadowing!\r\n\r\nFixed.\r\n\r\n> 9. check_for_logical_replication_slots\r\n> \r\n> + /* logical replication slots can be migrated since PG17. */\r\n> + if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1600)\r\n> + return;\r\n> \r\n> IMO the code matches the comment better if you say < 1700 instead of <= 1600.\r\n\r\nChanged.\r\n\r\n> src/bin/pg_upgrade/function.c\r\n> \r\n> 10. get_loadable_libraries\r\n> /*\r\n> - * Fetch all libraries containing non-built-in C functions in this DB.\r\n> + * Fetch all libraries containing non-built-in C functions or referred\r\n> + * by logical replication slots in this DB.\r\n> */\r\n> ress[dbnum] = executeQueryOrDie(conn,\r\n> ~\r\n> \r\n> /referred by/referred to by/\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 11.\r\n> +/*\r\n> + * get_logical_slot_infos()\r\n> + *\r\n> + * Higher level routine to generate LogicalSlotInfoArr for all databases.\r\n> + */\r\n> +void\r\n> +get_logical_slot_infos(ClusterInfo *cluster, bool live_check)\r\n> +{\r\n> + int dbnum;\r\n> + int slot_count = 0;\r\n> +\r\n> + if (cluster == &old_cluster)\r\n> + pg_log(PG_VERBOSE, \"\\nsource databases:\");\r\n> + else\r\n> + pg_log(PG_VERBOSE, \"\\ntarget databases:\");\r\n> +\r\n> + for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n> + {\r\n> + DbInfo *pDbInfo = &cluster->dbarr.dbs[dbnum];\r\n> +\r\n> + get_logical_slot_infos_per_db(cluster, pDbInfo);\r\n> + slot_count += pDbInfo->slot_arr.nslots;\r\n> +\r\n> + if (log_opts.verbose)\r\n> + {\r\n> + pg_log(PG_VERBOSE, \"Database: \\\"%s\\\"\", pDbInfo->db_name);\r\n> + print_slot_infos(&pDbInfo->slot_arr);\r\n> + }\r\n> + }\r\n> +}\r\n> +\r\n> \r\n> 11a.\r\n> Now the variable 'slot_count' is no longer being returned so it seems redundant.\r\n> \r\n> ~\r\n> \r\n> 11b.\r\n> What is the 'live_check' parameter for? Nobody is using it.\r\n\r\nThese are needed for 0003, moved.\r\n\r\n> 12. count_logical_slots\r\n> \r\n> +int\r\n> +count_logical_slots(ClusterInfo *cluster)\r\n> +{\r\n> + int dbnum;\r\n> + int slotnum = 0;\r\n> +\r\n> + for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n> + slotnum += cluster->dbarr.dbs[dbnum].slot_arr.nslots;\r\n> +\r\n> + return slotnum;\r\n> +}\r\n> \r\n> IMO this variable should be called something like 'slot_count'. This\r\n> is the same review comment also made in a previous review. (See [2]\r\n> comment#12).\r\n\r\nChanged.\r\n\r\n> 13. 
print_slot_infos\r\n> \r\n> +\r\n> +static void\r\n> +print_slot_infos(LogicalSlotInfoArr *slot_arr)\r\n> +{\r\n> + int slotnum;\r\n> +\r\n> + for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\r\n> + pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\r\n> + slot_arr->slots[slotnum].slotname,\r\n> + slot_arr->slots[slotnum].plugin,\r\n> + slot_arr->slots[slotnum].two_phase);\r\n> +}\r\n> \r\n> It might be nicer to introduce a variable, instead of all those array\r\n> dereferences:\r\n> \r\n> LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\r\n\r\nChanged.\r\n\r\n> 14.\r\n> + for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\r\n> + {\r\n> + /*\r\n> + * Constructs query for creating logical replication slots.\r\n> + *\r\n> + * XXX: For simplification, pg_create_logical_replication_slot() is\r\n> + * used. Is it sufficient?\r\n> + */\r\n> + appendPQExpBuffer(query, \"SELECT\r\n> pg_catalog.pg_create_logical_replication_slot(\");\r\n> + appendStringLiteralConn(query, slot_arr->slots[slotnum].slotname,\r\n> + conn);\r\n> + appendPQExpBuffer(query, \", \");\r\n> + appendStringLiteralConn(query, slot_arr->slots[slotnum].plugin,\r\n> + conn);\r\n> + appendPQExpBuffer(query, \", false, %s);\",\r\n> + slot_arr->slots[slotnum].two_phase ? \"true\" : \"false\");\r\n> +\r\n> + PQclear(executeQueryOrDie(conn, \"%s\", query->data));\r\n> +\r\n> + resetPQExpBuffer(query);\r\n> + }\r\n> +\r\n> + PQfinish(conn);\r\n> +\r\n> + destroyPQExpBuffer(query);\r\n> + }\r\n> +\r\n> + end_progress_output();\r\n> + check_ok();\r\n> \r\n> 14a\r\n> Similar to the previous comment (#13). It might be nicer to introduce\r\n> a variable, instead of all those array dereferences:\r\n> \r\n> LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\r\n\r\nChanged.\r\n\r\n> 14b.\r\n> It was not clear to me why this command is not being built using\r\n> executeQueryOrDie directly instead of using the query buffer. Is there\r\n> some reason?\r\n\r\nI wanted to take care the encoding, that was the reason I used PQExpBuffer\r\nfunctions, especially appendStringLiteralConn(). IIUC executeQueryOrDie() could\r\nnot take care it.\r\n\r\n> 14c.\r\n> I think it would be cleaner to have a separate res variable like you\r\n> used elsewhere:\r\n> res = executeQueryOrDie(...)\r\n> \r\n> instead of doing PQclear(executeQueryOrDie(conn, \"%s\", query->data));\r\n> in one line\r\n\r\nHmm, there are some use cases for PQclear(executeQueryOrDie(...)) style, e.g.,\r\nset_locale_and_encoding() and set_frozenxids(). I do not think your style is good\r\nif the result of the query is not used. please tell me if you find a case that\r\nres = executeQueryOrDie(...) 
is used but result is not checked.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.\r\n> \r\n> 15.\r\n> +void get_logical_slot_infos(ClusterInfo *cluster, bool live_check);\r\n> \r\n> I didn't see a reason for that 'live_check' parameter.\r\n\r\nIt was needed for 0003, moved.\r\n\r\n> .../pg_upgrade/t/003_logical_replication_slots.pl\r\n> \r\n> 16.\r\n> IMO this would be much easier to read if there were BIG comments\r\n> between the actual TEST parts\r\n> \r\n> For example\r\n> \r\n> # ------------------------------\r\n> # TEST: Confirm pg_upgrade fails is new node wal_level is not 'logical'\r\n> <preparation>\r\n> <test>\r\n> <cleanup>\r\n> \r\n> # ------------------------------\r\n> # TEST: Confirm pg_upgrade fails max_replication_slots on new node is too low\r\n> <preparation>\r\n> <test>\r\n> <cleanup>\r\n> \r\n> # ------------------------------\r\n> # TEST: Successful upgrade\r\n> <preparation>\r\n> <test>\r\n> <cleanup>\r\n\r\nAdded. 0003 also followed the style.\r\n\r\n> 17.\r\n> +# Cause a failure at the start of pg_upgrade because wal_level is replica\r\n> +command_fails(\r\n> + [\r\n> + 'pg_upgrade', '--no-sync',\r\n> + '-d', $old_publisher->data_dir,\r\n> + '-D', $new_publisher->data_dir,\r\n> + '-b', $bindir,\r\n> + '-B', $bindir,\r\n> + '-s', $new_publisher->host,\r\n> + '-p', $old_publisher->port,\r\n> + '-P', $new_publisher->port,\r\n> + $mode,\r\n> + ],\r\n> + 'run of pg_upgrade of old node with wrong wal_level');\r\n> +ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\r\n> + \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\r\n> \r\n> The message is ambiguous\r\n> \r\n> BEFORE\r\n> 'run of pg_upgrade of old node with wrong wal_level'\r\n> \r\n> SUGGESTION\r\n> 'run of pg_upgrade where the new node has the wrong wal_level'\r\n\r\nChanged.\r\n\r\n> 18.\r\n> +# Create an unnecessary slot on old node\r\n> +$old_publisher->start;\r\n> +$old_publisher->safe_psql(\r\n> + 'postgres', qq[\r\n> + SELECT pg_create_logical_replication_slot('test_slot2',\r\n> 'test_decoding', false, true);\r\n> +]);\r\n> +\r\n> +$old_publisher->stop;\r\n> +\r\n> +# Preparations for the subsequent test. max_replication_slots is set to\r\n> +# smaller than existing slots on old node\r\n> +$new_publisher->append_conf('postgresql.conf', \"wal_level = 'logical'\");\r\n> +$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\r\n> \r\n> \r\n> IMO the comment is misleading. It is not an \"unnecessary slot\", it is\r\n> just a 2nd slot. And this is all part of the preparation for the next\r\n> test so it should be under the other comment.\r\n> \r\n> For example SUGGESTION changes like this:\r\n> \r\n> # Preparations for the subsequent test.\r\n> # 1. Create an unnecessary slot on the old node\r\n> $old_publisher->start;\r\n> $old_publisher->safe_psql(\r\n> 'postgres', qq[\r\n> SELECT pg_create_logical_replication_slot('test_slot2',\r\n> 'test_decoding', false, true);\r\n> ]);\r\n> $old_publisher->stop;\r\n> # 2. max_replication_slots is set to smaller than the number of slots\r\n> (2) present on the old node\r\n> $new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\r\n> # 3. 
new node wal_level is set correctly\r\n> $new_publisher->append_conf('postgresql.conf', \"wal_level = 'logical'\");\r\n\r\nFollowed the style.\r\n\r\n> 19.\r\n> +# Remove an unnecessary slot and consume WAL records\r\n> +$old_publisher->start;\r\n> +$old_publisher->safe_psql(\r\n> + 'postgres', qq[\r\n> + SELECT pg_drop_replication_slot('test_slot2');\r\n> + SELECT count(*) FROM pg_logical_slot_get_changes('test_slot1', NULL,\r\n> NULL)\r\n> +]);\r\n> +$old_publisher->stop;\r\n> +\r\n> \r\n> This comment should say more like:\r\n> \r\n> # Preparations for the subsequent test.\r\n\r\nFollowed above style.\r\n\r\n> 20.\r\n> +# Actual run, pg_upgrade_output.d is removed at the end\r\n> \r\n> This comment should mention that \"successful upgrade is expected\"\r\n> because all the other prerequisites are now satisfied.\r\n\r\nThe suggestion was added to the comment\r\n\r\n> 21.\r\n> +$new_publisher->start;\r\n> +my $result = $new_publisher->safe_psql('postgres',\r\n> + \"SELECT slot_name, two_phase FROM pg_replication_slots\");\r\n> +is($result, qq(test_slot1|t), 'check the slot exists on new node');\r\n> \r\n> Should there be a matching new_pulisher->stop;?\r\n\r\nNot sure it is really needed, but added.\r\nAlso, the word \"node\" was replaced to \"cluster\" because the later word is used\r\nin the doc.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB5870B5C0FE0C61CD04CBD719F51EA%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 21 Aug 2023 13:04:58 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\r\n\r\nThank you for reviewing! The patch can be available in [1].\r\n\r\n> 1.\r\n> \r\n> +check_for_lost_slots(ClusterInfo *cluster)\r\n> +{\r\n> +\tint\t\t\ti,\r\n> +\t\t\t\tntups,\r\n> +\t\t\t\ti_slotname;\r\n> +\tPGresult *res;\r\n> +\tDbInfo\t *active_db = &cluster->dbarr.dbs[0];\r\n> +\tPGconn\t *conn = connectToServer(cluster, active_db->db_name);\r\n> +\r\n> +\t/* logical slots can be migrated since PG17. */\r\n> +\tif (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n> +\t\treturn;\r\n> \r\n> I think we should build connection after this check, otherwise the connection\r\n> may be left open after returning.\r\n\r\nFixed.\r\n\r\n> 2.\r\n> +check_for_confirmed_flush_lsn(ClusterInfo *cluster)\r\n> +{\r\n> +\tint\t\t\ti,\r\n> +\t\t\t\tntups,\r\n> +\t\t\t\ti_slotname;\r\n> +\tPGresult *res;\r\n> +\tDbInfo\t *active_db = &cluster->dbarr.dbs[0];\r\n> +\tPGconn\t *conn = connectToServer(cluster, active_db->db_name);\r\n> +\r\n> +\t/* logical slots can be migrated since PG17. */\r\n> +\tif (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n> +\t\treturn;\r\n> \r\n> Same as above.\r\n\r\nFixed.\r\n\r\n> 3.\r\n> +\t\t\t\tif\r\n> (GET_MAJOR_VERSION(cluster->major_version) >= 17)\r\n> +\t\t\t\t{\r\n> \r\n> I think you mean 1700 here.\r\n\r\nRight, fixed.\r\n\r\n> 4.\r\n> +\t\t\t\t\tp = strpbrk(p,\r\n> \"01234567890ABCDEF\");\r\n> +\r\n> +\t\t\t\t\t/*\r\n> +\t\t\t\t\t * Upper and lower part of LSN must\r\n> be read separately\r\n> +\t\t\t\t\t * because it is reported as %X/%X\r\n> format.\r\n> +\t\t\t\t\t */\r\n> +\t\t\t\t\tupper_lsn = strtoul(p, &slash, 16);\r\n> +\t\t\t\t\tlower_lsn = strtoul(++slash, NULL,\r\n> 16);\r\n> \r\n> Maybe we'd better add a sanity check after strpbrk like \"if (p == NULL ||\r\n> strlen(p) <= 1)\" to be consistent with other similar code.\r\n\r\nAdded.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB5870B5C0FE0C61CD04CBD719F51EA%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 21 Aug 2023 13:05:57 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san,\n\nHere are some review comments for v22-0003.\n\n(FYI, I was already mid-way through this review before you posted new v23*\npatches, so I am posting it anyway in case some comments still apply.)\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_for_lost_slots\n\n+ /* logical slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\n+ return;\n\n1a\nMaybe the comment should start uppercase for consistency with others.\n\n~\n\n1b.\nIMO if you check < 1700 instead of <= 1600 it will be a better match with\nthe comment.\n\n~~~\n\n2. check_for_lost_slots\n+ for (i = 0; i < ntups; i++)\n+ {\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" is in 'lost' state.\",\n+ PQgetvalue(res, i, i_slotname));\n+ }\n+\n+\n\nThe braces {} are not needed anymore\n\n~~~\n\n3. check_for_confirmed_flush_lsn\n\n+ /* logical slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\n+ return;\n\n3a.\nMaybe the comment should start uppercase for consistency with others.\n\n~\n\n3b.\nIMO if you check < 1700 instead of <= 1600 it will be a better match with\nthe comment.\n\n~~~\n\n4. check_for_confirmed_flush_lsn\n+ for (i = 0; i < ntups; i++)\n+ {\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" has not consumed WALs yet\",\n+ PQgetvalue(res, i, i_slotname));\n+ }\n+\n\nThe braces {} are not needed anymore\n\n======\nsrc/bin/pg_upgrade/controldata.c\n\n5. get_control_data\n+ /*\n+ * Gather latest checkpoint location if the cluster is newer or\n+ * equal to 17. This is used for upgrading logical replication\n+ * slots.\n+ */\n+ if (GET_MAJOR_VERSION(cluster->major_version) >= 17)\n\n5a.\n/newer or equal to 17/PG17 or later/\n\n~~~\n\n5b.\n>= 17 should be >= 1700\n\n~~~\n\n6. get_control_data\n+ {\n+ char *slash = NULL;\n+ uint64 upper_lsn, lower_lsn;\n+\n+ p = strchr(p, ':');\n+\n+ if (p == NULL || strlen(p) <= 1)\n+ pg_fatal(\"%d: controldata retrieval problem\", __LINE__);\n+\n+ p++; /* remove ':' char */\n+\n+ p = strpbrk(p, \"01234567890ABCDEF\");\n+\n+ /*\n+ * Upper and lower part of LSN must be read separately\n+ * because it is reported as %X/%X format.\n+ */\n+ upper_lsn = strtoul(p, &slash, 16);\n+ lower_lsn = strtoul(++slash, NULL, 16);\n+\n+ /* And combine them */\n+ cluster->controldata.chkpnt_latest =\n+ (upper_lsn << 32) | lower_lsn;\n+ }\n\nShould 'upper_lsn' and 'lower_lsn' be declared as uint32? That seems a\nbetter mirror for LSN_FORMAT_ARGS.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n7. get_logical_slot_infos\n+\n+ /*\n+ * Do additional checks if slots are found on the old node. 
If something is\n+ * found on the new node, a subsequent function\n+ * check_new_cluster_is_empty() would report the name of slots and raise a\n+ * fatal error.\n+ */\n+ if (cluster == &old_cluster && slot_count)\n+ {\n+ check_for_lost_slots(cluster);\n+\n+ if (!live_check)\n+ check_for_confirmed_flush_lsn(cluster);\n+ }\n\nIt somehow doesn't feel right for these extra checks to be jammed into this\nfunction, just because you conveniently have the slot_count available.\n\nOn the NEW cluster side, there was extra checking in the\ncheck_new_cluster() function.\n\nFor consistency, I think this OLD cluster checking should be done in the\ncheck_and_dump_old_cluster() function -- see the \"Check for various failure\ncases\" comment -- IMO this new fragment belongs there with the other checks.\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n8.\n bool date_is_int;\n bool float8_pass_by_value;\n uint32 data_checksum_version;\n+\n+ XLogRecPtr chkpnt_latest;\n } ControlData;\n\nI don't think the new field is particularly different from all the others\nthat it needs a blank line separator.\n\n======\n.../t/003_logical_replication_slots.pl\n\n9.\n # Initialize old node\n my $old_publisher = PostgreSQL::Test::Cluster->new('old_publisher');\n $old_publisher->init(allows_streaming => 'logical');\n-$old_publisher->start;\n\n # Initialize new node\n my $new_publisher = PostgreSQL::Test::Cluster->new('new_publisher');\n $new_publisher->init(allows_streaming => 'replica');\n\n-my $bindir = $new_publisher->config_data('--bindir');\n+# Initialize subscriber node\n+my $subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n+$subscriber->init(allows_streaming => 'logical');\n\n-$old_publisher->stop;\n+my $bindir = $new_publisher->config_data('--bindir');\n\n~\n\nAre those removal of the old_publisher start/stop changes that actually\nshould be done in the 0002 patch?\n\n~~~\n\n10.\n $old_publisher->safe_psql(\n 'postgres', qq[\n SELECT pg_create_logical_replication_slot('test_slot2', 'test_decoding',\nfalse, true);\n+ SELECT count(*) FROM pg_logical_slot_get_changes('test_slot1', NULL,\nNULL);\n ]);\n\n~\n\nWhat is the purpose of the added SELECT? It doesn't seem covered by the\ncomment.\n\n~~~\n\n11.\n# Remove an unnecessary slot and generate WALs. These records would not be\n# consumed before doing pg_upgrade, so that the upcoming test would fail.\n$old_publisher->start;\n$old_publisher->safe_psql(\n'postgres', qq[\nSELECT pg_drop_replication_slot('test_slot2');\nCREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\n]);\n$old_publisher->stop;\n\nMinor rewording of comment sentence.\n\nSUGGESTION\nBecause these WAL records do not get consumed it will cause the upcoming\npg_upgrade test to fail.\n\n~~~\n\n12.\n# Cause a failure at the start of pg_upgrade because the slot still have\n# unconsumed WAL records\n\n~\n\n/still have/still has/\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n",
"msg_date": "Tue, 22 Aug 2023 10:31:25 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some review comments for v23-0001\n\n======\n1. GENERAL -- git apply\n\nThe patch fails to apply cleanly. There are whitespace warnings.\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ git apply\n../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch\n../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch:102:\ntrailing whitespace.\n# SHUTDOWN_CHECKPOINT record.\nwarning: 1 line adds whitespace errors.\n\n~~~\n\n2. GENERAL -- which patch is the real one and which is the copy?\n\nIMO this patch has become muddled.\n\nAmit recently created a new thread [1] \"persist logical slots to disk\nduring shutdown checkpoint\", which I thought was dedicated to the\ndiscussion/implementation of this 0001 patch. Therefore, I expected any\n0001 patch changes to would be made only in that new thread from now on,\n(and maybe you would mirror them here in this thread).\n\nBut now I see there are v23-0001 patch changes here again. So, now the same\npatch is in 2 places and they are different. It is no longer clear to me\nwhich 0001 (\"Always persist...\") patch is the definitive one, and which one\nis the copy.\n\n??\n\n======\ncontrib/test_decoding/t/002_always_persist.pl\n\n3.\n+\n+# Copyright (c) 2023, PostgreSQL Global Development Group\n+\n+# Test logical replication slots are always persist to disk during a\nshutdown\n+# checkpoint.\n+\n+use strict;\n+use warnings;\n+\n+use PostgreSQL::Test::Cluster;\n+use PostgreSQL::Test::Utils;\n+use Test::More;\n\n\n/always persist/always persisted/\n\n~~~\n\n4.\n+\n+# Test set-up\n+my $node = PostgreSQL::Test::Cluster->new('test');\n+$node->init(allows_streaming => 'logical');\n+$node->append_conf('postgresql.conf', q{\n+autovacuum = off\n+checkpoint_timeout = 1h\n+});\n+\n+$node->start;\n+\n+# Create table\n+$node->safe_psql('postgres', \"CREATE TABLE test (id int)\");\n\nMaybe it is better to call the table something different instead of the\nsame name as the cluster. e.g. 'test_tbl' would be better.\n\n~~~\n\n5.\n+# Shutdown the node once to do shutdown checkpoint\n+$node->stop();\n+\n\nSUGGESTION\n# Stop the node to cause a shutdown checkpoint\n\n~~~\n\n6.\n+# Fetch checkPoint from the control file itself\n+my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);\n+my @control_data = split(\"\\n\", $stdout);\n+my $latest_checkpoint = undef;\n+foreach (@control_data)\n+{\n+ if ($_ =~ /^Latest checkpoint location:\\s*(.*)$/mg)\n+ {\n+ $latest_checkpoint = $1;\n+ last;\n+ }\n+}\n+die \"No checkPoint in control file found\\n\"\n+ unless defined($latest_checkpoint);\n+\n\n6a.\n/checkPoint/checkpoint/ (2x)\n\n~\n\n6b.\n+die \"No checkPoint in control file found\\n\"\n\nSUGGESTION\n\"No checkpoint found in control file\\n\"\n\n------\n[1]\nhttps://www.postgresql.org/message-id/CAA4eK1JzJagMmb_E8D4au=GYQkxox0AfNBm1FbP7sy7t4YWXPQ@mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nHere are some review comments for v23-0001======1. GENERAL -- git applyThe patch fails to apply cleanly. There are whitespace warnings.[postgres@CentOS7-x64 oss_postgres_misc]$ git apply ../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch:102: trailing whitespace.# SHUTDOWN_CHECKPOINT record. warning: 1 line adds whitespace errors.~~~2. 
GENERAL -- which patch is the real one and which is the copy?IMO this patch has become muddled.Amit recently created a new thread [1] \"persist logical slots to disk during shutdown checkpoint\", which I thought was dedicated to the discussion/implementation of this 0001 patch. Therefore, I expected any 0001 patch changes to would be made only in that new thread from now on, (and maybe you would mirror them here in this thread).But now I see there are v23-0001 patch changes here again. So, now the same patch is in 2 places and they are different. It is no longer clear to me which 0001 (\"Always persist...\") patch is the definitive one, and which one is the copy.??======contrib/test_decoding/t/002_always_persist.pl3.++# Copyright (c) 2023, PostgreSQL Global Development Group++# Test logical replication slots are always persist to disk during a shutdown+# checkpoint.++use strict;+use warnings;++use PostgreSQL::Test::Cluster;+use PostgreSQL::Test::Utils;+use Test::More;/always persist/always persisted/~~~4.++# Test set-up+my $node = PostgreSQL::Test::Cluster->new('test');+$node->init(allows_streaming => 'logical');+$node->append_conf('postgresql.conf', q{+autovacuum = off+checkpoint_timeout = 1h+});++$node->start;++# Create table+$node->safe_psql('postgres', \"CREATE TABLE test (id int)\");Maybe it is better to call the table something different instead of the same name as the cluster. e.g. 'test_tbl' would be better.~~~5.+# Shutdown the node once to do shutdown checkpoint+$node->stop();+SUGGESTION# Stop the node to cause a shutdown checkpoint~~~6.+# Fetch checkPoint from the control file itself+my ($stdout, $stderr) = run_command([ 'pg_controldata', $node->data_dir ]);+my @control_data = split(\"\\n\", $stdout);+my $latest_checkpoint = undef;+foreach (@control_data)+{+\tif ($_ =~ /^Latest checkpoint location:\\s*(.*)$/mg)+\t{+\t\t$latest_checkpoint = $1;+\t\tlast;+\t}+}+die \"No checkPoint in control file found\\n\"+ unless defined($latest_checkpoint);+6a./checkPoint/checkpoint/ (2x)~6b.+die \"No checkPoint in control file found\\n\"SUGGESTION\"No checkpoint found in control file\\n\"------[1] https://www.postgresql.org/message-id/CAA4eK1JzJagMmb_E8D4au=GYQkxox0AfNBm1FbP7sy7t4YWXPQ@mail.gmail.comKind Regards,Peter Smith.Fujitsu Australia",
"msg_date": "Tue, 22 Aug 2023 11:49:09 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 7:19 AM Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for v23-0001\n>\n> ======\n> 1. GENERAL -- git apply\n>\n> The patch fails to apply cleanly. There are whitespace warnings.\n>\n> [postgres@CentOS7-x64 oss_postgres_misc]$ git apply ../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch\n> ../patches_misc/v23-0001-Always-persist-to-disk-logical-slots-during-a-sh.patch:102: trailing whitespace.\n> # SHUTDOWN_CHECKPOINT record.\n> warning: 1 line adds whitespace errors.\n>\n> ~~~\n>\n> 2. GENERAL -- which patch is the real one and which is the copy?\n>\n> IMO this patch has become muddled.\n>\n> Amit recently created a new thread [1] \"persist logical slots to disk during shutdown checkpoint\", which I thought was dedicated to the discussion/implementation of this 0001 patch.\n>\n\nRight, I feel it would be good to discuss 0001 on the new thread.\nHere, we can just include it for the sake of completeness and testing\npurposes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 10:15:07 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 6:35 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 9. check_for_logical_replication_slots\n> >\n> > + /* logical replication slots can be migrated since PG17. */\n> > + if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1600)\n> > + return;\n> >\n> > IMO the code matches the comment better if you say < 1700 instead of <= 1600.\n>\n> Changed.\n>\n\nI think it is better to be consistent with the existing code. There\nare a few other checks in pg_upgrade.c that uses <=, so it is better\nto use it in the same way here.\n\nAnother minor comment:\nNote that\n+ if the new cluser uses different port number from old one,\n+ <link linkend=\"sql-altersubscription\"><command>ALTER\nSUBSCRIPTION ... CONNECTION</command></link>\n+ command must be also executed on subscriber.\n\nI think this is true in general as well and not specific to\npg_upgrade. So, we can avoid adding anything about connection change\nhere.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 11:31:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san,\n\nHere are some review comments for patch v23-0002\n\n======\n1. GENERAL\n\nPlease try to run a spell/grammar check on all the text like commit message\nand docs changes before posting (e.g. cut/paste the rendered text into some\ntool like MSWord or Grammarly or ChatGPT or whatever tool you like and\ncross-check). There are lots of small typos etc but one up-front check\ncould avoid long cycles of\nreviewing/reporting/fixing/re-posting/confirming...\n\n======\nCommit message\n\n2.\nNote that slot restoration must be done after the final pg_resetwal command\nduring the upgrade because pg_resetwal will remove WALs that are required by\nthe slots. Due to ths restriction, the timing of restoring replication\nslots is\ndifferent from other objects.\n\n~\n\n/ths/this/\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n3.\n+ <para>\n+ Before you start upgrading the publisher cluster, ensure that the\n+ subscription is temporarily disabled, by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\nDISABLE</command></link>.\n+ After the upgrade is complete, then re-enable the subscription. Note\nthat\n+ if the new cluser uses different port number from old one,\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\nCONNECTION</command></link>\n+ command must be also executed on subscriber.\n+ </para>\n\n3a.\nBEFORE\nAfter the upgrade is complete, then re-enable the subscription.\n\nSUGGESTION\nRe-enable the subscription after the upgrade.\n\n~\n\n3b.\n/cluser/cluster/\n\n~\n\n3c.\nNote that\n+ if the new cluser uses different port number from old one,\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ...\nCONNECTION</command></link>\n+ command must be also executed on subscriber.\n\nSUGGESTION\nNote that if the new cluster uses a different port number ALTER\nSUBSCRIPTION ... CONNECTION command must be also executed on the subscriber.\n\n~~~\n\n4.\n+ <listitem>\n+ <para>\n+ <structfield>confirmed_flush_lsn</structfield> (see <xref\nlinkend=\"view-pg-replication-slots\"/>)\n+ of all slots on old cluster must be same as latest checkpoint\nlocation.\n+ </para>\n+ </listitem>\n\n4a.\n/on old cluster/on the old cluster/\n\n~\n\n4b.\n/as latest/as the latest/\n~~\n\n5.\n+ <listitem>\n+ <para>\n+ The output plugins referenced by the slots on the old cluster must\nbe\n+ installed on the new PostgreSQL executable directory.\n+ </para>\n+ </listitem>\n\n/installed on/installed in/ ??\n\n~~\n\n6.\n+ <listitem>\n+ <para>\n+ The new cluster must have\n+ <link\nlinkend=\"guc-max-replication-slots\"><varname>max_replication_slots</varname></link>\n+ configured to value larger than the existing slots on the old\ncluster.\n+ </para>\n+ </listitem>\n\nBEFORE\n...to value larger than the existing slots on the old cluster.\n\nSUGGESTION\n...to a value greater than or equal to the number of slots present on the\nold cluster.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n7. GENERAL - check_for_logical_replication_slots\n\nAFAICT this function is called *only* for the new_cluster, yet there is no\nAssert and no checking inside this function to ensure that is the case or\nnot. 
It seems strange that the *cluster is passed as an argument but then\nthe whole function body and messages assume it can only be a new cluster\nanyway.\n\nIMO it would be better to rename this function to something like\ncheck_new_cluster_logical_replication_slots() and DO NOT pass any parameter\nbut just use the global new_cluster within the function body.\n\n~~~\n\n8. check_for_logical_replication_slots\n\n+ /* logical replication slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(cluster->major_version) < 1700)\n+ return;\n\nStart comment with uppercase for consistency.\n\n~~~\n\n9. check_for_logical_replication_slots\n\n+ res = executeQueryOrDie(conn, \"SELECT slot_name \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE slot_type = 'logical' AND \"\n+ \"temporary IS FALSE;\");\n+\n+ if (PQntuples(res))\n+ pg_fatal(\"New cluster must not have logical replication slot, but found\n\\\"%s\\\"\",\n+ PQgetvalue(res, 0, 0));\n\n/replication slot/replication slots/\n\n~\n\n10. check_for_logical_replication_slots\n\n+ /*\n+ * Do additional checks when the logical replication slots have on the old\n+ * cluster.\n+ */\n+ if (nslots)\n\nSUGGESTION\nDo additional checks when there are logical replication slots on the old\ncluster.\n\n~~~\n\n11.\n+ if (nslots > max_replication_slots)\n+ pg_fatal(\"max_replication_slots must be greater than or equal to existing\nlogical \"\n+ \"replication slots on old cluster.\");\n\n11a.\nSUGGESTION\nmax_replication_slots (%d) must be greater than or equal to the number of\nlogical replication slots (%d) on the old cluster.\n\n~\n\n11b.\nI think it would be helpful for the current values to be displayed in the\nfatal message so the user will know more about what value to set. Notice\nthat my above suggestion has some substitution markers.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n12.\n+static void\n+print_slot_infos(LogicalSlotInfoArr *slot_arr)\n+{\n+ int slotnum;\n+\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ {\n+ LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\n+ pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\n+ slot_info->slotname,\n+ slot_info->plugin,\n+ slot_info->two_phase);\n+ }\n+}\n\nBetter to have a blank line after the 'slot_info' declaration.\n\n======\n.../pg_upgrade/t/003_logical_replication_slots.pl\n\n13.\n+# ------------------------------\n+# TEST: Confirm pg_upgrade fails when new cluster wal_level is not\n'logical'\n+\n+# Create a slot on old cluster\n+$old_publisher->start;\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT pg_create_logical_replication_slot('test_slot1', 'test_decoding',\nfalse, true);\"\n+);\n+$old_publisher->stop;\n\n13a.\nIt would be nicer if all the test parts have identical formats. So here it\nshould also say\n\n# Preparations for the subsequent test:\n# 1. Create a slot on the old cluster\n\n~\n\n13b.\nNotice the colon (:) at the end of that comment \"Preparations for the\nsubsequent test:\". All the other preparation comments in this file should\nalso have a colon.\n\n~\n\n14.\n+# Cause a failure at the start of pg_upgrade because wal_level is replica\n\nSUGGESTION\n# pg_upgrade will fail because the new cluster wal_level is 'replica'\n\n~~~\n\n15.\n+# 1. Create an unnecessary slot on the old cluster\n\n(but it is not unnecessary -- it is necessary for this test!)\n\nSUGGESTION\n+# 1. 
Create a second slot on the old cluster\n\n~~~\n\n16.\n+# Cause a failure at the start of pg_upgrade because the new cluster has\n+# insufficient max_replication_slots\n\nSUGGESTION\n# pg_upgrade will fail because the new cluster has insufficient\nmax_replication_slots\n\n~~~\n\n17.\n+# Preparations for the subsequent test.\n+# 1. Remove an unnecessary slot\n\nSUGGESTION\n+# 1. Remove the slot 'test_slot2', leaving only 1 slot remaining on the\nold cluster, so the new cluster config max_replication_slots=1 will now be\nenough.\n\n~~~\n\n18.\n+$new_publisher->start;\n+my $result = $new_publisher->safe_psql('postgres',\n+ \"SELECT slot_name, two_phase FROM pg_replication_slots\");\n+is($result, qq(test_slot1|t), 'check the slot exists on new cluster');\n+$new_publisher->stop;\n+\n+done_testing();\n\nMaybe should be some added comments like:\n# Check that the slot 'test_slot1' has migrated to the new cluster.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nHi Kuroda-san,Here are some review comments for patch v23-0002======1. GENERALPlease try to run a spell/grammar check on all the text like commit message and docs changes before posting (e.g. cut/paste the rendered text into some tool like MSWord or Grammarly or ChatGPT or whatever tool you like and cross-check). There are lots of small typos etc but one up-front check could avoid long cycles of reviewing/reporting/fixing/re-posting/confirming...======Commit message2.Note that slot restoration must be done after the final pg_resetwal commandduring the upgrade because pg_resetwal will remove WALs that are required bythe slots. Due to ths restriction, the timing of restoring replication slots isdifferent from other objects.~/ths/this/======doc/src/sgml/ref/pgupgrade.sgml 3.+ <para>+ Before you start upgrading the publisher cluster, ensure that the+ subscription is temporarily disabled, by executing+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>.+ After the upgrade is complete, then re-enable the subscription. Note that+ if the new cluser uses different port number from old one,+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... CONNECTION</command></link>+ command must be also executed on subscriber.+ </para>3a.BEFOREAfter the upgrade is complete, then re-enable the subscription.SUGGESTIONRe-enable the subscription after the upgrade.~3b./cluser/cluster/~3c.Note that+ if the new cluser uses different port number from old one,+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... CONNECTION</command></link>+ command must be also executed on subscriber.SUGGESTIONNote that if the new cluster uses a different port number ALTER SUBSCRIPTION ... 
CONNECTION command must be also executed on the subscriber.~~~4.+ <listitem>+ <para>+ <structfield>confirmed_flush_lsn</structfield> (see <xref linkend=\"view-pg-replication-slots\"/>)+ of all slots on old cluster must be same as latest checkpoint location.+ </para>+ </listitem>4a./on old cluster/on the old cluster/~4b./as latest/as the latest/~~5.+ <listitem>+ <para>+ The output plugins referenced by the slots on the old cluster must be+ installed on the new PostgreSQL executable directory.+ </para>+ </listitem>/installed on/installed in/ ??~~6.+ <listitem>+ <para>+ The new cluster must have+ <link linkend=\"guc-max-replication-slots\"><varname>max_replication_slots</varname></link>+ configured to value larger than the existing slots on the old cluster.+ </para>+ </listitem>BEFORE...to value larger than the existing slots on the old cluster.SUGGESTION...to a value greater than or equal to the number of slots present on the old cluster.======src/bin/pg_upgrade/check.c 7. GENERAL - check_for_logical_replication_slotsAFAICT this function is called *only* for the new_cluster, yet there is no Assert and no checking inside this function to ensure that is the case or not. It seems strange that the *cluster is passed as an argument but then the whole function body and messages assume it can only be a new cluster anyway.IMO it would be better to rename this function to something like check_new_cluster_logical_replication_slots() and DO NOT pass any parameter but just use the global new_cluster within the function body.~~~8. check_for_logical_replication_slots+\t/* logical replication slots can be migrated since PG17. */+\tif (GET_MAJOR_VERSION(cluster->major_version) < 1700)+\t\treturn;Start comment with uppercase for consistency.~~~9. check_for_logical_replication_slots+\tres = executeQueryOrDie(conn, \"SELECT slot_name \"+\t\t\t\t\t\t\t\t \"FROM pg_catalog.pg_replication_slots \"+\t\t\t\t\t\t\t\t \"WHERE slot_type = 'logical' AND \"+\t\t\t\t\t\t\t\t \"temporary IS FALSE;\");++\tif (PQntuples(res))+\t\tpg_fatal(\"New cluster must not have logical replication slot, but found \\\"%s\\\"\",+\t\t\t\t PQgetvalue(res, 0, 0));/replication slot/replication slots/~10. check_for_logical_replication_slots+\t/*+\t * Do additional checks when the logical replication slots have on the old+\t * cluster.+\t */+\tif (nslots)SUGGESTIONDo additional checks when there are logical replication slots on the old cluster.~~~11.+\t\tif (nslots > max_replication_slots)+\t\t\tpg_fatal(\"max_replication_slots must be greater than or equal to existing logical \"+\t\t\t\t\t \"replication slots on old cluster.\");11a.SUGGESTIONmax_replication_slots (%d) must be greater than or equal to the number of logical replication slots (%d) on the old cluster.~11b.I think it would be helpful for the current values to be displayed in the fatal message so the user will know more about what value to set. Notice that my above suggestion has some substitution markers. 
======src/bin/pg_upgrade/info.c12.+static void+print_slot_infos(LogicalSlotInfoArr *slot_arr)+{+\tint\t\t\tslotnum;++\tfor (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)+\t{+\t\tLogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];+\t\tpg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",+\t\t\t slot_info->slotname,+\t\t\t slot_info->plugin,+\t\t\t slot_info->two_phase);+\t}+}Better to have a blank line after the 'slot_info' declaration.======.../pg_upgrade/t/003_logical_replication_slots.pl13.+# ------------------------------+# TEST: Confirm pg_upgrade fails when new cluster wal_level is not 'logical'++# Create a slot on old cluster+$old_publisher->start;+$old_publisher->safe_psql('postgres',+\t\"SELECT pg_create_logical_replication_slot('test_slot1', 'test_decoding', false, true);\"+);+$old_publisher->stop;13a.It would be nicer if all the test parts have identical formats. So here it should also say# Preparations for the subsequent test:# 1. Create a slot on the old cluster~13b.Notice the colon (:) at the end of that comment \"Preparations for the subsequent test:\". All the other preparation comments in this file should also have a colon.~14.+# Cause a failure at the start of pg_upgrade because wal_level is replicaSUGGESTION# pg_upgrade will fail because the new cluster wal_level is 'replica'~~~15.+# 1. Create an unnecessary slot on the old cluster(but it is not unnecessary -- it is necessary for this test!)SUGGESTION+# 1. Create a second slot on the old cluster~~~16.+# Cause a failure at the start of pg_upgrade because the new cluster has+# insufficient max_replication_slotsSUGGESTION# pg_upgrade will fail because the new cluster has insufficient max_replication_slots~~~17.+# Preparations for the subsequent test.+# 1. Remove an unnecessary slotSUGGESTION+# 1. Remove the slot 'test_slot2', leaving only 1 slot remaining on the old cluster, so the new cluster config max_replication_slots=1 will now be enough.~~~18.+$new_publisher->start;+my $result = $new_publisher->safe_psql('postgres',+\t\"SELECT slot_name, two_phase FROM pg_replication_slots\");+is($result, qq(test_slot1|t), 'check the slot exists on new cluster');+$new_publisher->stop;++done_testing();Maybe should be some added comments like:# Check that the slot 'test_slot1' has migrated to the new cluster.------Kind Regards,Peter Smith.Fujitsu Australia",
"msg_date": "Tue, 22 Aug 2023 16:01:41 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
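For illustration, a minimal sketch of how the new-cluster slot check could look once review comments #9 and #11 above are applied. Helper names (connectToServer, executeQueryOrDie, count_logical_slots, pg_fatal) follow the quoted patch; reading max_replication_slots via SHOW and the exact message wording are assumptions for this sketch, not the posted code.

static void
check_new_cluster_logical_replication_slots(void)
{
	PGconn	   *conn = connectToServer(&new_cluster, "template1");
	PGresult   *res;
	int			nslots_on_old;
	int			max_replication_slots;

	/* New cluster must not have pre-existing logical replication slots. */
	res = executeQueryOrDie(conn,
							"SELECT slot_name "
							"FROM pg_catalog.pg_replication_slots "
							"WHERE slot_type = 'logical' AND "
							"temporary IS FALSE;");
	if (PQntuples(res))
		pg_fatal("New cluster must not have logical replication slots, but found \"%s\"",
				 PQgetvalue(res, 0, 0));
	PQclear(res);

	/* Compare the old-cluster slot count against max_replication_slots. */
	nslots_on_old = count_logical_slots(&old_cluster);

	res = executeQueryOrDie(conn, "SHOW max_replication_slots;");
	max_replication_slots = atoi(PQgetvalue(res, 0, 0));
	PQclear(res);

	if (nslots_on_old > max_replication_slots)
		pg_fatal("max_replication_slots (%d) must be greater than or equal to the number of "
				 "logical replication slots (%d) on the old cluster",
				 max_replication_slots, nslots_on_old);

	PQfinish(conn);
}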
{
"msg_contents": "On Mon, Aug 21, 2023 at 6:32 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 2.\n> > + /*\n> > + * Checking for logical slots must be done before\n> > + * check_new_cluster_is_empty() because the slot_arr attribute of the\n> > + * new_cluster will be checked in that function.\n> > + */\n> > + if (count_logical_slots(&old_cluster))\n> > + {\n> > + get_logical_slot_infos(&new_cluster, false);\n> > + check_for_logical_replication_slots(&new_cluster);\n> > + }\n> > +\n> > check_new_cluster_is_empty();\n> >\n> > Can't we simplify this checking by simply querying\n> > pg_replication_slots for any usable slot something similar to what we\n> > are doing in check_for_prepared_transactions()? We can add this check\n> > in the function check_for_logical_replication_slots().\n>\n> Some checks were included to check_for_logical_replication_slots(), and\n> get_logical_slot_infos() for new_cluster was removed as you said.\n>\n\n+ res = executeQueryOrDie(conn, \"SELECT slot_name \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE slot_type = 'logical' AND \"\n+ \"temporary IS FALSE;\");\n+\n+ if (PQntuples(res))\n+ pg_fatal(\"New cluster must not have logical replication slot, but\nfound \\\"%s\\\"\",\n+ PQgetvalue(res, 0, 0));\n+\n+ PQclear(res);\n+\n+ nslots = count_logical_slots(&old_cluster);\n+\n+ /*\n+ * Do additional checks when the logical replication slots have on the old\n+ * cluster.\n+ */\n+ if (nslots)\n\nShouldn't these checks be reversed? I mean it would be better to test\nthe presence of slots on the new cluster if there is any slot present\non the old cluster.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 12:18:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san.\n\nI already posted a review for v22-0003 earlier today, but v23-0003 was\nalready posted so those are not yet addressed.\n\nHere are a few more review comments I noticed when looking at the latest\nv23-0003.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1.\n+#include \"access/xlogdefs.h\"\n #include \"catalog/pg_authid_d.h\"\n\nWas this #include needed here? I noticed you've already included the same\nin the \"pg_upgrade.h\".\n\n~~~\n\n2. check_for_lost_slots\n\n+ /* Check there are no logical replication slots with a 'lost' state. */\n+ res = executeQueryOrDie(conn,\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE wal_status = 'lost' AND \"\n+ \"temporary IS FALSE;\");\n\nI can't quite describe my doubts about this, but something seems a bit\nstrange. Didn't we already iterate every single slot in all DBs in the\nearlier function get_logical_slot_infos_per_db()? There we were only\nlooking for wal_status <> 'lost', but we could have got *every* wal_status\nand also detected these 'lost' ones at the same time up-front, instead of\nhaving this extra function with more SQL to do pretty much the same SELECT.\n\nPerhaps coding the current way there is a clear separation of the fetching\ncode and the checking code, and that might be the best approach, but it\nsomehow seems a shame/waste to be executing almost the same slots data with\nthe same SQL 2x, so I wondered if there is a better way to arrange this.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n3. get_logical_slot_infos\n\n+\n+ /* Do additional checks if slots are found */\n+ if (slot_count)\n+ {\n+ check_for_lost_slots(cluster);\n+\n+ if (!live_check)\n+ check_for_confirmed_flush_lsn(cluster);\n+ }\n\nAren't these checks only intended for checking the 'old_cluster'? But\nAFAICT they are not guarded here so they will be executed by both sides.\nPreviously (in my review of v22-0003) I suggested these calls maybe\nbelonged in the calling function check_and_dump_old_cluster(). I think that.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nHi Kuroda-san.I already posted a review for v22-0003 earlier today, but v23-0003 was already posted so those are not yet addressed.Here are a few more review comments I noticed when looking at the latest v23-0003.======src/bin/pg_upgrade/check.c1.+#include \"access/xlogdefs.h\" #include \"catalog/pg_authid_d.h\" Was this #include needed here? I noticed you've already included the same in the \"pg_upgrade.h\".~~~2. check_for_lost_slots+\t/* Check there are no logical replication slots with a 'lost' state. */+\tres = executeQueryOrDie(conn,+\t\t\t\t\t\t\t\"SELECT slot_name FROM pg_catalog.pg_replication_slots \"+\t\t\t\t\t\t\t\"WHERE wal_status = 'lost' AND \"+\t\t\t\t\t\t\t\"temporary IS FALSE;\");I can't quite describe my doubts about this, but something seems a bit strange. Didn't we already iterate every single slot in all DBs in the earlier function get_logical_slot_infos_per_db()? There we were only looking for wal_status <> 'lost', but we could have got *every* wal_status and also detected these 'lost' ones at the same time up-front, instead of having this extra function with more SQL to do pretty much the same SELECT.Perhaps coding the current way there is a clear separation of the fetching code and the checking code, and that might be the best approach, but it somehow seems a shame/waste to be executing almost the same slots data with the same SQL 2x, so I wondered if there is a better way to arrange this. ======src/bin/pg_upgrade/info.c3. 
get_logical_slot_infos++\t/* Do additional checks if slots are found */+\tif (slot_count)+\t{+\t\tcheck_for_lost_slots(cluster);++\t\tif (!live_check)+\t\t\tcheck_for_confirmed_flush_lsn(cluster);+\t}Aren't these checks only intended for checking the 'old_cluster'? But AFAICT they are not guarded here so they will be executed by both sides. Previously (in my review of v22-0003) I suggested these calls maybe belonged in the calling function check_and_dump_old_cluster(). I think that.------Kind Regards,Peter Smith.Fujitsu Australia",
"msg_date": "Tue, 22 Aug 2023 18:52:45 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
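As a rough illustration of comment #2 above, the per-database fetch could pull wal_status together with the other slot attributes so that 'lost' slots are flagged in the same pass, instead of issuing a second, nearly identical query later. The column names come from pg_replication_slots; the surrounding loop and variable names are a sketch only, not the posted patch.

	res = executeQueryOrDie(conn,
							"SELECT slot_name, plugin, two_phase, wal_status "
							"FROM pg_catalog.pg_replication_slots "
							"WHERE slot_type = 'logical' AND temporary IS FALSE;");

	for (slotnum = 0; slotnum < PQntuples(res); slotnum++)
	{
		/* Flag unusable slots while building the slot array. */
		if (strcmp(PQgetvalue(res, slotnum, PQfnumber(res, "wal_status")), "lost") == 0)
			pg_log(PG_WARNING,
				   "logical replication slot \"%s\" is in 'lost' state",
				   PQgetvalue(res, slotnum, PQfnumber(res, "slot_name")));

		/* ... otherwise copy slot_name, plugin and two_phase into slot_arr ... */
	}
	PQclear(res);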
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for giving comments! New version will be available\r\nin the upcoming post.\r\n\r\n>\r\n1. check_for_lost_slots\r\n\r\n+ /* logical slots can be migrated since PG17. */\r\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n+ return;\r\n\r\n1a\r\nMaybe the comment should start uppercase for consistency with others.\r\n>\r\n\r\nSeems right, but I revisit check_and_dump_old_cluster() again and found that\r\nsome version-specific checks are done outside the checking function.\r\nSo I started to follow the style and then the part is moved to\r\ncheck_and_dump_old_cluster(). Also, version checking for new cluster is also\r\nmoved to check_new_cluster(). Is it OK for you?\r\n\r\n>\r\n1b.\r\nIMO if you check < 1700 instead of <= 1600 it will be a better match with the comment.\r\n>\r\n\r\nPer suggestion from Amit, I used < 1700. Some other changes in 0002 were reverted.\r\n\r\n>\r\n2. check_for_lost_slots\r\n+ for (i = 0; i < ntups; i++)\r\n+ {\r\n+ pg_log(PG_WARNING,\r\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" is in 'lost' state.\",\r\n+ PQgetvalue(res, i, i_slotname));\r\n+ }\r\n+\r\n+\r\n\r\nThe braces {} are not needed anymore\r\n>\r\n\r\nFixed. \r\n\r\n>\r\n3. check_for_confirmed_flush_lsn\r\n\r\n+ /* logical slots can be migrated since PG17. */\r\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n+ return;\r\n\r\n3a.\r\nMaybe the comment should start uppercase for consistency with others.\r\n>\r\n\r\nPer reply for comment 1, the part was no longer needed.\r\n\r\n>\r\n3b.\r\nIMO if you check < 1700 instead of <= 1600 it will be a better match with the comment.\r\n>\r\n\r\nPer suggestion from Amit, I used < 1700.\r\n\r\n>\r\n4. check_for_confirmed_flush_lsn\r\n+ for (i = 0; i < ntups; i++)\r\n+ {\r\n+ pg_log(PG_WARNING,\r\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" has not consumed WALs yet\",\r\n+ PQgetvalue(res, i, i_slotname));\r\n+ }\r\n+\r\n\r\nThe braces {} are not needed anymore\r\n>\r\n\r\nFixed.\r\n\r\n>\r\n5. get_control_data\r\n+ /*\r\n+ * Gather latest checkpoint location if the cluster is newer or\r\n+ * equal to 17. This is used for upgrading logical replication\r\n+ * slots.\r\n+ */\r\n+ if (GET_MAJOR_VERSION(cluster->major_version) >= 17)\r\n\r\n5a.\r\n/newer or equal to 17/PG17 or later/\r\n>\r\n\r\nFixed.\r\n\r\n>\r\n5b.\r\n>= 17 should be >= 1700\r\n>\r\n\r\nPer suggestion from Amit, I used < 1700.\r\n\r\n>\r\n6. get_control_data\r\n+ {\r\n+ char *slash = NULL;\r\n+ uint64 upper_lsn, lower_lsn;\r\n+\r\n+ p = strchr(p, ':');\r\n+\r\n+ if (p == NULL || strlen(p) <= 1)\r\n+ pg_fatal(\"%d: controldata retrieval problem\", __LINE__);\r\n+\r\n+ p++; /* remove ':' char */\r\n+\r\n+ p = strpbrk(p, \"01234567890ABCDEF\");\r\n+\r\n+ /*\r\n+ * Upper and lower part of LSN must be read separately\r\n+ * because it is reported as %X/%X format.\r\n+ */\r\n+ upper_lsn = strtoul(p, &slash, 16);\r\n+ lower_lsn = strtoul(++slash, NULL, 16);\r\n+\r\n+ /* And combine them */\r\n+ cluster->controldata.chkpnt_latest =\r\n+ (upper_lsn << 32) | lower_lsn;\r\n+ }\r\n\r\nShould 'upper_lsn' and 'lower_lsn' be declared as uint32? That seems a better mirror for LSN_FORMAT_ARGS.\r\n>\r\n\r\nChanged the definition to uint32, and a cast was added.\r\n\r\n>\r\n7. get_logical_slot_infos\r\n+\r\n+ /*\r\n+ * Do additional checks if slots are found on the old node. 
If something is\r\n+ * found on the new node, a subsequent function\r\n+ * check_new_cluster_is_empty() would report the name of slots and raise a\r\n+ * fatal error.\r\n+ */\r\n+ if (cluster == &old_cluster && slot_count)\r\n+ {\r\n+ check_for_lost_slots(cluster);\r\n+\r\n+ if (!live_check)\r\n+ check_for_confirmed_flush_lsn(cluster);\r\n+ }\r\n\r\nIt somehow doesn't feel right for these extra checks to be jammed into this function, just because you conveniently have the slot_count available.\r\n\r\nOn the NEW cluster side, there was extra checking in the check_new_cluster() function.\r\n\r\nFor consistency, I think this OLD cluster checking should be done in the check_and_dump_old_cluster() function -- see the \"Check for various failure cases\" comment -- IMO this new fragment belongs there with the other checks.\r\n>\r\n\r\nAll the checks were moved to check_and_dump_old_cluster(), and adds a check for its major version.\r\n\r\n>\r\n8.\r\n bool date_is_int;\r\n bool float8_pass_by_value;\r\n uint32 data_checksum_version;\r\n+\r\n+ XLogRecPtr chkpnt_latest;\r\n } ControlData;\r\n\r\nI don't think the new field is particularly different from all the others that it needs a blank line separator.\r\n >\r\n\r\nI removed the blank. Actually I wondered where the attribute should be, but kept at last.\r\n\r\n>\r\n9.\r\n # Initialize old node\r\n my $old_publisher = PostgreSQL::Test::Cluster->new('old_publisher');\r\n $old_publisher->init(allows_streaming => 'logical');\r\n-$old_publisher->start;\r\n \r\n # Initialize new node\r\n my $new_publisher = PostgreSQL::Test::Cluster->new('new_publisher');\r\n $new_publisher->init(allows_streaming => 'replica');\r\n \r\n-my $bindir = $new_publisher->config_data('--bindir');\r\n+# Initialize subscriber node\r\n+my $subscriber = PostgreSQL::Test::Cluster->new('subscriber');\r\n+$subscriber->init(allows_streaming => 'logical');\r\n \r\n-$old_publisher->stop;\r\n+my $bindir = $new_publisher->config_data('--bindir');\r\n\r\n~\r\n\r\nAre those removal of the old_publisher start/stop changes that actually should be done in the 0002 patch?\r\n>\r\n\r\nYes, It should be removed from 0002.\r\n\r\n>\r\n10.\r\n $old_publisher->safe_psql(\r\n 'postgres', qq[\r\n SELECT pg_create_logical_replication_slot('test_slot2', 'test_decoding', false, true);\r\n+ SELECT count(*) FROM pg_logical_slot_get_changes('test_slot1', NULL, NULL);\r\n ]);\r\n \r\n~\r\n\r\nWhat is the purpose of the added SELECT? It doesn't seem covered by the comment.\r\n>\r\n\r\nThe SELECT statement is needed to trigger the failure caused by the insufficient\r\nmax_replication_slots. Checking on new cluster is started after old servers are\r\nverified, so if the step is omitted, another error is reported:\r\n\r\n```\r\nChecking confirmed_flush_lsn for logical replication slots \r\nWARNING: logical replication slot \"test_slot1\" has not consumed WALs yet\r\n\r\nOne or more logical replication slots still have unconsumed WAL records.\r\n```\r\n\r\nI added a comment about it.\r\n\r\n>\r\n11.\r\n# Remove an unnecessary slot and generate WALs. 
These records would not be\r\n# consumed before doing pg_upgrade, so that the upcoming test would fail.\r\n$old_publisher->start;\r\n$old_publisher->safe_psql(\r\n'postgres', qq[\r\nSELECT pg_drop_replication_slot('test_slot2');\r\nCREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\r\n]);\r\n$old_publisher->stop;\r\n\r\nMinor rewording of comment sentence.\r\n\r\nSUGGESTION\r\nBecause these WAL records do not get consumed it will cause the upcoming pg_upgrade test to fail.\r\n>\r\n\r\nAdded.\r\n\r\n\r\n>\r\n12.\r\n# Cause a failure at the start of pg_upgrade because the slot still have\r\n# unconsumed WAL records\r\n\r\n~\r\n\r\n/still have/still has/\r\n>\r\n\r\nFixed.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 23 Aug 2023 02:43:32 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
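The controldata change discussed in the reply above boils down to the following fragment of get_control_data(). It mirrors the quoted patch: the latest checkpoint location is printed by pg_controldata as %X/%X, so the two halves are parsed separately and recombined, with the uint32 variables and the XLogRecPtr cast reflecting the revision described here. A fragment for illustration, not the complete function.

	else if ((p = strstr(bufin, "Latest checkpoint location:")) != NULL)
	{
		char	   *slash = NULL;
		uint32		upper_lsn;
		uint32		lower_lsn;

		p = strchr(p, ':');
		if (p == NULL || strlen(p) <= 1)
			pg_fatal("%d: controldata retrieval problem", __LINE__);

		p++;					/* remove ':' char */
		p = strpbrk(p, "01234567890ABCDEF");

		/* The %X/%X value holds two 32-bit halves; read and combine them. */
		upper_lsn = strtoul(p, &slash, 16);
		lower_lsn = strtoul(++slash, NULL, 16);
		cluster->controldata.chkpnt_latest =
			((XLogRecPtr) upper_lsn << 32) | lower_lsn;
	}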
{
"msg_contents": "Dear Peter,\r\n\r\n> Here are some review comments for v23-0001\r\n\r\nThanks for the comment! But I did not update 0001 patch in this thread.\r\nIt will be managed in the forked one...\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 23 Aug 2023 02:44:44 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for the comment! Next version will be available in upcoming post.\r\n\r\n> > > + /* logical replication slots can be migrated since PG17. */\r\n> > > + if (GET_MAJOR_VERSION(new_cluster->major_version) <= 1600)\r\n> > > + return;\r\n> > >\r\n> > > IMO the code matches the comment better if you say < 1700 instead of <=\r\n> 1600.\r\n> >\r\n> > Changed.\r\n> >\r\n> \r\n> I think it is better to be consistent with the existing code. There\r\n> are a few other checks in pg_upgrade.c that uses <=, so it is better\r\n> to use it in the same way here.\r\n\r\nOK, reverted.\r\n\r\n> Another minor comment:\r\n> Note that\r\n> + if the new cluser uses different port number from old one,\r\n> + <link linkend=\"sql-altersubscription\"><command>ALTER\r\n> SUBSCRIPTION ... CONNECTION</command></link>\r\n> + command must be also executed on subscriber.\r\n> \r\n> I think this is true in general as well and not specific to\r\n> pg_upgrade. So, we can avoid adding anything about connection change\r\n> here.\r\n\r\nRemoved.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 23 Aug 2023 02:45:40 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\n\nThanks for giving comment. New version will be available in the upcoming post.\n\n> + res = executeQueryOrDie(conn, \"SELECT slot_name \"\n> + \"FROM pg_catalog.pg_replication_slots \"\n> + \"WHERE slot_type = 'logical' AND \"\n> + \"temporary IS FALSE;\");\n> +\n> + if (PQntuples(res))\n> + pg_fatal(\"New cluster must not have logical replication slot, but\n> found \\\"%s\\\"\",\n> + PQgetvalue(res, 0, 0));\n> +\n> + PQclear(res);\n> +\n> + nslots = count_logical_slots(&old_cluster);\n> +\n> + /*\n> + * Do additional checks when the logical replication slots have on the old\n> + * cluster.\n> + */\n> + if (nslots)\n> \n> Shouldn't these checks be reversed? I mean it would be better to test\n> the presence of slots on the new cluster if there is any slot present\n> on the old cluster.\n\nHmm, I think the later part is meaningful only when the old cluster has logical\nslots. To sum up, any checking should be done when the\ncount_logical_slots(&old_cluster) > 0, right? Fixed like that.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 23 Aug 2023 02:58:23 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
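A minimal sketch of the ordering agreed in the reply above, placed at the top of the new-cluster check (names as in the earlier sketch; this is an illustration, not the posted patch):

	nslots_on_old = count_logical_slots(&old_cluster);

	/* Quick return if the old cluster has no logical slots to migrate. */
	if (nslots_on_old == 0)
		return;

	/*
	 * Only now query the new cluster: verify that it has no pre-existing
	 * logical slots and that max_replication_slots is large enough.
	 */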
{
"msg_contents": "Dear Peter,\n\nThanks for giving comments! PSA the new version.\n\n>\n======\n1. GENERAL\n\nPlease try to run a spell/grammar check on all the text like commit message and docs changes before posting (e.g. cut/paste the rendered text into some tool like MSWord or Grammarly or ChatGPT or whatever tool you like and cross-check). There are lots of small typos etc but one up-front check could avoid long cycles of reviewing/reporting/fixing/re-posting/confirming...\n>\n\nI checked all of sentences for Grammarly. Sorry for poor English.\n\n>\n======\nCommit message\n\n2.\nNote that slot restoration must be done after the final pg_resetwal command\nduring the upgrade because pg_resetwal will remove WALs that are required by\nthe slots. Due to ths restriction, the timing of restoring replication slots is\ndifferent from other objects.\n\n~\n\n/ths/this/\n>\n\nFixed.\n\n>\ndoc/src/sgml/ref/pgupgrade.sgml \n\n3.\n+ <para>\n+ Before you start upgrading the publisher cluster, ensure that the\n+ subscription is temporarily disabled, by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>.\n+ After the upgrade is complete, then re-enable the subscription. Note that\n+ if the new cluser uses different port number from old one,\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... CONNECTION</command></link>\n+ command must be also executed on subscriber.\n+ </para>\n\n3a.\nBEFORE\nAfter the upgrade is complete, then re-enable the subscription.\n\nSUGGESTION\nRe-enable the subscription after the upgrade.\n>\n\nFixed.\n\n\n>\n3b.\n/cluser/cluster/\n\n~\n\n3c.\nNote that\n+ if the new cluser uses different port number from old one,\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION ... CONNECTION</command></link>\n+ command must be also executed on subscriber.\n\nSUGGESTION\nNote that if the new cluster uses a different port number ALTER SUBSCRIPTION ... CONNECTION command must be also executed on the subscriber.\n>\n\nThe part was removed.\n\n>\n4.\n+ <listitem>\n+ <para>\n+ <structfield>confirmed_flush_lsn</structfield> (see <xref linkend=\"view-pg-replication-slots\"/>)\n+ of all slots on old cluster must be same as latest checkpoint location.\n+ </para>\n+ </listitem>\n\n4a.\n/on old cluster/on the old cluster/\n>\n\nFixed.\n\n>\n4b.\n/as latest/as the latest/\n>\nFixed.\n\n>\n5.\n+ <listitem>\n+ <para>\n+ The output plugins referenced by the slots on the old cluster must be\n+ installed on the new PostgreSQL executable directory.\n+ </para>\n+ </listitem>\n\n/installed on/installed in/ ??\n>\n\n\"installed in\" is better, fixed.\n\n>\n6.\n+ <listitem>\n+ <para>\n+ The new cluster must have\n+ <link linkend=\"guc-max-replication-slots\"><varname>max_replication_slots</varname></link>\n+ configured to value larger than the existing slots on the old cluster.\n+ </para>\n+ </listitem>\n\nBEFORE\n...to value larger than the existing slots on the old cluster.\n\nSUGGESTION\n...to a value greater than or equal to the number of slots present on the old cluster.\n>\n\nFixed.\n\n>\nsrc/bin/pg_upgrade/check.c \n\n7. GENERAL - check_for_logical_replication_slots\n\nAFAICT this function is called *only* for the new_cluster, yet there is no Assert and no checking inside this function to ensure that is the case or not. 
It seems strange that the *cluster is passed as an argument but then the whole function body and messages assume it can only be a new cluster anyway.\n\nIMO it would be better to rename this function to something like check_new_cluster_logical_replication_slots() and DO NOT pass any parameter but just use the global new_cluster within the function body.\n>\n\nHmm, I followed other functions, e.g., check_for_composite_data_type_usage() is\ncalled only for old one but it has an argument *cluster. What is the difference\nbetween them? Moreover, how about check_for_lost_slots() and\ncheck_for_confirmed_flush_lsn()? Fixed for the moment.\n\n>\n8. check_for_logical_replication_slots\n\n+ /* logical replication slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(cluster->major_version) < 1700)\n+ return;\n\nStart comment with uppercase for consistency.\n>\n\nThe part was removed.\n\n>\n9. check_for_logical_replication_slots\n\n+ res = executeQueryOrDie(conn, \"SELECT slot_name \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE slot_type = 'logical' AND \"\n+ \"temporary IS FALSE;\");\n+\n+ if (PQntuples(res))\n+ pg_fatal(\"New cluster must not have logical replication slot, but found \\\"%s\\\"\",\n+ PQgetvalue(res, 0, 0));\n\n/replication slot/replication slots/\n>\n\nFixed.\n\n>\n10. check_for_logical_replication_slots\n\n+ /*\n+ * Do additional checks when the logical replication slots have on the old\n+ * cluster.\n+ */\n+ if (nslots)\n\nSUGGESTION\nDo additional checks when there are logical replication slots on the old cluster.\n>\n\nPer suggestion from Amit, the part was removed.\n\n>\n11.\n+ if (nslots > max_replication_slots)\n+ pg_fatal(\"max_replication_slots must be greater than or equal to existing logical \"\n+ \"replication slots on old cluster.\");\n\n11a.\nSUGGESTION\nmax_replication_slots (%d) must be greater than or equal to the number of logical replication slots (%d) on the old cluster.\n\n11b.\nI think it would be helpful for the current values to be displayed in the fatal message so the user will know more about what value to set. Notice that my above suggestion has some substitution markers. \n>\n\nChanged.\n\n>\nsrc/bin/pg_upgrade/info.c\n\n12.\n+static void\n+print_slot_infos(LogicalSlotInfoArr *slot_arr)\n+{\n+ int slotnum;\n+\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ {\n+ LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\n+ pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\n+ slot_info->slotname,\n+ slot_info->plugin,\n+ slot_info->two_phase);\n+ }\n+}\n\nBetter to have a blank line after the 'slot_info' declaration.\n>\n\nAdded.\n\n>\n.../pg_upgrade/t/http://003_logical_replication_slots.pl\n\n13.\n+# ------------------------------\n+# TEST: Confirm pg_upgrade fails when new cluster wal_level is not 'logical'\n+\n+# Create a slot on old cluster\n+$old_publisher->start;\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT pg_create_logical_replication_slot('test_slot1', 'test_decoding', false, true);\"\n+);\n+$old_publisher->stop;\n\n13a.\nIt would be nicer if all the test parts have identical formats. So here it should also say\n\n# Preparations for the subsequent test:\n# 1. Create a slot on the old cluster\n>\n\nI did not use because there was only one step, but followed the style.\n\n>\n13b.\nNotice the colon (:) at the end of that comment \"Preparations for the subsequent test:\". 
All the other preparation comments in this file should also have a colon.\n>\n\nAdded.\n\n>\n14.\n+# Cause a failure at the start of pg_upgrade because wal_level is replica\n\nSUGGESTION\n# pg_upgrade will fail because the new cluster wal_level is 'replica'\n>\n\nFixed.\n\n>\n15.\n+# 1. Create an unnecessary slot on the old cluster\n\n(but it is not unnecessary -- it is necessary for this test!)\n\nSUGGESTION\n+# 1. Create a second slot on the old cluster\n>\n\nFixed.\n\n>\n16.\n+# Cause a failure at the start of pg_upgrade because the new cluster has\n+# insufficient max_replication_slots\n\nSUGGESTION\n# pg_upgrade will fail because the new cluster has insufficient max_replication_slots\n>\n\nFixed.\n\n>\n17.\n+# Preparations for the subsequent test.\n+# 1. Remove an unnecessary slot\n\nSUGGESTION\n+# 1. Remove the slot 'test_slot2', leaving only 1 slot remaining on the old cluster, so the new cluster config max_replication_slots=1 will now be enough.\n>\n\nFixed.\n\n>\n18.\n+$new_publisher->start;\n+my $result = $new_publisher->safe_psql('postgres',\n+ \"SELECT slot_name, two_phase FROM pg_replication_slots\");\n+is($result, qq(test_slot1|t), 'check the slot exists on new cluster');\n+$new_publisher->stop;\n+\n+done_testing();\n\nMaybe should be some added comments like:\n# Check that the slot 'test_slot1' has migrated to the new cluster.\n>\n\nAdded.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 23 Aug 2023 02:59:00 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for giving comments! New version can be available in [1].\r\n\r\n>\r\n1.\r\n+#include \"access/xlogdefs.h\"\r\n #include \"catalog/pg_authid_d.h\"\r\n \r\nWas this #include needed here? I noticed you've already included the same in the \"pg_upgrade.h\".\r\n>\r\n\r\nIt was needed because the macro LSN_FORMAT_ARGS() was used in the file.\r\nI preferred all the needed file are included even if it has already been done in header, so \r\n#include was written here.\r\n\r\n>\r\n2. check_for_lost_slots\r\n\r\n+ /* Check there are no logical replication slots with a 'lost' state. */\r\n+ res = executeQueryOrDie(conn,\r\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\r\n+ \"WHERE wal_status = 'lost' AND \"\r\n+ \"temporary IS FALSE;\");\r\n\r\nI can't quite describe my doubts about this, but something seems a bit strange. Didn't we already iterate every single slot in all DBs in the earlier function get_logical_slot_infos_per_db()? There we were only looking for wal_status <> 'lost', but we could have got *every* wal_status and also detected these 'lost' ones at the same time up-front, instead of having this extra function with more SQL to do pretty much the same SELECT.\r\n\r\nPerhaps coding the current way there is a clear separation of the fetching code and the checking code, and that might be the best approach, but it somehow seems a shame/waste to be executing almost the same slots data with the same SQL 2x, so I wondered if there is a better way to arrange this.\r\n >\r\n\r\nHmm, but you did not like to do additional checks in the get_logical_slot_infos(),\r\nright? They cannot go together. In case of check_new_cluster(), information for\r\nrelations is extracted in get_db_and_rel_infos() and then checked whether it is\r\nempty or not in check_new_cluster_is_empty(). The phase is also separated.\r\n\r\n>\r\nsrc/bin/pg_upgrade/info.c\r\n\r\n3. get_logical_slot_infos\r\n\r\n+\r\n+ /* Do additional checks if slots are found */\r\n+ if (slot_count)\r\n+ {\r\n+ check_for_lost_slots(cluster);\r\n+\r\n+ if (!live_check)\r\n+ check_for_confirmed_flush_lsn(cluster);\r\n+ }\r\n\r\nAren't these checks only intended for checking the 'old_cluster'? But AFAICT they are not guarded here so they will be executed by both sides. Previously (in my review of v22-0003) I suggested these calls maybe belonged in the calling function check_and_dump_old_cluster(). I think that.\r\n>\r\n\r\nMoved to check_and_dump_old_cluster().\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866DD3348B5224E0A1BFC3EF51CA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 23 Aug 2023 03:00:22 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Thanks for the updated patches.\n\nHere are some review comments for the patch v24-0002\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1.\n+ <listitem>\n+ <para>\n+ All slots on the old cluster must be usable, i.e., there are no\nslots\n+ whose <structfield>wal_status</structfield> is\n<literal>lost</literal> (see\n+ <xref linkend=\"view-pg-replication-slots\"/>).\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <structfield>confirmed_flush_lsn</structfield> (see <xref\nlinkend=\"view-pg-replication-slots\"/>)\n+ of all slots on the old cluster must be the same as the latest\n+ checkpoint location.\n+ </para>\n+ </listitem>\n\nIt might be more tidy to change the way those links (e.g. \"See section\n54.19\") are presented:\n\n1a.\nSUGGESTION\nAll slots on the old cluster must be usable, i.e., there are no slots whose\n<link\nlinkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>wal_status</structfield>\nis <literal>lost</literal>.\n\n~\n\n1b.\nSUGGESTION\n<link\nlinkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>confirmed_flush_lsn</structfield>\nof all slots on the old cluster must be the same as the latest checkpoint\nlocation.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n2.\n+ /* Logical replication slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(new_cluster.major_version) >= 1700)\n+ check_new_cluster_logical_replication_slots();\n+\n\nDoes it even make sense to check the new_cluster version? IIUC pg_upgrade\n*always* updates to the current PG version, which must be 1700 by\ndefinition, because this only is a PG17 patch, right?\n\nFor example, see check_cluster_versions() function where it does this check:\n\n/* Only current PG version is supported as a target */\nif (GET_MAJOR_VERSION(new_cluster.major_version) !=\nGET_MAJOR_VERSION(PG_VERSION_NUM))\npg_fatal(\"This utility can only upgrade to PostgreSQL version %s.\",\nPG_MAJORVERSION);\n\n======\nsrc/bin/pg_upgrade/function.c\n\n3.\nos_info.libraries = (LibraryInfo *) pg_malloc(totaltups *\nsizeof(LibraryInfo));\ntotaltups = 0;\n\nfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n{\nPGresult *res = ress[dbnum];\nint ntups;\nint rowno;\n\nntups = PQntuples(res);\nfor (rowno = 0; rowno < ntups; rowno++)\n{\nchar *lib = PQgetvalue(res, rowno, 0);\n\nos_info.libraries[totaltups].name = pg_strdup(lib);\nos_info.libraries[totaltups].dbnum = dbnum;\n\ntotaltups++;\n}\nPQclear(res);\n}\n\n~\n\nAlthough this was not introduced by your patch, I do not understand why the\n'totaltups' variable gets reset to zero and then re-incremented in these\nloops.\n\nIn other words, how is it possible for the end result of 'totaltups' to be\nany different from what was already calculated earlier in this function?\n\nIMO totaltups = 0; and totaltups++; is just redundant code.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n4. get_logical_slot_infos\n\n+/*\n+ * get_logical_slot_infos()\n+ *\n+ * Higher level routine to generate LogicalSlotInfoArr for all databases.\n+ */\n+void\n+get_logical_slot_infos(ClusterInfo *cluster)\n+{\n+ int dbnum;\n+\n+ /* Logical slots can be migrated since PG17. 
*/\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\n+ return;\n\nIt is no longer clear to me what is the purpose of these version checks.\n\nAs mentioned in comment #2 above, I don't think we need to check the\nnew_cluster >= 1700, because this patch is for PG17 by definition.\n\nOTOH, I also don't recognise the reason why there has to be a PG17\nrestriction on the 'old_cluster' version. Such a restriction seems to\ncripple the usefulness of this patch (eg. cannot even upgrade slots from\nPG16 to PG17), and there is no explanation given for it. If there is some\nvalid incompatibility reason why only PG17 old_cluster slots can be\nupgraded then it ought to be described in detail and probably also\nmentioned in the PG DOCS.\n\n~~~\n\n5. count_logical_slots\n\n+/*\n+ * count_logical_slots()\n+ *\n+ * Sum up and return the number of logical replication slots for all\ndatabases.\n+ */\n+int\n+count_logical_slots(ClusterInfo *cluster)\n+{\n+ int dbnum;\n+ int slot_count = 0;\n+\n+ /* Quick exit if the version is prior to PG17. */\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\n+ return 0;\n+\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\n+ slot_count += cluster->dbarr.dbs[dbnum].slot_arr.nslots;\n+\n+ return slot_count;\n+}\n\nSame as the previous comment #4. I had doubts about the intent/need for\nthis cluster version checking.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\nThanks for the updated patches.Here are some review comments for the patch v24-0002======doc/src/sgml/ref/pgupgrade.sgml 1.+ <listitem>+ <para>+ All slots on the old cluster must be usable, i.e., there are no slots+ whose <structfield>wal_status</structfield> is <literal>lost</literal> (see+ <xref linkend=\"view-pg-replication-slots\"/>).+ </para>+ </listitem>+ <listitem>+ <para>+ <structfield>confirmed_flush_lsn</structfield> (see <xref linkend=\"view-pg-replication-slots\"/>)+ of all slots on the old cluster must be the same as the latest+ checkpoint location.+ </para>+ </listitem>It might be more tidy to change the way those links (e.g. \"See section 54.19\") are presented:1a.SUGGESTIONAll slots on the old cluster must be usable, i.e., there are no slots whose <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>wal_status</structfield> is <literal>lost</literal>.~1b.SUGGESTION<link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>confirmed_flush_lsn</structfield> of all slots on the old cluster must be the same as the latest checkpoint location.======src/bin/pg_upgrade/check.c 2.+\t/* Logical replication slots can be migrated since PG17. */+\tif (GET_MAJOR_VERSION(new_cluster.major_version) >= 1700)+\t\tcheck_new_cluster_logical_replication_slots();+Does it even make sense to check the new_cluster version? 
IIUC pg_upgrade *always* updates to the current PG version, which must be 1700 by definition, because this only is a PG17 patch, right?For example, see check_cluster_versions() function where it does this check:\t/* Only current PG version is supported as a target */\tif (GET_MAJOR_VERSION(new_cluster.major_version) != GET_MAJOR_VERSION(PG_VERSION_NUM))\t\tpg_fatal(\"This utility can only upgrade to PostgreSQL version %s.\",\t\t\t\t PG_MAJORVERSION);======src/bin/pg_upgrade/function.c3.\tos_info.libraries = (LibraryInfo *) pg_malloc(totaltups * sizeof(LibraryInfo));\ttotaltups = 0;\tfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\t{\t\tPGresult *res = ress[dbnum];\t\tint\t\t\tntups;\t\tint\t\t\trowno;\t\tntups = PQntuples(res);\t\tfor (rowno = 0; rowno < ntups; rowno++)\t\t{\t\t\tchar\t *lib = PQgetvalue(res, rowno, 0);\t\t\tos_info.libraries[totaltups].name = pg_strdup(lib);\t\t\tos_info.libraries[totaltups].dbnum = dbnum;\t\t\ttotaltups++;\t\t}\t\tPQclear(res);\t} ~Although this was not introduced by your patch, I do not understand why the 'totaltups' variable gets reset to zero and then re-incremented in these loops. In other words, how is it possible for the end result of 'totaltups' to be any different from what was already calculated earlier in this function? IMO totaltups = 0; and totaltups++; is just redundant code.======src/bin/pg_upgrade/info.c4. get_logical_slot_infos+/*+ * get_logical_slot_infos()+ *+ * Higher level routine to generate LogicalSlotInfoArr for all databases.+ */+void+get_logical_slot_infos(ClusterInfo *cluster)+{+\tint\t\t\tdbnum;++\t/* Logical slots can be migrated since PG17. */+\tif (GET_MAJOR_VERSION(cluster->major_version) <= 1600)+\t\treturn;It is no longer clear to me what is the purpose of these version checks.As mentioned in comment #2 above, I don't think we need to check the new_cluster >= 1700, because this patch is for PG17 by definition.OTOH, I also don't recognise the reason why there has to be a PG17 restriction on the 'old_cluster' version. Such a restriction seems to cripple the usefulness of this patch (eg. cannot even upgrade slots from PG16 to PG17), and there is no explanation given for it. If there is some valid incompatibility reason why only PG17 old_cluster slots can be upgraded then it ought to be described in detail and probably also mentioned in the PG DOCS. ~~~5. count_logical_slots+/*+ * count_logical_slots()+ *+ * Sum up and return the number of logical replication slots for all databases.+ */+int+count_logical_slots(ClusterInfo *cluster)+{+\tint\t\t\tdbnum;+\tint\t\t\tslot_count = 0;++\t/* Quick exit if the version is prior to PG17. */+\tif (GET_MAJOR_VERSION(cluster->major_version) <= 1600)+\t\treturn 0;++\tfor (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)+\t\tslot_count += cluster->dbarr.dbs[dbnum].slot_arr.nslots;++\treturn slot_count;+}Same as the previous comment #4. I had doubts about the intent/need for this cluster version checking.------Kind Regards,Peter Smith.Fujitsu Australia",
"msg_date": "Thu, 24 Aug 2023 12:24:48 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
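For what it's worth regarding comment #3 above: in the quoted loop the counter is reused as the index into os_info.libraries, which is why it is reset and re-incremented. A sketch with a separately named index makes that role explicit (illustration only; n_libs is a hypothetical name, not from the posted code):

	os_info.libraries = (LibraryInfo *) pg_malloc(totaltups * sizeof(LibraryInfo));
	n_libs = 0;					/* next free entry in os_info.libraries */

	for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)
	{
		PGresult   *res = ress[dbnum];
		int			rowno;

		for (rowno = 0; rowno < PQntuples(res); rowno++)
		{
			os_info.libraries[n_libs].name = pg_strdup(PQgetvalue(res, rowno, 0));
			os_info.libraries[n_libs].dbnum = dbnum;
			n_libs++;
		}
		PQclear(res);
	}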
{
"msg_contents": "On Thu, Aug 24, 2023 at 7:55 AM Peter Smith <[email protected]> wrote:\n>\n> ======\n> src/bin/pg_upgrade/info.c\n>\n> 4. get_logical_slot_infos\n>\n> +/*\n> + * get_logical_slot_infos()\n> + *\n> + * Higher level routine to generate LogicalSlotInfoArr for all databases.\n> + */\n> +void\n> +get_logical_slot_infos(ClusterInfo *cluster)\n> +{\n> + int dbnum;\n> +\n> + /* Logical slots can be migrated since PG17. */\n> + if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\n> + return;\n>\n> It is no longer clear to me what is the purpose of these version checks.\n>\n> As mentioned in comment #2 above, I don't think we need to check the new_cluster >= 1700, because this patch is for PG17 by definition.\n>\n> OTOH, I also don't recognise the reason why there has to be a PG17 restriction on the 'old_cluster' version. Such a restriction seems to cripple the usefulness of this patch (eg. cannot even upgrade slots from PG16 to PG17), and there is no explanation given for it. If there is some valid incompatibility reason why only PG17 old_cluster slots can be upgraded then it ought to be described in detail and probably also mentioned in the PG DOCS.\n>\n\nOne of the main reasons is that slots prior to v17 won't persist\nconfirm_flush_lsn as discussed in the email thread [1] which means it\nwill always fail even if we allow to upgrade from versions prior to\nv17. Now, there is an argument that let's backpatch what's being\ndiscussed in [1] and then we will be able to upgrade slots from the\nprior version. Normally, we don't backatch new enhancements, so even\nif we want to do that in this case, a separate argument has to be made\nfor it. We have already discussed this point in this thread. We can\nprobably add a comment in the patch where we do version checks so that\nit will be a bit easier to understand the reason.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JzJagMmb_E8D4au%3DGYQkxox0AfNBm1FbP7sy7t4YWXPQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Aug 2023 08:21:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san\n\nFYI, the v24-0003 tests for pg_upgrade did not work for me:\n\n~~~\n\n# +++ tap check in src/bin/pg_upgrade +++\n\nt/001_basic.pl ...................... ok\n\nt/002_pg_upgrade.pl ................. ok\n\nt/003_logical_replication_slots.pl .. 7/?\n\n# Failed test 'run of pg_upgrade of old cluster'\n\n# at t/003_logical_replication_slots.pl line 174.\n\n\n\n# Failed test 'pg_upgrade_output.d/ removed after pg_upgrade success'\n\n# at t/003_logical_replication_slots.pl line 187.\n\n\n\n# Failed test 'check the slot exists on new cluster'\n\n# at t/003_logical_replication_slots.pl line 194.\n\n# got: ''\n\n# expected: 'sub|t'\n\n# Tests were run but no plan was declared and done_testing() was not seen.\n\nt/003_logical_replication_slots.pl .. Dubious, test returned 29 (wstat\n7424, 0x1d00)\n\nFailed 3/9 subtests\n\n\n\nTest Summary Report\n\n-------------------\n\nt/003_logical_replication_slots.pl (Wstat: 7424 Tests: 9 Failed: 3)\n\n Failed tests: 7-9\n\n Non-zero exit status: 29\n\n Parse errors: No plan found in TAP output\n\nFiles=3, Tests=35, 116 wallclock secs ( 0.06 usr 0.01 sys + 18.02\ncusr 6.40 csys = 24.49 CPU)\n\nResult: FAIL\n\nmake: *** [check] Error 1\n\n~~~\n\nI can provide the log files with more details about the errors if you\ncannot reproduce this\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 24 Aug 2023 13:24:04 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Notwithstanding the test errors I am getting for v24-0003, here are\nsome code review comments for this patch anyway.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_for_lost_slots\n\n+\n+/*\n+ * Verify that all logical replication slots are usable.\n+ */\n+void\n+check_for_lost_slots(ClusterInfo *cluster)\n\n1a.\nAFAIK we don't ever need to call this also for 'new_cluster'. So the\nfunction should have no parameter and just access 'old_cluster'\ndirectly.\n\n~\n\n1b.\nCan't this be a static function now?\n\n~\n\n2.\n+ for (i = 0; i < ntups; i++)\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" is in 'lost' state.\",\n+ PQgetvalue(res, i, i_slotname));\n\nIs it correct that this message also includes the word \"WARNING\"?\nOther PG_WARNING messages don't do that.\n\n~~~\n\n3. check_for_confirmed_flush_lsn\n\n+/*\n+ * Verify that all logical replication slots consumed all WALs, except a\n+ * CHECKPOINT_SHUTDOWN record.\n+ */\n+static void\n+check_for_confirmed_flush_lsn(ClusterInfo *cluster)\n\nAFAIK we don't ever need to call this also for 'new_cluster'. So the\nfunction should have no parameter and just access 'old_cluster'\ndirectly.\n\n~\n\n4.\n+ for (i = 0; i < ntups; i++)\n+ pg_log(PG_WARNING,\n+ \"\\nWARNING: logical replication slot \\\"%s\\\" has not consumed WALs yet\",\n+ PQgetvalue(res, i, i_slotname));\n\nIs it correct that this message also includes the word \"WARNING\"?\nOther PG_WARNING messages don't do that.\n\n======\nsrc/bin/pg_upgrade/controldata.c\n\n5. get_control_data\n\n+ else if ((p = strstr(bufin, \"Latest checkpoint location:\")) != NULL)\n+ {\n+ /*\n+ * Gather the latest checkpoint location if the cluster is PG17\n+ * or later. This is used for upgrading logical replication\n+ * slots.\n+ */\n+ if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n\nBut we are not \"gathering\" anything. It's just one LSN. I think this\nought to just say \"Read the latest...\"\n\n~\n\n6.\n+ /*\n+ * The upper and lower part of LSN must be read separately\n+ * because it is reported in %X/%X format.\n+ */\n\n/reported/stored as/\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n7.\n+void check_for_lost_slots(ClusterInfo *cluster);\\\n\nWhy is this needed here? Can't this be a static function?\n\n======\n.../t/003_logical_replication_slots.pl\n\n8.\n+# 2. Consume WAL records to avoid another type of upgrade failure. It will be\n+# tested in subsequent cases.\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_logical_slot_get_changes('test_slot1', NULL, NULL);\"\n+);\n\nI wondered if that step really needed. Why will there be WAL records to consume?\n\nIIUC we haven't published anything yet.\n\n~~~\n\n9.\n+# ------------------------------\n+# TEST: Successful upgrade\n+\n+# Preparations for the subsequent test:\n+# 1. Remove the remained slot\n+$old_publisher->start;\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT * FROM pg_drop_replication_slot('test_slot1');\"\n+);\n\nShould removal of the slot be done as part of the cleanup of the\nprevious test, instead of preparing for this one?\n\n~~~\n\n10.\n# 3. Disable the subscription once\n$subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub DISABLE\");\n$old_publisher->stop;\n\n10a.\nWhat do you mean by \"once\"?\n\n~\n\n10b.\nThat old_publisher->stop; seems strangely placed. 
Why is it here?\n\n~~~\n\n11.\n# Check that the slot 'test_slot1' has migrated to the new cluster\n$new_publisher->start;\nmy $result = $new_publisher->safe_psql('postgres',\n\"SELECT slot_name, two_phase FROM pg_replication_slots\");\nis($result, qq(sub|t), 'check the slot exists on new cluster');\n\n~\n\nThat comment now seems wrong. That slot was previously removed, right?\n\n~~~\n\n\n12.\n# Update the connection\nmy $new_connstr = $new_publisher->connstr . ' dbname=postgres';\n$subscriber->safe_psql('postgres',\n\"ALTER SUBSCRIPTION sub CONNECTION '$new_connstr'\");\n$subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub ENABLE\");\n\n~\n\nMaybe better to combine both SQL.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 24 Aug 2023 19:50:39 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> FYI, the v24-0003 tests for pg_upgrade did not work for me:\r\n\r\nHmm, I ran tests more than 1hr but could not reproduce the failure.\r\ncfbot also said OK multiple times...\r\n\r\nCould you please check source codes again and send log files\r\nif it is still problem?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 25 Aug 2023 02:09:38 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! PSA new version patch set.\r\nNote again that 0001 patch was replaced to new one[1], but you do not have to\r\ndiscuss that - it should be done in forked thread.\r\n\r\n>\r\n1.\r\n+ <listitem>\r\n+ <para>\r\n+ All slots on the old cluster must be usable, i.e., there are no slots\r\n+ whose <structfield>wal_status</structfield> is <literal>lost</literal> (see\r\n+ <xref linkend=\"view-pg-replication-slots\"/>).\r\n+ </para>\r\n+ </listitem>\r\n+ <listitem>\r\n+ <para>\r\n+ <structfield>confirmed_flush_lsn</structfield> (see <xref linkend=\"view-pg-replication-slots\"/>)\r\n+ of all slots on the old cluster must be the same as the latest\r\n+ checkpoint location.\r\n+ </para>\r\n+ </listitem>\r\n\r\nIt might be more tidy to change the way those links (e.g. \"See section 54.19\") are presented:\r\n\r\n1a.\r\nSUGGESTION\r\nAll slots on the old cluster must be usable, i.e., there are no slots whose <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>wal_status</structfield> is <literal>lost</literal>.\r\n>\r\n\r\nFixed.\r\n\r\n>\r\n1b.\r\nSUGGESTION\r\n<link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>confirmed_flush_lsn</structfield> of all slots on the old cluster must be the same as the latest checkpoint location.\r\n>\r\n\r\nFixed.\r\n\r\n>\r\n2.\r\n+ /* Logical replication slots can be migrated since PG17. */\r\n+ if (GET_MAJOR_VERSION(new_cluster.major_version) >= 1700)\r\n+ check_new_cluster_logical_replication_slots();\r\n+\r\n\r\nDoes it even make sense to check the new_cluster version? IIUC pg_upgrade *always* updates to the current PG version, which must be 1700 by definition, because this only is a PG17 patch, right?\r\n\r\nFor example, see check_cluster_versions() function where it does this check:\r\n\r\n/* Only current PG version is supported as a target */\r\nif (GET_MAJOR_VERSION(new_cluster.major_version) != GET_MAJOR_VERSION(PG_VERSION_NUM))\r\npg_fatal(\"This utility can only upgrade to PostgreSQL version %s.\",\r\nPG_MAJORVERSION);\r\n>\r\n\r\nYou are right, the new_cluster always has the same version as pg_upgrade.\r\nRemoved.\r\n\r\n>\r\nos_info.libraries = (LibraryInfo *) pg_malloc(totaltups * sizeof(LibraryInfo));\r\ntotaltups = 0;\r\n\r\nfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n{\r\nPGresult *res = ress[dbnum];\r\nint ntups;\r\nint rowno;\r\n\r\nntups = PQntuples(res);\r\nfor (rowno = 0; rowno < ntups; rowno++)\r\n{\r\nchar *lib = PQgetvalue(res, rowno, 0);\r\n\r\nos_info.libraries[totaltups].name = pg_strdup(lib);\r\nos_info.libraries[totaltups].dbnum = dbnum;\r\n\r\ntotaltups++;\r\n}\r\nPQclear(res);\r\n}\r\n\r\n~\r\n\r\nAlthough this was not introduced by your patch, I do not understand why the 'totaltups' variable gets reset to zero and then re-incremented in these loops. \r\n\r\nIn other words, how is it possible for the end result of 'totaltups' to be any different from what was already calculated earlier in this function? \r\n\r\nIMO totaltups = 0; and totaltups++; is just redundant code.\r\n>\r\n\r\nFirst of all, I will not fix that in this thread, it should be done in another\r\nplace. I do not want to expand the thread anymore. Personally, it seemed that\r\ntotaltups was just reused as index for the array.\r\n\r\n\r\n>\r\n4. 
get_logical_slot_infos\r\n\r\n+/*\r\n+ * get_logical_slot_infos()\r\n+ *\r\n+ * Higher level routine to generate LogicalSlotInfoArr for all databases.\r\n+ */\r\n+void\r\n+get_logical_slot_infos(ClusterInfo *cluster)\r\n+{\r\n+ int dbnum;\r\n+\r\n+ /* Logical slots can be migrated since PG17. */\r\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n+ return;\r\n\r\nIt is no longer clear to me what is the purpose of these version checks.\r\n\r\nAs mentioned in comment #2 above, I don't think we need to check the new_cluster >= 1700, because this patch is for PG17 by definition.\r\n\r\nOTOH, I also don't recognise the reason why there has to be a PG17 restriction on the 'old_cluster' version. Such a restriction seems to cripple the usefulness of this patch (eg. cannot even upgrade slots from PG16 to PG17), and there is no explanation given for it. If there is some valid incompatibility reason why only PG17 old_cluster slots can be upgraded then it ought to be described in detail and probably also mentioned in the PG DOCS. \r\n>\r\n\r\nUpgrading logical slots with verifications requires that they surely saved to\r\ndisk while shutting down (0001 patch). Currently we do not have a plan to\r\nbackpatch it, so I think the checking must be needed. Instead, I added\r\ndescriptions in the doc and code comments.\r\n\r\n>\r\n5. count_logical_slots\r\n\r\n+/*\r\n+ * count_logical_slots()\r\n+ *\r\n+ * Sum up and return the number of logical replication slots for all databases.\r\n+ */\r\n+int\r\n+count_logical_slots(ClusterInfo *cluster)\r\n+{\r\n+ int dbnum;\r\n+ int slot_count = 0;\r\n+\r\n+ /* Quick exit if the version is prior to PG17. */\r\n+ if (GET_MAJOR_VERSION(cluster->major_version) <= 1600)\r\n+ return 0;\r\n+\r\n+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)\r\n+ slot_count += cluster->dbarr.dbs[dbnum].slot_arr.nslots;\r\n+\r\n+ return slot_count;\r\n+}\r\n\r\nSame as the previous comment #4. I had doubts about the intent/need for this cluster version checking.\r\n>\r\n\r\nAs I said above, this is needed.\r\n\r\n[1]: https://www.postgresql.org/message-id/CALDaNm0VrAt24e2FxbOX6eJQ-G_tZ0gVpsFBjzQM99NxG0hZfg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 25 Aug 2023 02:10:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! New patch could be available in [1].\r\n\r\n> 1. check_for_lost_slots\r\n> \r\n> +\r\n> +/*\r\n> + * Verify that all logical replication slots are usable.\r\n> + */\r\n> +void\r\n> +check_for_lost_slots(ClusterInfo *cluster)\r\n> \r\n> 1a.\r\n> AFAIK we don't ever need to call this also for 'new_cluster'. So the\r\n> function should have no parameter and just access 'old_cluster'\r\n> directly.\r\n\r\nActually I have asked in previous post, and I understood you like the style.\r\nFixed. Also, get_logical_slot_infos() and count_logical_slots() are also called\r\nonly for old_cluster, then removed the argument.\r\n\r\n> 1b.\r\n> Can't this be a static function now?\r\n\r\nYeah, changed to static.\r\n\r\n> 2.\r\n> + for (i = 0; i < ntups; i++)\r\n> + pg_log(PG_WARNING,\r\n> + \"\\nWARNING: logical replication slot \\\"%s\\\" is in 'lost' state.\",\r\n> + PQgetvalue(res, i, i_slotname));\r\n> \r\n> Is it correct that this message also includes the word \"WARNING\"?\r\n> Other PG_WARNING messages don't do that.\r\n\r\ncreate_script_for_old_cluster_deletion() has the word and I followed that:\r\n\r\n```\r\npg_log(PG_WARNING,\r\n\t \"\\nWARNING: new data directory should not be inside the old data directory, i.e. %s\", old_cluster_pgdata);\r\n```\r\n\r\n> 3. check_for_confirmed_flush_lsn\r\n> \r\n> +/*\r\n> + * Verify that all logical replication slots consumed all WALs, except a\r\n> + * CHECKPOINT_SHUTDOWN record.\r\n> + */\r\n> +static void\r\n> +check_for_confirmed_flush_lsn(ClusterInfo *cluster)\r\n> \r\n> AFAIK we don't ever need to call this also for 'new_cluster'. So the\r\n> function should have no parameter and just access 'old_cluster'\r\n> directly.\r\n\r\nRemoved.\r\n\r\n> 4.\r\n> + for (i = 0; i < ntups; i++)\r\n> + pg_log(PG_WARNING,\r\n> + \"\\nWARNING: logical replication slot \\\"%s\\\" has not consumed WALs yet\",\r\n> + PQgetvalue(res, i, i_slotname));\r\n> \r\n> Is it correct that this message also includes the word \"WARNING\"?\r\n> Other PG_WARNING messages don't do that.\r\n\r\nSee above reply, create_script_for_old_cluster_deletion() has that.\r\n\r\n> src/bin/pg_upgrade/controldata.c\r\n> \r\n> 5. get_control_data\r\n> \r\n> + else if ((p = strstr(bufin, \"Latest checkpoint location:\")) != NULL)\r\n> + {\r\n> + /*\r\n> + * Gather the latest checkpoint location if the cluster is PG17\r\n> + * or later. This is used for upgrading logical replication\r\n> + * slots.\r\n> + */\r\n> + if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\r\n> \r\n> But we are not \"gathering\" anything. It's just one LSN. I think this\r\n> ought to just say \"Read the latest...\"\r\n\r\nChanged.\r\n\r\n> 6.\r\n> + /*\r\n> + * The upper and lower part of LSN must be read separately\r\n> + * because it is reported in %X/%X format.\r\n> + */\r\n> \r\n> /reported/stored as/\r\n\r\nChanged.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 7.\r\n> +void check_for_lost_slots(ClusterInfo *cluster);\\\r\n> \r\n> Why is this needed here? Can't this be a static function?\r\n\r\nRemoved.\r\n\r\n> .../t/003_logical_replication_slots.pl\r\n> \r\n> 8.\r\n> +# 2. Consume WAL records to avoid another type of upgrade failure. It will be\r\n> +# tested in subsequent cases.\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_logical_slot_get_changes('test_slot1', NULL,\r\n> NULL);\"\r\n> +);\r\n> \r\n> I wondered if that step really needed. 
Why will there be WAL records to consume?\r\n> \r\n> IIUC we haven't published anything yet.\r\n\r\nThe primal reason was described in [2], the reply for comment 10.\r\nAfter creating 'test_slot1', another 'test_slot2' is also created, and the\r\nfunction generates the RUNNING_XLOG record. The backtrace is as follows:\r\n\r\npg_create_logical_replication_slot\r\ncreate_logical_replication_slot\r\nCreateInitDecodingContext\r\nReplicationSlotReserveWal\r\nLogStandbySnapshot\r\nLogCurrentRunningXacts\r\nXLogInsert(RM_STANDBY_ID, XLOG_RUNNING_XACTS);\r\n\r\ncheck_for_confirmed_flush_lsn() detects the record and raises FATAL error before\r\nchecking GUC on new cluster.\r\n\r\n> 9.\r\n> +# ------------------------------\r\n> +# TEST: Successful upgrade\r\n> +\r\n> +# Preparations for the subsequent test:\r\n> +# 1. Remove the remained slot\r\n> +$old_publisher->start;\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT * FROM pg_drop_replication_slot('test_slot1');\"\r\n> +);\r\n> \r\n> Should removal of the slot be done as part of the cleanup of the\r\n> previous test, instead of preparing for this one?\r\n\r\nMoved to cleanup part.\r\n\r\n> 10.\r\n> # 3. Disable the subscription once\r\n> $subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub DISABLE\");\r\n> $old_publisher->stop;\r\n> \r\n> 10a.\r\n> What do you mean by \"once\"?\r\n\r\nI added the word because the subscription would be enabled again.\r\nBut after considering more, I thought \"Temporarily\" seems better. Fixed.\r\n\r\n> 10b.\r\n> That old_publisher->stop; seems strangely placed. Why is it here?\r\n\r\nWe must shut down the cluster before doing pg_upgrade. Isn't it same as line 124?\r\n\r\n```\r\n# 2. Generate extra WAL records. Because these WAL records do not get consumed\r\n#\t it will cause the upcoming pg_upgrade test to fail.\r\n$old_publisher->safe_psql('postgres',\r\n\t\"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\"\r\n);\r\n$old_publisher->stop;\r\n```\r\n\r\n> 11.\r\n> # Check that the slot 'test_slot1' has migrated to the new cluster\r\n> $new_publisher->start;\r\n> my $result = $new_publisher->safe_psql('postgres',\r\n> \"SELECT slot_name, two_phase FROM pg_replication_slots\");\r\n> is($result, qq(sub|t), 'check the slot exists on new cluster');\r\n> \r\n> ~\r\n> \r\n> That comment now seems wrong. That slot was previously removed, right?\r\n\r\nYeah, it should be 'sub'. Changed.\r\n\r\n> 12.\r\n> # Update the connection\r\n> my $new_connstr = $new_publisher->connstr . ' dbname=postgres';\r\n> $subscriber->safe_psql('postgres',\r\n> \"ALTER SUBSCRIPTION sub CONNECTION '$new_connstr'\");\r\n> $subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub ENABLE\");\r\n> \r\n> ~\r\n> \r\n> Maybe better to combine both SQL.\r\n\r\nCombined. \r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866D7677BAE6F66839570FCF5E3A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB58668021BB233D129B466122F51CA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n",
"msg_date": "Fri, 25 Aug 2023 02:11:37 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 12:09 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> > FYI, the v24-0003 tests for pg_upgrade did not work for me:\n>\n> Hmm, I ran tests more than 1hr but could not reproduce the failure.\n> cfbot also said OK multiple times...\n>\n\nToday I rebuilt everything clean from the ground up and applied all\nv24*. But this time everything passes. (I have repeated the test 3x\nand 3x it passes)\n\nI don't know what is different, but I have a theory that perhaps\nyesterday the v24-0001 patch did not apply correctly for me (due to\nthere being a pre-existing\ncontrib/test_decoding/t/002_always_persist.pl even after a make\nclean), but that I did not notice the error (due to it being hidden\namong the other whitespace warnings) when applying that first patch.\n\nI think we can assume this was a problem with my environment. Of\ncourse, if I ever see it happen again I will let you know.\n\nSorry for the false alarm.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 25 Aug 2023 13:46:45 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are my review comments for patch v25-0002.\n\nIn general, I feel where possible the version checking is best done in\nthe \"check.c\" file (the filename is a hint). Most of the review\ncomments below are repeating this point.\n\n======\nCommit message.\n\n1.\nI felt this should mention the limitation that the slot upgrade\nfeature is only supported from PG17 slots upwards.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n2.\n+ <para>\n+ <application>pg_upgrade</application> attempts to migrate logical\n+ replication slots. This helps avoid the need for manually defining the\n+ same replication slots on the new publisher. Currently,\n+ <application>pg_upgrade</application> supports migrate logical replication\n+ slots when the old cluster is 17.X and later.\n+ </para>\n\nCurrently, <application>pg_upgrade</application> supports migrate\nlogical replication slots when the old cluster is 17.X and later.\n\nSUGGESTION\nMigration of logical replication slots is only supported when the old\ncluster is version 17.0 or later.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n3. GENERAL\n\nIMO all version checking for this feature should only be done within\nthis \"check.c\" file as much as possible.\n\nThe detailed reason for this PG17 limitation can be in the file header\ncomment of \"pg_upgrade.c\", and then all the version checks can simply\nsay something like:\n\"Logical slot migration is only support for slots in PostgreSQL 17.0\nand later. See atop file pg_upgrade.c for an explanation of this\nlimitation \"\n\n~~~\n\n4. check_and_dump_old_cluster\n\n+ /* Extract a list of logical replication slots */\n+ get_logical_slot_infos();\n+\n\nIMO the version checking should only be done in the \"checking\"\nfunctions, so it should be removed from the within\nget_logical_slot_infos() and put here in the caller.\n\nSUGGESTION\n\n/* Logical slots can be migrated since PG17. */\nif (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n{\n/* Extract a list of logical replication slots */\nget_logical_slot_infos();\n}\n\n~~~\n\n5. check_new_cluster_logical_replication_slots\n\n+check_new_cluster_logical_replication_slots(void)\n+{\n+ PGresult *res;\n+ PGconn *conn;\n+ int nslots = count_logical_slots();\n+ int max_replication_slots;\n+ char *wal_level;\n+\n+ /* Quick exit if there are no logical slots on the old cluster */\n+ if (nslots == 0)\n+ return;\n\nIMO the version checking should only be done in the \"checking\"\nfunctions, so it should be removed from the count_logical_slots() and\nthen this code should be written more like this:\n\nSUGGESTION (notice the quick return comment change too)\n\nint nslots = 0;\n\n/* Logical slots can be migrated since PG17. */\nif (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n nslots = count_logical_slots();\n\n/* Quick return if there are no logical slots to be migrated. */\nif (nslots == 0)\n return;\n\n======\nsrc/bin/pg_upgrade/info.c\n\n6. GENERAL\n\nFor the sake of readability it might be better to make the function\nnames more explicit:\n\nget_logical_slot_infos() -> get_old_cluster_logical_slot_infos()\ncount_logical_slots() -> count_old_cluster_logical_slots()\n\n~~~\n\n7. get_logical_slot_infos\n\n+/*\n+ * get_logical_slot_infos()\n+ *\n+ * Higher level routine to generate LogicalSlotInfoArr for all databases.\n+ *\n+ * Note: This function will not do anything if the old cluster is pre-PG 17.\n+ * The logical slots are not saved at shutdown, and the confirmed_flush_lsn is\n+ * always behind the SHUTDOWN_CHECKPOINT record. 
Subsequent checks done in\n+ * check_for_confirmed_flush_lsn() would raise a FATAL error if such slots are\n+ * included.\n+ */\n+void\n+get_logical_slot_infos(void)\n\nMove all this detailed explanation about the limitation to the\nfile-level comment in \"pg_upgrade.c\". See also review comment #3.\n\n~~~\n\n8. get_logical_slot_infos\n\n+void\n+get_logical_slot_infos(void)\n+{\n+ int dbnum;\n+\n+ /* Logical slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n\nIMO the version checking is best done in the \"checking\" functions. See\nprevious review comments about the caller of this. If you want to put\nsomething here, then just have an Assert:\n\nAssert(GET_MAJOR_VERSION(old_cluster.major_version) >= 1700);\n\n~~~\n\n9. count_logical_slots\n\n+/*\n+ * count_logical_slots()\n+ *\n+ * Sum up and return the number of logical replication slots for all databases.\n+ */\n+int\n+count_logical_slots(void)\n+{\n+ int dbnum;\n+ int slot_count = 0;\n+\n+ /* Quick exit if the version is prior to PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return 0;\n+\n+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n+ slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\n+\n+ return slot_count;\n+}\n\nIMO it is better to remove the version-checking side-effect here. Do\nthe version checks from the \"check\" functions where this is called\nfrom. Also removing the check from here gives the ability to output\nmore useful messages -- e.g. review comment #11\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.c\n\n10. File-level comment\n\nAdd a detailed explanation about the limitation in the file-level\ncomment. See review comment #3 for details.\n\n~~~\n\n11.\n+ /*\n+ * Create logical replication slots.\n+ *\n+ * Note: This must be done after doing the pg_resetwal command because\n+ * pg_resetwal would remove required WALs.\n+ */\n+ if (count_logical_slots())\n+ {\n+ start_postmaster(&new_cluster, true);\n+ create_logical_replication_slots();\n+ stop_postmaster(false);\n+ }\n+\n\nIMO it is better to do the explicit version checking here, instead of\nrelying on a side-effect within the count_logical_slots() function.\n\nSUGGESTION #1\n\n/* Logical replication slot upgrade only supported for old_cluster >= PG17 */\nif (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n{\nif (count_logical_slots())\n{\nstart_postmaster(&new_cluster, true);\ncreate_logical_replication_slots();\nstop_postmaster(false);\n}\n}\n\nAND...\n\nBy doing this, you will be able to provide more useful output here like this:\n\nSUGGESTION #2 (my preferred)\n\nif (count_logical_slots())\n{\n if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n {\n pg_log(PG_WARNING,\n \"\\nWARNING: This utility can only upgrade logical\nreplication slots present in PostgreSQL version %s and later.\",\n \"17.0\");\n }\n else\n {\n start_postmaster(&new_cluster, true);\n create_logical_replication_slots();\n stop_postmaster(false);\n }\n}\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 25 Aug 2023 18:43:47 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san,\n\nHere are my review comments for patch v25-0003.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. GENERAL\n\n+static void check_for_confirmed_flush_lsn(void);\n+static void check_for_lost_slots(void);\n\nFor more clarity, I wonder if it is better to rename some functions:\n\ncheck_for_confirmed_flush_lsn() -> check_old_cluster_for_confirmed_flush_lsn()\ncheck_for_lost_slots() -> check_old_cluster_for_lost_slots()\n\n~~~\n\n2.\n+ /*\n+ * Logical replication slots can be migrated since PG17. See comments atop\n+ * get_logical_slot_infos().\n+ */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n+ {\n+ check_for_lost_slots();\n+\n+ /*\n+ * Do additional checks if a live check is not required. This requires\n+ * that confirmed_flush_lsn of all the slots is the same as the latest\n+ * checkpoint location, but it would be satisfied only when the server\n+ * has been shut down.\n+ */\n+ if (!live_check)\n+ check_for_confirmed_flush_lsn();\n+ }\n+\n\n2a.\nIf my suggestions from v25-0002 [1] are adopted then this comment\nneeds to change to say like \"See atop file pg_upgrade.c...\"\n\n~\n\n2b.\nHmm. If my suggestions from v25-0002 [1] are adopted then the version\nchecking and the slot counting would *already* be in this calling\nfunction. In that case, why can't this whole fragment be put in the\nsame place? E.g. IIUC there is no reason to call these at checks all\nwhen the old_cluster slot count is already known to be 0. Similarly,\nthere is no reason that both these functions need to be independently\nchecking count_logical_slots again since we have already done that\n(again, assuming my suggestions from v25-0002 [1] are adopted).\n\n~~~\n\n3. check_for_lost_slots\n\n+/*\n+ * Verify that all logical replication slots are usable.\n+ */\n+void\n+check_for_lost_slots(void)\n\nThis was forward-declared to be static, but the static function\nmodifier is absent here.\n\n~\n\n4. check_for_lost_slots\n\n+ /* Quick exit if the cluster does not have logical slots. */\n+ if (count_logical_slots() == 0)\n+ return;\n+\n\nAFAICT this quick exit can be removed. See my comment #2b.\n\n~~~\n\n5. check_for_confirmed_flush_lsn\n\n+check_for_confirmed_flush_lsn(void)\n+{\n+ int i,\n+ ntups,\n+ i_slotname;\n+ PGresult *res;\n+ DbInfo *active_db = &old_cluster.dbarr.dbs[0];\n+ PGconn *conn;\n+\n+ /* Quick exit if the cluster does not have logical slots. */\n+ if (count_logical_slots() == 0)\n+ return;\n\nAFAICT this quick exit can be removed. See my comment #2b.\n\n======\n.../t/003_logical_replication_slots.pl\n\n6.\n+# 2. Temporarily disable the subscription\n+$subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub DISABLE\");\n $old_publisher->stop;\n\nIn my previous 0003 review ([2] #10b) I was not questioning the need\nfor the $old_publisher->stop; before the pg_upgrade. 
I was only asking\nwhy it was done at this location (after the DISABLE) instead of\nearlier.\n\n~~~\n\n7.\n+# Check whether changes on the new publisher get replicated to the subscriber\n+$new_publisher->safe_psql('postgres',\n+ \"INSERT INTO tbl VALUES (generate_series(11, 20))\");\n+$new_publisher->wait_for_catchup('sub');\n+$result = $subscriber->safe_psql('postgres', \"SELECT count(*) FROM tbl\");\n+is($result, qq(20), 'check changes are shipped to the subscriber');\n\n/shipped/replicated/\n\n------\n[1] My review of patch v25-0002 -\nhttps://www.postgresql.org/message-id/CAHut%2BPtQcou3Bfm9A5SbhFuo2uKK-6u4_j_59so3skAi8Ns03A%40mail.gmail.com\n[2] My review of v24-0003 -\nhttps://www.postgresql.org/message-id/CAHut%2BPs5%3D9q1CCyrrytyv-8oUBqE6rv-%3DYFSRuuQwVf%2BsmC-Kw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 25 Aug 2023 20:20:18 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Aug 25, 2023 at 2:14 PM Peter Smith <[email protected]> wrote:\n>\n> Here are my review comments for patch v25-0002.\n>\n> In general, I feel where possible the version checking is best done in\n> the \"check.c\" file (the filename is a hint). Most of the review\n> comments below are repeating this point.\n>\n> ======\n> Commit message.\n>\n> 1.\n> I felt this should mention the limitation that the slot upgrade\n> feature is only supported from PG17 slots upwards.\n>\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 2.\n> + <para>\n> + <application>pg_upgrade</application> attempts to migrate logical\n> + replication slots. This helps avoid the need for manually defining the\n> + same replication slots on the new publisher. Currently,\n> + <application>pg_upgrade</application> supports migrate logical replication\n> + slots when the old cluster is 17.X and later.\n> + </para>\n>\n> Currently, <application>pg_upgrade</application> supports migrate\n> logical replication slots when the old cluster is 17.X and later.\n>\n> SUGGESTION\n> Migration of logical replication slots is only supported when the old\n> cluster is version 17.0 or later.\n>\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 3. GENERAL\n>\n> IMO all version checking for this feature should only be done within\n> this \"check.c\" file as much as possible.\n>\n> The detailed reason for this PG17 limitation can be in the file header\n> comment of \"pg_upgrade.c\", and then all the version checks can simply\n> say something like:\n> \"Logical slot migration is only support for slots in PostgreSQL 17.0\n> and later. See atop file pg_upgrade.c for an explanation of this\n> limitation \"\n>\n\nI don't think it is a good idea to move these comments atop\npg_upgrade.c as it is specific to slots. To me, the current place\nproposed by the patch appears reasonable.\n\n> ~~~\n>\n> 4. check_and_dump_old_cluster\n>\n> + /* Extract a list of logical replication slots */\n> + get_logical_slot_infos();\n> +\n>\n> IMO the version checking should only be done in the \"checking\"\n> functions, so it should be removed from the within\n> get_logical_slot_infos() and put here in the caller.\n>\n\nI think we should do it where it makes more sense. As far as I can see\ncurrently there is no such rule.\n\n> SUGGESTION\n>\n> /* Logical slots can be migrated since PG17. */\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n> {\n> /* Extract a list of logical replication slots */\n> get_logical_slot_infos();\n> }\n>\n\nI find the current place better than this suggestion.\n\n> ~~~\n>\n> 5. check_new_cluster_logical_replication_slots\n>\n> +check_new_cluster_logical_replication_slots(void)\n> +{\n> + PGresult *res;\n> + PGconn *conn;\n> + int nslots = count_logical_slots();\n> + int max_replication_slots;\n> + char *wal_level;\n> +\n> + /* Quick exit if there are no logical slots on the old cluster */\n> + if (nslots == 0)\n> + return;\n>\n> IMO the version checking should only be done in the \"checking\"\n> functions, so it should be removed from the count_logical_slots() and\n> then this code should be written more like this:\n>\n> SUGGESTION (notice the quick return comment change too)\n>\n> int nslots = 0;\n>\n> /* Logical slots can be migrated since PG17. */\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n> nslots = count_logical_slots();\n>\n> /* Quick return if there are no logical slots to be migrated. */\n> if (nslots == 0)\n> return;\n>\n\n+1.\n\n> ======\n> src/bin/pg_upgrade/info.c\n>\n> 6. 
GENERAL\n>\n> For the sake of readability it might be better to make the function\n> names more explicit:\n>\n> get_logical_slot_infos() -> get_old_cluster_logical_slot_infos()\n> count_logical_slots() -> count_old_cluster_logical_slots()\n>\n> ~~~\n>\n> 7. get_logical_slot_infos\n>\n> +/*\n> + * get_logical_slot_infos()\n> + *\n> + * Higher level routine to generate LogicalSlotInfoArr for all databases.\n> + *\n> + * Note: This function will not do anything if the old cluster is pre-PG 17.\n> + * The logical slots are not saved at shutdown, and the confirmed_flush_lsn is\n> + * always behind the SHUTDOWN_CHECKPOINT record. Subsequent checks done in\n> + * check_for_confirmed_flush_lsn() would raise a FATAL error if such slots are\n> + * included.\n> + */\n> +void\n> +get_logical_slot_infos(void)\n>\n> Move all this detailed explanation about the limitation to the\n> file-level comment in \"pg_upgrade.c\". See also review comment #3.\n>\n\n-1. This is not generic enough to be moved to pg_upgrade.c.\n\n>\n> 11.\n> + /*\n> + * Create logical replication slots.\n> + *\n> + * Note: This must be done after doing the pg_resetwal command because\n> + * pg_resetwal would remove required WALs.\n> + */\n> + if (count_logical_slots())\n> + {\n> + start_postmaster(&new_cluster, true);\n> + create_logical_replication_slots();\n> + stop_postmaster(false);\n> + }\n> +\n>\n> IMO it is better to do the explicit version checking here, instead of\n> relying on a side-effect within the count_logical_slots() function.\n>\n> SUGGESTION #1\n>\n> /* Logical replication slot upgrade only supported for old_cluster >= PG17 */\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n> {\n> if (count_logical_slots())\n> {\n> start_postmaster(&new_cluster, true);\n> create_logical_replication_slots();\n> stop_postmaster(false);\n> }\n> }\n>\n> AND...\n>\n> By doing this, you will be able to provide more useful output here like this:\n>\n> SUGGESTION #2 (my preferred)\n>\n> if (count_logical_slots())\n> {\n> if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> {\n> pg_log(PG_WARNING,\n> \"\\nWARNING: This utility can only upgrade logical\n> replication slots present in PostgreSQL version %s and later.\",\n> \"17.0\");\n> }\n> else\n> {\n> start_postmaster(&new_cluster, true);\n> create_logical_replication_slots();\n> stop_postmaster(false);\n> }\n> }\n>\n\nI don't like suggestion#2 much. I don't feel the need for such a WARNING.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Aug 2023 16:46:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version patch set.\r\n \r\n> ======\r\n> Commit message.\r\n> \r\n> 1.\r\n> I felt this should mention the limitation that the slot upgrade\r\n> feature is only supported from PG17 slots upwards.\r\n\r\nAdded. The same sentence as doc was used.\r\n\r\n> doc/src/sgml/ref/pgupgrade.sgml\r\n> \r\n> 2.\r\n> + <para>\r\n> + <application>pg_upgrade</application> attempts to migrate logical\r\n> + replication slots. This helps avoid the need for manually defining the\r\n> + same replication slots on the new publisher. Currently,\r\n> + <application>pg_upgrade</application> supports migrate logical\r\n> replication\r\n> + slots when the old cluster is 17.X and later.\r\n> + </para>\r\n> \r\n> Currently, <application>pg_upgrade</application> supports migrate\r\n> logical replication slots when the old cluster is 17.X and later.\r\n> \r\n> SUGGESTION\r\n> Migration of logical replication slots is only supported when the old\r\n> cluster is version 17.0 or later.\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 3. GENERAL\r\n> \r\n> IMO all version checking for this feature should only be done within\r\n> this \"check.c\" file as much as possible.\r\n> \r\n> The detailed reason for this PG17 limitation can be in the file header\r\n> comment of \"pg_upgrade.c\", and then all the version checks can simply\r\n> say something like:\r\n> \"Logical slot migration is only support for slots in PostgreSQL 17.0\r\n> and later. See atop file pg_upgrade.c for an explanation of this\r\n> limitation \"\r\n\r\nHmm, I'm not sure it should be and Amit disagreed [1].\r\nI did not address this one.\r\n\r\n> 4. check_and_dump_old_cluster\r\n> \r\n> + /* Extract a list of logical replication slots */\r\n> + get_logical_slot_infos();\r\n> +\r\n> \r\n> IMO the version checking should only be done in the \"checking\"\r\n> functions, so it should be removed from the within\r\n> get_logical_slot_infos() and put here in the caller.\r\n> \r\n> SUGGESTION\r\n> \r\n> /* Logical slots can be migrated since PG17. */\r\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\r\n> {\r\n> /* Extract a list of logical replication slots */\r\n> get_logical_slot_infos();\r\n> }\r\n\r\nPer discussion [1], I did not address the comment.\r\n\r\n> 5. check_new_cluster_logical_replication_slots\r\n> \r\n> +check_new_cluster_logical_replication_slots(void)\r\n> +{\r\n> + PGresult *res;\r\n> + PGconn *conn;\r\n> + int nslots = count_logical_slots();\r\n> + int max_replication_slots;\r\n> + char *wal_level;\r\n> +\r\n> + /* Quick exit if there are no logical slots on the old cluster */\r\n> + if (nslots == 0)\r\n> + return;\r\n> \r\n> IMO the version checking should only be done in the \"checking\"\r\n> functions, so it should be removed from the count_logical_slots() and\r\n> then this code should be written more like this:\r\n> \r\n> SUGGESTION (notice the quick return comment change too)\r\n> \r\n> int nslots = 0;\r\n> \r\n> /* Logical slots can be migrated since PG17. */\r\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\r\n> nslots = count_logical_slots();\r\n> \r\n> /* Quick return if there are no logical slots to be migrated. */\r\n> if (nslots == 0)\r\n> return;\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 6. 
GENERAL\r\n> \r\n> For the sake of readability it might be better to make the function\r\n> names more explicit:\r\n> \r\n> get_logical_slot_infos() -> get_old_cluster_logical_slot_infos()\r\n> count_logical_slots() -> count_old_cluster_logical_slots()\r\n\r\nFixed. Moreover, get_logical_slot_infos_per_db() also followed the style.\r\n\r\n> 7. get_logical_slot_infos\r\n> \r\n> +/*\r\n> + * get_logical_slot_infos()\r\n> + *\r\n> + * Higher level routine to generate LogicalSlotInfoArr for all databases.\r\n> + *\r\n> + * Note: This function will not do anything if the old cluster is pre-PG 17.\r\n> + * The logical slots are not saved at shutdown, and the confirmed_flush_lsn is\r\n> + * always behind the SHUTDOWN_CHECKPOINT record. Subsequent checks\r\n> done in\r\n> + * check_for_confirmed_flush_lsn() would raise a FATAL error if such slots are\r\n> + * included.\r\n> + */\r\n> +void\r\n> +get_logical_slot_infos(void)\r\n> \r\n> Move all this detailed explanation about the limitation to the\r\n> file-level comment in \"pg_upgrade.c\". See also review comment #3.\r\n\r\nPer discussion [1], I did not address the comment.\r\n\r\n> 8. get_logical_slot_infos\r\n> \r\n> +void\r\n> +get_logical_slot_infos(void)\r\n> +{\r\n> + int dbnum;\r\n> +\r\n> + /* Logical slots can be migrated since PG17. */\r\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\r\n> + return;\r\n> \r\n> IMO the version checking is best done in the \"checking\" functions. See\r\n> previous review comments about the caller of this. If you want to put\r\n> something here, then just have an Assert:\r\n> \r\n> Assert(GET_MAJOR_VERSION(old_cluster.major_version) >= 1700);\r\n\r\nAs I said above, check_and_dump_old_cluster() still does not check major version\r\nbefore calling get_old_cluster_logical_slot_infos(). So I kept current style.\r\n\r\n> 9. count_logical_slots\r\n> \r\n> +/*\r\n> + * count_logical_slots()\r\n> + *\r\n> + * Sum up and return the number of logical replication slots for all databases.\r\n> + */\r\n> +int\r\n> +count_logical_slots(void)\r\n> +{\r\n> + int dbnum;\r\n> + int slot_count = 0;\r\n> +\r\n> + /* Quick exit if the version is prior to PG17. */\r\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\r\n> + return 0;\r\n> +\r\n> + for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> + slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\r\n> +\r\n> + return slot_count;\r\n> +}\r\n> \r\n> IMO it is better to remove the version-checking side-effect here. Do\r\n> the version checks from the \"check\" functions where this is called\r\n> from. Also removing the check from here gives the ability to output\r\n> more useful messages -- e.g. review comment #11\r\n\r\nApart from this, count_old_cluster_logical_slots() are called after checking\r\nmajor version. Assert() was added instead.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.c\r\n> \r\n> 10. File-level comment\r\n> \r\n> Add a detailed explanation about the limitation in the file-level\r\n> comment. 
See review comment #3 for details.\r\n\r\nPer discussion [1], I did not address the comment.\r\n\r\n> 11.\r\n> + /*\r\n> + * Create logical replication slots.\r\n> + *\r\n> + * Note: This must be done after doing the pg_resetwal command because\r\n> + * pg_resetwal would remove required WALs.\r\n> + */\r\n> + if (count_logical_slots())\r\n> + {\r\n> + start_postmaster(&new_cluster, true);\r\n> + create_logical_replication_slots();\r\n> + stop_postmaster(false);\r\n> + }\r\n> +\r\n> \r\n> IMO it is better to do the explicit version checking here, instead of\r\n> relying on a side-effect within the count_logical_slots() function.\r\n> \r\n> SUGGESTION #1\r\n> \r\n> /* Logical replication slot upgrade only supported for old_cluster >= PG17 \r\n*/\r\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\r\n> {\r\n> if (count_logical_slots())\r\n> {\r\n> start_postmaster(&new_cluster, true);\r\n> create_logical_replication_slots();\r\n> stop_postmaster(false);\r\n> }\r\n> }\r\n> \r\n> AND...\r\n> \r\n> By doing this, you will be able to provide more useful output here like this:\r\n> \r\n> SUGGESTION #2 (my preferred)\r\n> \r\n> if (count_logical_slots())\r\n> {\r\n> if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\r\n> {\r\n> pg_log(PG_WARNING,\r\n> \"\\nWARNING: This utility can only upgrade logical\r\n> replication slots present in PostgreSQL version %s and later.\",\r\n> \"17.0\");\r\n> }\r\n> else\r\n> {\r\n> start_postmaster(&new_cluster, true);\r\n> create_logical_replication_slots();\r\n> stop_postmaster(false);\r\n> }\r\n> }\r\n>\r\n\r\nPer discussion [1], SUGGESTION #1 was chosen.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1Jfk6eQSpasg+GoJVjtkQ3tFSihurbCFwnL3oV75BoUgQ@mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Sat, 26 Aug 2023 04:23:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! The patch can be available in [1].\r\n\r\n> Here are my review comments for patch v25-0003.\r\n> \r\n> ======\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 1. GENERAL\r\n> \r\n> +static void check_for_confirmed_flush_lsn(void);\r\n> +static void check_for_lost_slots(void);\r\n> \r\n> For more clarity, I wonder if it is better to rename some functions:\r\n> \r\n> check_for_confirmed_flush_lsn() -> check_old_cluster_for_confirmed_flush_lsn()\r\n> check_for_lost_slots() -> check_old_cluster_for_lost_slots()\r\n\r\nReplaced.\r\n\r\n> 2.\r\n> + /*\r\n> + * Logical replication slots can be migrated since PG17. See comments atop\r\n> + * get_logical_slot_infos().\r\n> + */\r\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\r\n> + {\r\n> + check_for_lost_slots();\r\n> +\r\n> + /*\r\n> + * Do additional checks if a live check is not required. This requires\r\n> + * that confirmed_flush_lsn of all the slots is the same as the latest\r\n> + * checkpoint location, but it would be satisfied only when the server\r\n> + * has been shut down.\r\n> + */\r\n> + if (!live_check)\r\n> + check_for_confirmed_flush_lsn();\r\n> + }\r\n> +\r\n> \r\n> 2a.\r\n> If my suggestions from v25-0002 [1] are adopted then this comment\r\n> needs to change to say like \"See atop file pg_upgrade.c...\"\r\n>\r\n> 2b.\r\n> Hmm. If my suggestions from v25-0002 [1] are adopted then the version\r\n> checking and the slot counting would *already* be in this calling\r\n> function. In that case, why can't this whole fragment be put in the\r\n> same place? E.g. IIUC there is no reason to call these at checks all\r\n> when the old_cluster slot count is already known to be 0. Similarly,\r\n> there is no reason that both these functions need to be independently\r\n> checking count_logical_slots again since we have already done that\r\n> (again, assuming my suggestions from v25-0002 [1] are adopted).\r\n\r\nCurrently I did not accept the comment, so they were ignored.\r\n\r\n> 3. check_for_lost_slots\r\n> \r\n> +/*\r\n> + * Verify that all logical replication slots are usable.\r\n> + */\r\n> +void\r\n> +check_for_lost_slots(void)\r\n> \r\n> This was forward-declared to be static, but the static function\r\n> modifier is absent here.\r\n\r\nFixed.\r\n\r\n> 4. check_for_lost_slots\r\n> \r\n> + /* Quick exit if the cluster does not have logical slots. */\r\n> + if (count_logical_slots() == 0)\r\n> + return;\r\n> +\r\n> \r\n> AFAICT this quick exit can be removed. See my comment #2b.\r\n\r\n2b was skipped, so IIUC this is still needed.\r\n\r\n> 5. check_for_confirmed_flush_lsn\r\n> \r\n> +check_for_confirmed_flush_lsn(void)\r\n> +{\r\n> + int i,\r\n> + ntups,\r\n> + i_slotname;\r\n> + PGresult *res;\r\n> + DbInfo *active_db = &old_cluster.dbarr.dbs[0];\r\n> + PGconn *conn;\r\n> +\r\n> + /* Quick exit if the cluster does not have logical slots. */\r\n> + if (count_logical_slots() == 0)\r\n> + return;\r\n> \r\n> AFAICT this quick exit can be removed. See my comment #2b.\r\n\r\nI kept the style.\r\n\r\n> .../t/003_logical_replication_slots.pl\r\n> \r\n> 6.\r\n> +# 2. Temporarily disable the subscription\r\n> +$subscriber->safe_psql('postgres', \"ALTER SUBSCRIPTION sub DISABLE\");\r\n> $old_publisher->stop;\r\n> \r\n> In my previous 0003 review ([2] #10b) I was not questioning the need\r\n> for the $old_publisher->stop; before the pg_upgrade. I was only asking\r\n> why it was done at this location (after the DISABLE) instead of\r\n> earlier.\r\n\r\nI see. 
The reason was to avoid an unnecessary error from the apply worker.\r\n\r\nAs the premise, the position of shutting down (before or after the DISABLE) does\r\nnot affect the result. But if it is placed earlier than the DISABLE, the apply worker\r\nwill exit with the below error because the walsender exits earlier than the worker:\r\n\r\n```\r\nERROR: could not send end-of-streaming message to primary: server closed the connection unexpectedly\r\n This probably means the server terminated abnormally\r\n before or while processing the request.\r\n no COPY in progress\r\n```\r\n\r\nIt is not problematic, but future readers may be confused if they find it.\r\nSo I avoided it.\r\n\r\n> 7.\r\n> +# Check whether changes on the new publisher get replicated to the subscriber\r\n> +$new_publisher->safe_psql('postgres',\r\n> + \"INSERT INTO tbl VALUES (generate_series(11, 20))\");\r\n> +$new_publisher->wait_for_catchup('sub');\r\n> +$result = $subscriber->safe_psql('postgres', \"SELECT count(*) FROM tbl\");\r\n> +is($result, qq(20), 'check changes are shipped to the subscriber');\r\n> \r\n> /shipped/replicated/\r\n\r\nYou meant to say s/replicated/shipped/, right? Fixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866C6DE11EBC96752CEB7DEF5E2A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Sat, 26 Aug 2023 04:25:34 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sat, Aug 26, 2023 at 9:54 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> Thank you for reviewing! PSA new version patch set.\n\nI haven't read this thread in detail, but I have one high-level design\nquestion. The upgrade of the replication slot is default or is it\nunder some GUC? because if it is by default then some of the users\nmight experience failure in some cases e.g. a) wal_level in the new\ncluster is not logical b) If this new check\ncheck_old_cluster_for_confirmed_flush_lsn() fails due to confirm flush\nLSN is not at the latest shutdown checkpoint. I am not sure whether this\nis a problem or could be just handled by documenting this behavior.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Aug 2023 09:53:09 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Dilip,\r\n\r\nThank you for reading the thread!\r\n\r\n> I haven't read this thread in detail, but I have one high-level design\r\n> question. The upgrade of the replication slot is default or is it\r\n> under some GUC?\r\n\r\nI designed that logical slots were upgraded by default.\r\n\r\n> because if it is by default then some of the users\r\n> might experience failure in some cases e.g. a) wal_level in the new\r\n> cluster is not logical b) If this new check\r\n> check_old_cluster_for_confirmed_flush_lsn() fails due to confirm flush\r\n> LSN is not at the latest shutdown checkpoint. I am not sure whether this\r\n> is a problem or could be just handled by documenting this behavior.\r\n\r\nI think it should be done by default to avoid WAL hole. If we do not provide the\r\nupgrading by default, users may forget to specify the option. After sometime\r\nhe/she would notice that slots are not migrated and would create slots at that time,\r\nbut this leads data loss of subscriber. The inconsistency between nodes is really\r\nbad. Developers requested to enable by default [1].\r\n\r\nMoreover, checking related with logical slots are skipped when slots are not defined\r\non the old cluster. So it do not affect when users do not use logical slots.\r\n\r\nAlso, we are considering that an option for excluding slots is introduced after\r\ncommitted once [2].\r\n\r\n[1]: https://www.postgresql.org/message-id/ad83b9f2-ced3-c51c-342a-cc281ff562fc%40postgresql.org\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1KxP%2BgogYOsTHbZVPO7Pp38gcRjEWUxv%2B4X3dFept3z3A%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Mon, 28 Aug 2023 05:23:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi, here are my comments for patch v26-0002.\n\n======\n1. About the PG17 limitation\n\nIn my previous review of v25-0002, I suggested that the PG17\nlimitation should be documented atop one of the source files. See\n[1]#3, [1]#7, [1]#10\n\nI just wanted to explain the reason for that suggestion.\n\nCurrently, all the new version checks have a comment like \"/* Logical\nslots can be migrated since PG17. */\". I felt that it would be better\nif those comments said something more like \"/* Logical slots can be\nmigrated since PG17. See XYZ for details. */\". I don't really care\n*where* the main explanation lives, but I thought since it is\nreferenced from multiple places it might be easier to find if it was\natop some file instead of just in a function comment. YMMV.\n\n======\n2. Do version checking in check_and_dump_old_cluster instead of inside\nget_old_cluster_logical_slot_infos\n\ncheck_and_dump_old_cluster - Should check version before calling\nget_old_cluster_logical_slot_infos\nget_old_cluster_logical_slot_infos - Keep a sanity check Assert if you\nwish (or do nothing -- e.g. see #3 below)\n\nRefer to [1]#4, [1]#8\n\nIsn't it self-evident from the file/function names what kind of logic\nthey are intended to have in them? Sure, there may be some exceptions\nbut unless it is difficult to implement I think most people would\nreasonably assume:\n\n- checking code should be in file \"check.c\"\n-- e.g. a function called 'check_and_dump_old_cluster' ought to be\n*checking* stuff\n\n- info fetching code should be in file \"info.c\"\n\n~~\n\nAnother motivation for this suggestion becomes more obvious later with\npatch 0003. By checking at the \"higher\" level (in check.c) it means\nmultiple related functions can all be called under one version check.\nLess checking means less code and/or simpler code. For example,\nmultiple redundant calls to get_old_cluster_count_slots() can be\navoided in patch 0003 by writing *less* code, than v26* currently has.\n\n======\n3. count_old_cluster_logical_slots\n\nI think there is nothing special in this logic that will crash if PG\nversion <= 1600. Keep the Assert for sanity checking if you wish, but\nthis is already guarded by the call in pg_upgrade.c so perhaps it is\noverkill.\n\n------\n[1] My review of v25-0002 -\nhttps://www.postgresql.org/message-id/CAHut%2BPtQcou3Bfm9A5SbhFuo2uKK-6u4_j_59so3skAi8Ns03A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:27:27 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi, here are my review comments for v26-0003\n\nIt seems I must defend some of my previous suggestions from v25* [1],\nso here goes...\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_and_dump_old_cluster\n\nCURRENT CODE (with v26-0003 patch applied)\n\n/* Extract a list of logical replication slots */\nget_old_cluster_logical_slot_infos();\n\n...\n\n/*\n* Logical replication slots can be migrated since PG17. See comments atop\n* get_old_cluster_logical_slot_infos().\n*/\nif (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n{\ncheck_old_cluster_for_lost_slots();\n\n/*\n* Do additional checks if a live check is not required. This requires\n* that confirmed_flush_lsn of all the slots is the same as the latest\n* checkpoint location, but it would be satisfied only when the server\n* has been shut down.\n*/\nif (!live_check)\ncheck_old_cluster_for_confirmed_flush_lsn();\n}\n\n\nSUGGESTION\n\n/*\n * Logical replication slots can be migrated since PG17. See comments atop\n * get_old_cluster_logical_slot_infos().\n */\nif (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700) // NOTE 1a.\n{\n /* Extract a list of logical replication slots */\n get_old_cluster_logical_slot_infos();\n\n if (count_old_cluster_slots()) // NOTE 1b.\n {\n check_old_cluster_for_lost_slots();\n\n /*\n * Do additional checks if a live check is not required. This requires\n * that confirmed_flush_lsn of all the slots is the same as the latest\n * checkpoint location, but it would be satisfied only when the server\n * has been shut down.\n */\n if (!live_check)\n check_old_cluster_for_confirmed_flush_lsn();\n }\n}\n\n~~\n\nBenefits:\n\n1a.\nOne version check instead of multiple.\n\n~\n\n1b.\nUpfront slot counting means\n- only call 1 time to count_old_cluster_slots().\n- unnecessary calls to other check* functions are avoided\n\n~\n\n1c.\nget_old_cluster_logical_slot_infos\n- No version check is needed.\n\ncheck_old_cluster_for_lost_slots\n- Call to count_old_cluster_slots is not needed\n- Quick exit not needed.\n\ncheck_old_cluster_for_confirmed_flush_lsn\n- Call to count_old_cluster_slots is not needed\n- Quick exit not needed.\n\n~~~\n\n2. check_old_cluster_for_lost_slots\n\n+ /* Quick exit if the cluster does not have logical slots. */\n+ if (count_old_cluster_logical_slots() == 0)\n+ return;\n\nRefer [1]#4. Can remove this because #1b above.\n\n~~~\n\n3. check_old_cluster_for_confirmed_flush_lsn\n\n+ /* Quick exit if the cluster does not have logical slots. */\n+ if (count_old_cluster_logical_slots() == 0)\n+ return;\n\nRefer [1]#5. Can remove this because #1b above.\n\n~~~\n\n4. .../t/003_logical_replication_slots.pl\n\n/shipped/replicated/\n\nKuroda-san 26/8 wrote:\nYou meant to say s/replicated/shipped/, right? Fixed.\n\nNo, I meant what I wrote for [1]#7. I was referring to the word\n\"shipped\" in the message 'check changes are shipped to the\nsubscriber'. Now there are 2 places to change instead of one.\n\n------\n[1] my review of v25-0003.\nhttps://www.postgresql.org/message-id/CAHut%2BPsdkhcVG5GY4ZW0DMUF8FG%3DWvjaGN%2BNA4XFLrzxWSQXVA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:31:16 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\n\nThank you for reviewing! PSA new version patch set.\n\n> ======\n> 1. About the PG17 limitation\n> \n> In my previous review of v25-0002, I suggested that the PG17\n> limitation should be documented atop one of the source files. See\n> [1]#3, [1]#7, [1]#10\n> \n> I just wanted to explain the reason for that suggestion.\n> \n> Currently, all the new version checks have a comment like \"/* Logical\n> slots can be migrated since PG17. */\". I felt that it would be better\n> if those comments said something more like \"/* Logical slots can be\n> migrated since PG17. See XYZ for details. */\". I don't really care\n> *where* the main explanation lives, but I thought since it is\n> referenced from multiple places it might be easier to find if it was\n> atop some file instead of just in a function comment. YMMV.\n> \n> ======\n> 2. Do version checking in check_and_dump_old_cluster instead of inside\n> get_old_cluster_logical_slot_infos\n> \n> check_and_dump_old_cluster - Should check version before calling\n> get_old_cluster_logical_slot_infos\n> get_old_cluster_logical_slot_infos - Keep a sanity check Assert if you\n> wish (or do nothing -- e.g. see #3 below)\n> \n> Refer to [1]#4, [1]#8\n> \n> Isn't it self-evident from the file/function names what kind of logic\n> they are intended to have in them? Sure, there may be some exceptions\n> but unless it is difficult to implement I think most people would\n> reasonably assume:\n> \n> - checking code should be in file \"check.c\"\n> -- e.g. a function called 'check_and_dump_old_cluster' ought to be\n> *checking* stuff\n> \n> - info fetching code should be in file \"info.c\"\n> \n> ~~\n> \n> Another motivation for this suggestion becomes more obvious later with\n> patch 0003. By checking at the \"higher\" level (in check.c) it means\n> multiple related functions can all be called under one version check.\n> Less checking means less code and/or simpler code. For example,\n> multiple redundant calls to get_old_cluster_count_slots() can be\n> avoided in patch 0003 by writing *less* code, than v26* currently has.\n\nIIUC these points were disagreed by Amit, so I would keep my code until he posts\nopinions.\n\n> 3. count_old_cluster_logical_slots\n> \n> I think there is nothing special in this logic that will crash if PG\n> version <= 1600. Keep the Assert for sanity checking if you wish, but\n> this is already guarded by the call in pg_upgrade.c so perhaps it is\n> overkill.\n\nYour point is right.\nI have checked some version-specific functions like check_for_aclitem_data_type_usage()\nand check_for_user_defined_encoding_conversions(), they do not have assert(). So\nremoved from it. As for free_db_and_rel_infos(), the Assert() ensures that new\ncluster does not have logical slots, so I kept it.\n\nAlso, I found that get_loadable_libraries() always read pg_replication_slots,\neven if the old cluster is older than PG17. This let additional checks for logical\ndecoding output plugins. Moreover, prior than PG12 could not be upgrade because\nthey do not have an attribute wal_status.\n\nI think the checking should be done only when old_cluster is >= PG17, so fixed.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 28 Aug 2023 13:01:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\n\nThank you for reviewing!\n\n> 1. check_and_dump_old_cluster\n> \n> CURRENT CODE (with v26-0003 patch applied)\n> \n> /* Extract a list of logical replication slots */\n> get_old_cluster_logical_slot_infos();\n> \n> ...\n> \n> /*\n> * Logical replication slots can be migrated since PG17. See comments atop\n> * get_old_cluster_logical_slot_infos().\n> */\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n> {\n> check_old_cluster_for_lost_slots();\n> \n> /*\n> * Do additional checks if a live check is not required. This requires\n> * that confirmed_flush_lsn of all the slots is the same as the latest\n> * checkpoint location, but it would be satisfied only when the server\n> * has been shut down.\n> */\n> if (!live_check)\n> check_old_cluster_for_confirmed_flush_lsn();\n> }\n> \n> \n> SUGGESTION\n> \n> /*\n> * Logical replication slots can be migrated since PG17. See comments atop\n> * get_old_cluster_logical_slot_infos().\n> */\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700) // NOTE 1a.\n> {\n> /* Extract a list of logical replication slots */\n> get_old_cluster_logical_slot_infos();\n> \n> if (count_old_cluster_slots()) // NOTE 1b.\n> {\n> check_old_cluster_for_lost_slots();\n> \n> /*\n> * Do additional checks if a live check is not required. This requires\n> * that confirmed_flush_lsn of all the slots is the same as the latest\n> * checkpoint location, but it would be satisfied only when the server\n> * has been shut down.\n> */\n> if (!live_check)\n> check_old_cluster_for_confirmed_flush_lsn();\n> }\n> }\n> \n> ~~\n> \n> Benefits:\n> \n> 1a.\n> One version check instead of multiple.\n> \n> ~\n> \n> 1b.\n> Upfront slot counting means\n> - only call 1 time to count_old_cluster_slots().\n> - unnecessary calls to other check* functions are avoided\n> \n> ~\n> \n> 1c.\n> get_old_cluster_logical_slot_infos\n> - No version check is needed.\n> \n> check_old_cluster_for_lost_slots\n> - Call to count_old_cluster_slots is not needed\n> - Quick exit not needed.\n> \n> check_old_cluster_for_confirmed_flush_lsn\n> - Call to count_old_cluster_slots is not needed\n> - Quick exit not needed.\n> \n> ~~~\n> \n> 2. check_old_cluster_for_lost_slots\n> \n> + /* Quick exit if the cluster does not have logical slots. */\n> + if (count_old_cluster_logical_slots() == 0)\n> + return;\n> \n> Refer [1]#4. Can remove this because #1b above.\n>\n> 3. check_old_cluster_for_confirmed_flush_lsn\n> \n> + /* Quick exit if the cluster does not have logical slots. */\n> + if (count_old_cluster_logical_slots() == 0)\n> + return;\n> \n> Refer [1]#5. Can remove this because #1b above.\n\nIIUC these points were disagreed by Amit, so I would keep my code until he posts\nopinions.\n\n> 4. .../t/003_logical_replication_slots.pl\n> \n> /shipped/replicated/\n> \n> Kuroda-san 26/8 wrote:\n> You meant to say s/replicated/shipped/, right? Fixed.\n> \n> No, I meant what I wrote for [1]#7. I was referring to the word\n> \"shipped\" in the message 'check changes are shipped to the\n> subscriber'. Now there are 2 places to change instead of one.\n>\n\nOh, sorry for that. Both places was fixed.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 13:01:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Aug 28, 2023 at 1:01 PM Peter Smith <[email protected]> wrote:\n>\n> Hi, here are my review comments for v26-0003\n>\n> It seems I must defend some of my previous suggestions from v25* [1],\n> so here goes...\n>\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 1. check_and_dump_old_cluster\n>\n> CURRENT CODE (with v26-0003 patch applied)\n>\n> /* Extract a list of logical replication slots */\n> get_old_cluster_logical_slot_infos();\n>\n> ...\n>\n> /*\n> * Logical replication slots can be migrated since PG17. See comments atop\n> * get_old_cluster_logical_slot_infos().\n> */\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n> {\n> check_old_cluster_for_lost_slots();\n>\n> /*\n> * Do additional checks if a live check is not required. This requires\n> * that confirmed_flush_lsn of all the slots is the same as the latest\n> * checkpoint location, but it would be satisfied only when the server\n> * has been shut down.\n> */\n> if (!live_check)\n> check_old_cluster_for_confirmed_flush_lsn();\n> }\n>\n>\n> SUGGESTION\n>\n> /*\n> * Logical replication slots can be migrated since PG17. See comments atop\n> * get_old_cluster_logical_slot_infos().\n> */\n> if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700) // NOTE 1a.\n> {\n> /* Extract a list of logical replication slots */\n> get_old_cluster_logical_slot_infos();\n>\n> if (count_old_cluster_slots()) // NOTE 1b.\n> {\n> check_old_cluster_for_lost_slots();\n>\n> /*\n> * Do additional checks if a live check is not required. This requires\n> * that confirmed_flush_lsn of all the slots is the same as the latest\n> * checkpoint location, but it would be satisfied only when the server\n> * has been shut down.\n> */\n> if (!live_check)\n> check_old_cluster_for_confirmed_flush_lsn();\n> }\n> }\n>\n\nI think a slightly better way to achieve this is to combine the code\nfrom check_old_cluster_for_lost_slots() and\ncheck_old_cluster_for_confirmed_flush_lsn() into\ncheck_old_cluster_for_valid_slots(). That will even save us a new\nconnection for the second check.\n\nAlso, I think we can simplify another check in the patch:\n@@ -1446,8 +1446,10 @@ check_new_cluster_logical_replication_slots(void)\n char *wal_level;\n\n /* Logical slots can be migrated since PG17. */\n- if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n- nslots = count_old_cluster_logical_slots();\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n+\n+ nslots = count_old_cluster_logical_slots();\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Aug 2023 11:54:18 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
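Below is a minimal sketch of the early-return style suggested in the message above; the function and helper names are taken from the quoted diff, while the body itself is illustrative rather than the posted patch:

```
static void
check_new_cluster_logical_replication_slots(void)
{
    int         nslots;

    /* Logical slots can be migrated since PG17; quick return otherwise. */
    if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)
        return;

    nslots = count_old_cluster_logical_slots();

    /* Quick return if there are no logical slots to be migrated. */
    if (nslots == 0)
        return;

    /* ... then check wal_level and max_replication_slots on the new cluster ... */
}
```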
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for giving comments! PSA new version. I ran the pgindent.\r\n\r\n> > 1. check_and_dump_old_cluster\r\n> >\r\n> > CURRENT CODE (with v26-0003 patch applied)\r\n> >\r\n> > /* Extract a list of logical replication slots */\r\n> > get_old_cluster_logical_slot_infos();\r\n> >\r\n> > ...\r\n> >\r\n> > /*\r\n> > * Logical replication slots can be migrated since PG17. See comments atop\r\n> > * get_old_cluster_logical_slot_infos().\r\n> > */\r\n> > if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\r\n> > {\r\n> > check_old_cluster_for_lost_slots();\r\n> >\r\n> > /*\r\n> > * Do additional checks if a live check is not required. This requires\r\n> > * that confirmed_flush_lsn of all the slots is the same as the latest\r\n> > * checkpoint location, but it would be satisfied only when the server\r\n> > * has been shut down.\r\n> > */\r\n> > if (!live_check)\r\n> > check_old_cluster_for_confirmed_flush_lsn();\r\n> > }\r\n> >\r\n> >\r\n> > SUGGESTION\r\n> >\r\n> > /*\r\n> > * Logical replication slots can be migrated since PG17. See comments atop\r\n> > * get_old_cluster_logical_slot_infos().\r\n> > */\r\n> > if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700) // NOTE 1a.\r\n> > {\r\n> > /* Extract a list of logical replication slots */\r\n> > get_old_cluster_logical_slot_infos();\r\n> >\r\n> > if (count_old_cluster_slots()) // NOTE 1b.\r\n> > {\r\n> > check_old_cluster_for_lost_slots();\r\n> >\r\n> > /*\r\n> > * Do additional checks if a live check is not required. This requires\r\n> > * that confirmed_flush_lsn of all the slots is the same as the latest\r\n> > * checkpoint location, but it would be satisfied only when the server\r\n> > * has been shut down.\r\n> > */\r\n> > if (!live_check)\r\n> > check_old_cluster_for_confirmed_flush_lsn();\r\n> > }\r\n> > }\r\n> >\r\n> \r\n> I think a slightly better way to achieve this is to combine the code\r\n> from check_old_cluster_for_lost_slots() and\r\n> check_old_cluster_for_confirmed_flush_lsn() into\r\n> check_old_cluster_for_valid_slots(). That will even save us a new\r\n> connection for the second check.\r\n\r\nThey are combined into one function.\r\n\r\n> Also, I think we can simplify another check in the patch:\r\n> @@ -1446,8 +1446,10 @@ check_new_cluster_logical_replication_slots(void)\r\n> char *wal_level;\r\n> \r\n> /* Logical slots can be migrated since PG17. */\r\n> - if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\r\n> - nslots = count_old_cluster_logical_slots();\r\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\r\n> + return;\r\n> +\r\n> + nslots = count_old_cluster_logical_slots();\r\n>\r\n\r\nFixed.\r\n\r\nAlso, I have tested the combination of this patch and the physical standby.\r\n\r\n1. Logical slots defined on old physical standby *cannot be upgraded*\r\n2. Logical slots defined on physical primary *are migrated* to new physical standby\r\n\r\nThe primal reason is that pg_upgrade cannot be used for physical standby. If\r\nusers want to upgrade standby, rsync command is used instead. The command\r\ncreates the cluster based on the based on the new primary, hence they are\r\nreplicated to new standby. In contrast, the old cluster is basically ignored so\r\nthat slots on old cluster is not upgraded. I updated the doc accordingly.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 29 Aug 2023 11:58:31 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some minor review comments for patch v28-0002\n\n======\nsrc/sgml/ref/pgupgrade.sgml\n\n1.\n- with the primary.) Replication slots are not copied and must\n- be recreated.\n+ with the primary.) Replication slots on old standby are not copied.\n+ Only logical slots on the primary are migrated to the new standby,\n+ and other slots must be recreated.\n </para>\n\n/on old standby/on the old standby/\n\n======\nsrc/bin/pg_upgrade/info.c\n\n2. get_old_cluster_logical_slot_infos\n\n+void\n+get_old_cluster_logical_slot_infos(void)\n+{\n+ int dbnum;\n+\n+ /* Logical slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n+\n+ pg_log(PG_VERBOSE, \"\\nsource databases:\");\n+\n+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n+ {\n+ DbInfo *pDbInfo = &old_cluster.dbarr.dbs[dbnum];\n+\n+ get_old_cluster_logical_slot_infos_per_db(pDbInfo);\n+\n+ if (log_opts.verbose)\n+ {\n+ pg_log(PG_VERBOSE, \"Database: \\\"%s\\\"\", pDbInfo->db_name);\n+ print_slot_infos(&pDbInfo->slot_arr);\n+ }\n+ }\n+}\n\nIt might be worth putting an Assert before calling the\nget_old_cluster_logical_slot_infos_per_db(...) just as a sanity check:\nAssert(pDbInfo->slot_arr.nslots == 0);\n\nThis also helps to better document the \"Note\" of the\ncount_old_cluster_logical_slots() function comment.\n\n~~~\n\n3. count_old_cluster_logical_slots\n\n+/*\n+ * count_old_cluster_logical_slots()\n+ *\n+ * Sum up and return the number of logical replication slots for all databases.\n+ *\n+ * Note: this function always returns 0 if the old_cluster is PG16 and prior\n+ * because old_cluster.dbarr.dbs[dbnum].slot_arr is set only for PG17 and\n+ * later.\n+ */\n+int\n+count_old_cluster_logical_slots(void)\n\nMaybe that \"Note\" should be expanded a bit to say who does this:\n\nSUGGESTION\n\nNote: This function always returns 0 if the old_cluster is PG16 and\nprior because old_cluster.dbarr.dbs[dbnum].slot_arr is set only for\nPG17 and later. See where get_old_cluster_logical_slot_infos_per_db()\nis called.\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.c\n\n4.\n+ /*\n+ * Logical replication slot upgrade only supported for old_cluster >=\n+ * PG17.\n+ *\n+ * Note: This must be done after doing the pg_resetwal command because\n+ * pg_resetwal would remove required WALs.\n+ */\n+ if (count_old_cluster_logical_slots())\n+ {\n+ start_postmaster(&new_cluster, true);\n+ create_logical_replication_slots();\n+ stop_postmaster(false);\n+ }\n+\n\n4a.\nI felt this comment needs a bit more detail otherwise you can't tell\nhow the >= PG17 version check works.\n\n4b.\n/slot upgrade only supported/slot upgrade is only supported/\n\n~\n\nSUGGESTION\n\nLogical replication slot upgrade is only supported for old_cluster >=\nPG17. An explicit version check is not necessary here because function\ncount_old_cluster_logical_slots() will always return 0 for old_cluster\n<= PG16.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 30 Aug 2023 12:24:43 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san.\n\nHere are some review comments for v28-0003.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_and_dump_old_cluster\n+ /*\n+ * Logical replication slots can be migrated since PG17. See comments atop\n+ * get_old_cluster_logical_slot_infos().\n+ */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n+ check_old_cluster_for_valid_slots(live_check);\n+\n\nIIUC we are preferring to use the <= 1600 style of version check\ninstead of >= 1700 where possible.\n\nSo this comment and version check ought to be removed from here, and\ndone inside check_old_cluster_for_valid_slots() instead.\n\n~~~\n\n2. check_old_cluster_for_valid_slots\n\n+/*\n+ * check_old_cluster_for_valid_slots()\n+ *\n+ * Make sure logical replication slots can be migrated to new cluster.\n+ * Following points are checked:\n+ *\n+ * - All logical replication slots are usable.\n+ * - All logical replication slots consumed all WALs, except a\n+ * CHECKPOINT_SHUTDOWN record.\n+ */\n+static void\n+check_old_cluster_for_valid_slots(bool live_check)\n\nI suggested in the previous comment above (#1) that the version check\nshould be moved into this function.\n\nTherefore, this function comment now should also mention slot upgrade\nis only allowed for >= PG17\n\n~~~\n\n3.\n+static void\n+check_old_cluster_for_valid_slots(bool live_check)\n+{\n+ int i,\n+ ntups,\n+ i_slotname;\n+ PGresult *res;\n+ DbInfo *active_db = &old_cluster.dbarr.dbs[0];\n+ PGconn *conn;\n+\n+ /* Quick exit if the cluster does not have logical slots. */\n+ if (count_old_cluster_logical_slots() == 0)\n+ return;\n\n3a.\nSee comment #1. At the top of this function body there should be a\nversion check like:\n\nif (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\nreturn;\n\n~\n\n3b.\n/Quick exit/Quick return/\n\n~\n\n4.\n+ prep_status(\"Checking for logical replication slots\");\n\nI felt that should add the word \"valid\" like:\n\"Checking for valid logical replication slots\"\n\n~~~\n\n5.\n+ /* Check there are no logical replication slots with a 'lost' state. */\n+ res = executeQueryOrDie(conn,\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE wal_status = 'lost' AND \"\n+ \"temporary IS FALSE;\");\n\nSince the SQL is checking if there *are* lost slots I felt it would be\nmore natural to reverse that comment.\n\nSUGGESTION\n/* Check and reject if there are any logical replication slots with a\n'lost' state. */\n\n~~~\n\n6.\n+ /*\n+ * Do additional checks if a live check is not required. This requires\n+ * that confirmed_flush_lsn of all the slots is the same as the latest\n+ * checkpoint location, but it would be satisfied only when the server has\n+ * been shut down.\n+ */\n+ if (!live_check)\n\nI think the comment can be rearranged slightly:\n\nSUGGESTION\nDo additional checks to ensure that 'confirmed_flush_lsn' of all the\nslots is the same as the latest checkpoint location.\nNote: This can be satisfied only when the old_cluster has been shut\ndown, so we skip this for \"live\" checks.\n\n======\nsrc/bin/pg_upgrade/controldata.c\n\n7.\n+ /*\n+ * Read the latest checkpoint location if the cluster is PG17\n+ * or later. This is used for upgrading logical replication\n+ * slots.\n+ */\n+ if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n+ {\n\nFetching this \"Latest checkpoint location:\" value is only needed for\nthe check_old_cluster_for_valid_slots validation check, isn't it? 
But\nAFAICT this code is common for both old_cluster and new_cluster.\n\nI am not sure what is best to do:\n- Do only the minimal logic needed?\n- Read the value redundantly even for new_cluster just to keep code simpler?\n\nEither way, maybe the comment should say something about this.\n\n======\n.../t/003_logical_replication_slots.pl\n\n8. Consider adding one more test\n\nMaybe there should also be some \"live check\" test performed (e.g.\nusing --check, and a running old_cluster).\n\nThis would demonstrate pg_upgrade working successfully even when the\nWAL records are not consumed (because LSN checks would be skipped in\ncheck_old_cluster_for_valid_slots function).\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 30 Aug 2023 15:27:55 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 7:55 AM Peter Smith <[email protected]> wrote:\n>\n> Here are some minor review comments for patch v28-0002\n>\n> ======\n> src/sgml/ref/pgupgrade.sgml\n>\n> 1.\n> - with the primary.) Replication slots are not copied and must\n> - be recreated.\n> + with the primary.) Replication slots on old standby are not copied.\n> + Only logical slots on the primary are migrated to the new standby,\n> + and other slots must be recreated.\n> </para>\n>\n> /on old standby/on the old standby/\n>\n\nFixed.\n\n> ======\n> src/bin/pg_upgrade/info.c\n>\n> 2. get_old_cluster_logical_slot_infos\n>\n> +void\n> +get_old_cluster_logical_slot_infos(void)\n> +{\n> + int dbnum;\n> +\n> + /* Logical slots can be migrated since PG17. */\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> + return;\n> +\n> + pg_log(PG_VERBOSE, \"\\nsource databases:\");\n> +\n> + for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n> + {\n> + DbInfo *pDbInfo = &old_cluster.dbarr.dbs[dbnum];\n> +\n> + get_old_cluster_logical_slot_infos_per_db(pDbInfo);\n> +\n> + if (log_opts.verbose)\n> + {\n> + pg_log(PG_VERBOSE, \"Database: \\\"%s\\\"\", pDbInfo->db_name);\n> + print_slot_infos(&pDbInfo->slot_arr);\n> + }\n> + }\n> +}\n>\n> It might be worth putting an Assert before calling the\n> get_old_cluster_logical_slot_infos_per_db(...) just as a sanity check:\n> Assert(pDbInfo->slot_arr.nslots == 0);\n>\n> This also helps to better document the \"Note\" of the\n> count_old_cluster_logical_slots() function comment.\n>\n\nI have changed the comments atop count_old_cluster_logical_slots() and\nalso I don't see the need for this Assert.\n\n> ~~~\n>\n> 3. count_old_cluster_logical_slots\n>\n> +/*\n> + * count_old_cluster_logical_slots()\n> + *\n> + * Sum up and return the number of logical replication slots for all databases.\n> + *\n> + * Note: this function always returns 0 if the old_cluster is PG16 and prior\n> + * because old_cluster.dbarr.dbs[dbnum].slot_arr is set only for PG17 and\n> + * later.\n> + */\n> +int\n> +count_old_cluster_logical_slots(void)\n>\n> Maybe that \"Note\" should be expanded a bit to say who does this:\n>\n> SUGGESTION\n>\n> Note: This function always returns 0 if the old_cluster is PG16 and\n> prior because old_cluster.dbarr.dbs[dbnum].slot_arr is set only for\n> PG17 and later. See where get_old_cluster_logical_slot_infos_per_db()\n> is called.\n>\n\nChanged, but written differently because saying in terms of variable\nname doesn't sound good to me.\n\n> ======\n> src/bin/pg_upgrade/pg_upgrade.c\n>\n> 4.\n> + /*\n> + * Logical replication slot upgrade only supported for old_cluster >=\n> + * PG17.\n> + *\n> + * Note: This must be done after doing the pg_resetwal command because\n> + * pg_resetwal would remove required WALs.\n> + */\n> + if (count_old_cluster_logical_slots())\n> + {\n> + start_postmaster(&new_cluster, true);\n> + create_logical_replication_slots();\n> + stop_postmaster(false);\n> + }\n> +\n>\n> 4a.\n> I felt this comment needs a bit more detail otherwise you can't tell\n> how the >= PG17 version check works.\n>\n> 4b.\n> /slot upgrade only supported/slot upgrade is only supported/\n>\n> ~\n>\n> SUGGESTION\n>\n> Logical replication slot upgrade is only supported for old_cluster >=\n> PG17. 
An explicit version check is not necessary here because function\n> count_old_cluster_logical_slots() will always return 0 for old_cluster\n> <= PG16.\n>\n\nI don't see the need to explain anything about version check here, so\nremoved that part of the comment.\n\nApart from this, I have addressed some of the comments raised by you\nfor the 0003 patch. Please find the diff patch attached. I think we\nshould combine 0002 and 0003 patches.\n\nI have another comment on the patch:\n+ /* Check there are no logical replication slots with a 'lost' state. */\n+ res = executeQueryOrDie(conn,\n+ \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE wal_status = 'lost' AND \"\n+ \"temporary IS FALSE;\");\n\nIn this place, shouldn't we explicitly check for slot_type as logical?\nI think we should consistently check for slot_type in all the queries\nused in this patch.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 31 Aug 2023 16:04:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
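Combining the review wording above with the suggestion to also filter on slot_type, the validation fragment under discussion would look roughly like this (the error text and surrounding declarations are placeholders, not the posted patch):

```
/*
 * Check and reject if any logical replication slot has already been
 * invalidated ("lost"); temporary slots are ignored because they cannot
 * survive the upgrade.
 */
res = executeQueryOrDie(conn,
                        "SELECT slot_name FROM pg_catalog.pg_replication_slots "
                        "WHERE slot_type = 'logical' AND "
                        "wal_status = 'lost' AND "
                        "temporary IS FALSE;");

if (PQntuples(res) > 0)
    pg_fatal("logical replication slot \"%s\" is in 'lost' state and cannot be upgraded",
             PQgetvalue(res, 0, 0));

PQclear(res);
```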
{
"msg_contents": "On Wed, Aug 30, 2023 at 10:58 AM Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for v28-0003.\n>\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 1. check_and_dump_old_cluster\n> + /*\n> + * Logical replication slots can be migrated since PG17. See comments atop\n> + * get_old_cluster_logical_slot_infos().\n> + */\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) >= 1700)\n> + check_old_cluster_for_valid_slots(live_check);\n> +\n>\n> IIUC we are preferring to use the <= 1600 style of version check\n> instead of >= 1700 where possible.\n>\n\nYeah, but in this case, following the nearby code style, I think it is\nokay to keep it as it is.\n\n> ~\n>\n> 3b.\n> /Quick exit/Quick return/\n>\n\nHmm, either way should be okay.\n\n> ~\n>\n> 4.\n> + prep_status(\"Checking for logical replication slots\");\n>\n> I felt that should add the word \"valid\" like:\n> \"Checking for valid logical replication slots\"\n>\n\nAgreed and fixed.\n\n> ~~~\n>\n> 5.\n> + /* Check there are no logical replication slots with a 'lost' state. */\n> + res = executeQueryOrDie(conn,\n> + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\n> + \"WHERE wal_status = 'lost' AND \"\n> + \"temporary IS FALSE;\");\n>\n> Since the SQL is checking if there *are* lost slots I felt it would be\n> more natural to reverse that comment.\n>\n> SUGGESTION\n> /* Check and reject if there are any logical replication slots with a\n> 'lost' state. */\n>\n\nI changed the comments but differently.\n\n> ~~~\n>\n> 6.\n> + /*\n> + * Do additional checks if a live check is not required. This requires\n> + * that confirmed_flush_lsn of all the slots is the same as the latest\n> + * checkpoint location, but it would be satisfied only when the server has\n> + * been shut down.\n> + */\n> + if (!live_check)\n>\n> I think the comment can be rearranged slightly:\n>\n> SUGGESTION\n> Do additional checks to ensure that 'confirmed_flush_lsn' of all the\n> slots is the same as the latest checkpoint location.\n> Note: This can be satisfied only when the old_cluster has been shut\n> down, so we skip this for \"live\" checks.\n>\n\nChanged as per suggestion.\n\n> ======\n> src/bin/pg_upgrade/controldata.c\n>\n> 7.\n> + /*\n> + * Read the latest checkpoint location if the cluster is PG17\n> + * or later. This is used for upgrading logical replication\n> + * slots.\n> + */\n> + if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n> + {\n>\n> Fetching this \"Latest checkpoint location:\" value is only needed for\n> the check_old_cluster_for_valid_slots validation check, isn't it? But\n> AFAICT this code is common for both old_cluster and new_cluster.\n>\n> I am not sure what is best to do:\n> - Do only the minimal logic needed?\n> - Read the value redundantly even for new_cluster just to keep code simpler?\n>\n> Either way, maybe the comment should say something about this.\n>\n\nAdded the comment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 31 Aug 2023 16:17:41 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 5:28 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n\nSome comments in 0002\n\n1.\n+ res = executeQueryOrDie(conn, \"SELECT slot_name \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE slot_type = 'logical' AND \"\n+ \"temporary IS FALSE;\");\n\nWhat is the reason we are ignoring temporary slots here? I think we\nbetter explain in the comments.\n\n2.\n+ res = executeQueryOrDie(conn, \"SELECT slot_name \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE slot_type = 'logical' AND \"\n+ \"temporary IS FALSE;\");\n+\n+ if (PQntuples(res))\n+ pg_fatal(\"New cluster must not have logical replication slots but\nfound \\\"%s\\\"\",\n+ PQgetvalue(res, 0, 0));\n\nIt looks a bit odd to me that first it is fetching all the logical\nslots from the new cluster and then printing the name of one of the\nslots. If it is printing the name of the slots then shouldn't it be\nprinting all the slots' names or it should just say that there\nexisting slots on the new cluster without giving any names? And if we\nare planning for option 2 i.e. not printing the name then better to\nput LIMIT 1 at the end of the query.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 31 Aug 2023 18:52:59 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Dilip,\r\n\r\nThanks for giving comments!\r\n\r\n> Some comments in 0002\r\n> \r\n> 1.\r\n> + res = executeQueryOrDie(conn, \"SELECT slot_name \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE slot_type = 'logical' AND \"\r\n> + \"temporary IS FALSE;\");\r\n> \r\n> What is the reason we are ignoring temporary slots here? I think we\r\n> better explain in the comments.\r\n\r\nThe temporary slots were expressly ignored while checking because such slots\r\ncannot exist after the upgrade. Before doing pg_upgrade, both old and new cluster\r\nmust be turned off, and they start/stop several times during the upgrade.\r\n\r\nHow do you think?\r\n\r\n> 2.\r\n> + res = executeQueryOrDie(conn, \"SELECT slot_name \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE slot_type = 'logical' AND \"\r\n> + \"temporary IS FALSE;\");\r\n> +\r\n> + if (PQntuples(res))\r\n> + pg_fatal(\"New cluster must not have logical replication slots but\r\n> found \\\"%s\\\"\",\r\n> + PQgetvalue(res, 0, 0));\r\n> \r\n> It looks a bit odd to me that first it is fetching all the logical\r\n> slots from the new cluster and then printing the name of one of the\r\n> slots. If it is printing the name of the slots then shouldn't it be\r\n> printing all the slots' names or it should just say that there\r\n> existing slots on the new cluster without giving any names? And if we\r\n> are planning for option 2 i.e. not printing the name then better to\r\n> put LIMIT 1 at the end of the query.\r\n\r\nI'm planning to change that the number of slots are reported by using count(*).\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 31 Aug 2023 14:26:21 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 7:56 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n\n> Thanks for giving comments!\n\nThanks\n\n> > Some comments in 0002\n> >\n> > 1.\n> > + res = executeQueryOrDie(conn, \"SELECT slot_name \"\n> > + \"FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE slot_type = 'logical' AND \"\n> > + \"temporary IS FALSE;\");\n> >\n> > What is the reason we are ignoring temporary slots here? I think we\n> > better explain in the comments.\n>\n> The temporary slots were expressly ignored while checking because such slots\n> cannot exist after the upgrade. Before doing pg_upgrade, both old and new cluster\n> must be turned off, and they start/stop several times during the upgrade.\n>\n> How do you think?\n\nLGTM\n\n>\n> > 2.\n> > + res = executeQueryOrDie(conn, \"SELECT slot_name \"\n> > + \"FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE slot_type = 'logical' AND \"\n> > + \"temporary IS FALSE;\");\n> > +\n> > + if (PQntuples(res))\n> > + pg_fatal(\"New cluster must not have logical replication slots but\n> > found \\\"%s\\\"\",\n> > + PQgetvalue(res, 0, 0));\n> >\n> > It looks a bit odd to me that first it is fetching all the logical\n> > slots from the new cluster and then printing the name of one of the\n> > slots. If it is printing the name of the slots then shouldn't it be\n> > printing all the slots' names or it should just say that there\n> > existing slots on the new cluster without giving any names? And if we\n> > are planning for option 2 i.e. not printing the name then better to\n> > put LIMIT 1 at the end of the query.\n>\n> I'm planning to change that the number of slots are reported by using count(*).\n\nYeah, that seems a better option.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 09:47:53 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
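A rough sketch of the count(*)-based report agreed on above; the variable name and message wording are placeholders and the actual patch may differ:

```
int         nslots_on_new;

/* Report how many logical slots already exist on the new cluster. */
res = executeQueryOrDie(conn,
                        "SELECT count(*) "
                        "FROM pg_catalog.pg_replication_slots "
                        "WHERE slot_type = 'logical' AND "
                        "temporary IS FALSE;");

nslots_on_new = atoi(PQgetvalue(res, 0, 0));

if (nslots_on_new)
    pg_fatal("New cluster must not have logical replication slots but found %d",
             nslots_on_new);

PQclear(res);
```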
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for giving comments! PSA new version.\r\nI replied only comment 8 because others were replied by Amit.\r\n\r\n> .../t/003_logical_replication_slots.pl\r\n> \r\n> 8. Consider adding one more test\r\n> \r\n> Maybe there should also be some \"live check\" test performed (e.g.\r\n> using --check, and a running old_cluster).\r\n> \r\n> This would demonstrate pg_upgrade working successfully even when the\r\n> WAL records are not consumed (because LSN checks would be skipped in\r\n> check_old_cluster_for_valid_slots function).\r\n\r\nI was ignored the case because it did not improve improve code coverage, but\r\nindeed, no one has checked the feature. I'm still not sure what should be, but\r\nadded. I want to hear your opinions.\r\n\r\n\r\n\r\nFurthermore, based on comments from Dilip [1], added the comment and\r\ncheck_new_cluster_logical_replication_slots() was modified. IIUC pg_upgrade\r\ndoes not have method to handle plural form, so if-statement was used.\r\nIf you have better options, please tell me.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAFiTN-tgm9wCTyG4co%2BVZhyFTnzh-KoPtYbuH9bRFmxroJ34EQ%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 1 Sep 2023 04:46:18 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for giving suggestions! I think your fixes are good.\r\nNew patch set can be available in [1].\r\n\r\n> Apart from this, I have addressed some of the comments raised by you\r\n> for the 0003 patch. Please find the diff patch attached. I think we\r\n> should combine 0002 and 0003 patches.\r\n\r\nYeah, combined.\r\n\r\n> I have another comment on the patch:\r\n> + /* Check there are no logical replication slots with a 'lost' state. */\r\n> + res = executeQueryOrDie(conn,\r\n> + \"SELECT slot_name FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE wal_status = 'lost' AND \"\r\n> + \"temporary IS FALSE;\");\r\n> \r\n> In this place, shouldn't we explicitly check for slot_type as logical?\r\n> I think we should consistently check for slot_type in all the queries\r\n> used in this patch.\r\n\r\nSeems right, the condition was added to all the place.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866CDC13CA9D6B9F4451606F5E4A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 1 Sep 2023 04:47:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 9:47 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Aug 31, 2023 at 7:56 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n>\nSome more comments on 0002\n\n1.\n+ conn = connectToServer(&new_cluster, \"template1\");\n+\n+ prep_status(\"Checking for logical replication slots\");\n+\n+ res = executeQueryOrDie(conn, \"SELECT slot_name \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE slot_type = 'logical' AND \"\n+ \"temporary IS FALSE;\");\n\n\nI think we should add some comment saying this query will only fetch\nlogical slots because the database name will always be NULL in the\nphysical slots. Otherwise looking at the query it is very confusing\nhow it is avoiding the physical slots.\n\n2.\n+void\n+get_old_cluster_logical_slot_infos(void)\n+{\n+ int dbnum;\n+\n+ /* Logical slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n+\n+ pg_log(PG_VERBOSE, \"\\nsource databases:\");\n\nI think we need to change some headings like \"slot info source\ndatabases:\" Or add an extra message saying printing slot information.\n\nBefore this patch, we were printing all the relation information so\nmessage ordering was quite clear e.g.\n\nsource databases:\nDatabase: \"template1\"\nrelname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\nrelname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683,\nreltblspace: \"\"\nDatabase: \"postgres\"\nrelname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\nrelname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683,\nreltblspace: \"\"\n\nBut after this patch slot information is also getting printed in a\nsimilar fashion so it's very confusing now. Refer\nget_db_and_rel_infos() for how it is fetching all the relation\ninformation first and then printing them.\n\n\n\n\n3. One more problem is that the slot information and the execute query\nmessages are intermingled so it becomes more confusing, see the below\nexample of the latest messaging. I think ideally we should execute\nthese queries first\nand then print all slot information together instead of intermingling\nthe messages.\n\nsource databases:\nexecuting: SELECT pg_catalog.set_config('search_path', '', false);\nexecuting: SELECT slot_name, plugin, two_phase FROM\npg_catalog.pg_replication_slots WHERE wal_status <> 'lost' AND\ndatabase = current_database() AND temporary IS FALSE;\nDatabase: \"template1\"\nexecuting: SELECT pg_catalog.set_config('search_path', '', false);\nexecuting: SELECT slot_name, plugin, two_phase FROM\npg_catalog.pg_replication_slots WHERE wal_status <> 'lost' AND\ndatabase = current_database() AND temporary IS FALSE;\nDatabase: \"postgres\"\nslotname: \"isolation_slot1\", plugin: \"pgoutput\", two_phase: 0\n\n4. Looking at the above two comments I feel that now the order should be like\n- Fetch all the db infos\nget_db_infos()\n- loop\n get_rel_infos()\n get_old_cluster_logical_slot_infos()\n\n-- and now print relation and slot information per database\n print_db_infos()\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 11:00:48 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
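Dilip's item 4 above, expressed as a rough C outline; the function names come from pg_upgrade's info.c and this thread, but the signatures and body here are simplified for illustration:

```
void
get_db_and_rel_infos(ClusterInfo *cluster)
{
    int         dbnum;

    /* First fetch the list of databases ... */
    get_db_infos(cluster);

    /* ... then, per database, fetch relations and (old cluster only) slots ... */
    for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
    {
        get_rel_infos(cluster, &cluster->dbarr.dbs[dbnum]);

        if (cluster == &old_cluster)
            get_old_cluster_logical_slot_infos_per_db(&cluster->dbarr.dbs[dbnum]);
    }

    /* ... and only print afterwards, so queries and output do not interleave. */
    if (log_opts.verbose)
        print_db_infos(cluster);
}
```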
{
"msg_contents": "On Fri, Sep 1, 2023 at 10:16 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n\n+ /*\n+ * Note: This must be done after doing the pg_resetwal command because\n+ * pg_resetwal would remove required WALs.\n+ */\n+ if (count_old_cluster_logical_slots())\n+ {\n+ start_postmaster(&new_cluster, true);\n+ create_logical_replication_slots();\n+ stop_postmaster(false);\n+ }\n\nCan we combine this code with the code in the function\nissue_warnings_and_set_wal_level()? That will avoid starting/stopping\nthe server for creating slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Sep 2023 12:51:10 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some review comments for v29-0002\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_old_cluster_for_valid_slots\n\n+ /* Quick exit if the cluster does not have logical slots. */\n+ if (count_old_cluster_logical_slots() == 0)\n+ return;\n\n/Quick exit/Quick return/\n\nI know they are kind of the same, but the reason I previously\nsuggested this change was to keep it consistent with the similar\ncomment that is already in\ncheck_new_cluster_logical_replication_slots().\n\n~~~\n\n2. check_old_cluster_for_valid_slots\n\n+ /*\n+ * Do additional checks to ensure that confirmed_flush LSN of all the slots\n+ * is the same as the latest checkpoint location.\n+ *\n+ * Note: This can be satisfied only when the old cluster has been shut\n+ * down, so we skip this live checks.\n+ */\n+ if (!live_check)\n\nmissing word\n\n/skip this live checks./skip this for live checks./\n\n======\nsrc/bin/pg_upgrade/controldata.c\n\n3.\n+ /*\n+ * Read the latest checkpoint location if the cluster is PG17\n+ * or later. This is used for upgrading logical replication\n+ * slots. Currently, we need it only for the old cluster but\n+ * didn't add additional check for the similicity.\n+ */\n+ if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n\n/similicity/simplicity/\n\nSUGGESTION\nCurrently, we need it only for the old cluster but for simplicity\nchose not to have additional checks.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n4. get_old_cluster_logical_slot_infos_per_db\n\n+ /*\n+ * The temporary slots are expressly ignored while checking because such\n+ * slots cannot exist after the upgrade. During the upgrade, clusters are\n+ * started and stopped several times so that temporary slots will be\n+ * removed.\n+ */\n+ res = executeQueryOrDie(conn, \"SELECT slot_name, plugin, two_phase \"\n+ \"FROM pg_catalog.pg_replication_slots \"\n+ \"WHERE slot_type = 'logical' AND \"\n+ \"wal_status <> 'lost' AND \"\n+ \"database = current_database() AND \"\n+ \"temporary IS FALSE;\");\n\nIIUC, the removal of temp slots is just a side-effect of the\nstart/stop; not the *reason* for the start/stop. So, the last sentence\nneeds some modification\n\nBEFORE\nDuring the upgrade, clusters are started and stopped several times so\nthat temporary slots will be removed.\n\nSUGGESTION\nDuring the upgrade, clusters are started and stopped several times\ncausing any temporary slots to be removed.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 1 Sep 2023 17:49:21 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Dilip,\r\n\r\nThank you for reviewing! \r\n\r\n> \r\n> 1.\r\n> + conn = connectToServer(&new_cluster, \"template1\");\r\n> +\r\n> + prep_status(\"Checking for logical replication slots\");\r\n> +\r\n> + res = executeQueryOrDie(conn, \"SELECT slot_name \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE slot_type = 'logical' AND \"\r\n> + \"temporary IS FALSE;\");\r\n> \r\n> \r\n> I think we should add some comment saying this query will only fetch\r\n> logical slots because the database name will always be NULL in the\r\n> physical slots. Otherwise looking at the query it is very confusing\r\n> how it is avoiding the physical slots.\r\n\r\nHmm, the query you pointed out does not check the database of the slot...\r\nWe are fetching only logical slots by the condition \"slot_type = 'logical'\",\r\nI think it is too trivial to describe in the comment.\r\nJust to confirm - pg_replication_slots can see alls the slots even if the database\r\nis not current one.\r\n\r\n```\r\ntmp=# SELECT slot_name, slot_type, database FROM pg_replication_slots where database != current_database();\r\n slot_name | slot_type | database \r\n-----------+-----------+----------\r\n test | logical | postgres\r\n(1 row)\r\n```\r\n\r\nIf I misunderstood something, please tell me...\r\n\r\n> 2.\r\n> +void\r\n> +get_old_cluster_logical_slot_infos(void)\r\n> +{\r\n> + int dbnum;\r\n> +\r\n> + /* Logical slots can be migrated since PG17. */\r\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\r\n> + return;\r\n> +\r\n> + pg_log(PG_VERBOSE, \"\\nsource databases:\");\r\n> \r\n> I think we need to change some headings like \"slot info source\r\n> databases:\" Or add an extra message saying printing slot information.\r\n> \r\n> Before this patch, we were printing all the relation information so\r\n> message ordering was quite clear e.g.\r\n> \r\n> source databases:\r\n> Database: \"template1\"\r\n> relname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\r\n> relname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683,\r\n> reltblspace: \"\"\r\n> Database: \"postgres\"\r\n> relname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\r\n> relname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683,\r\n> reltblspace: \"\"\r\n> \r\n> But after this patch slot information is also getting printed in a\r\n> similar fashion so it's very confusing now. Refer\r\n> get_db_and_rel_infos() for how it is fetching all the relation\r\n> information first and then printing them.\r\n> \r\n> \r\n> \r\n> \r\n> 3. One more problem is that the slot information and the execute query\r\n> messages are intermingled so it becomes more confusing, see the below\r\n> example of the latest messaging. 
I think ideally we should execute\r\n> these queries first\r\n> and then print all slot information together instead of intermingling\r\n> the messages.\r\n> \r\n> source databases:\r\n> executing: SELECT pg_catalog.set_config('search_path', '', false);\r\n> executing: SELECT slot_name, plugin, two_phase FROM\r\n> pg_catalog.pg_replication_slots WHERE wal_status <> 'lost' AND\r\n> database = current_database() AND temporary IS FALSE;\r\n> Database: \"template1\"\r\n> executing: SELECT pg_catalog.set_config('search_path', '', false);\r\n> executing: SELECT slot_name, plugin, two_phase FROM\r\n> pg_catalog.pg_replication_slots WHERE wal_status <> 'lost' AND\r\n> database = current_database() AND temporary IS FALSE;\r\n> Database: \"postgres\"\r\n> slotname: \"isolation_slot1\", plugin: \"pgoutput\", two_phase: 0\r\n> \r\n> 4. Looking at the above two comments I feel that now the order should be like\r\n> - Fetch all the db infos\r\n> get_db_infos()\r\n> - loop\r\n> get_rel_infos()\r\n> get_old_cluster_logical_slot_infos()\r\n> \r\n> -- and now print relation and slot information per database\r\n> print_db_infos()\r\n\r\nFixed like that. It seems that we go back to old style...\r\nNow the debug prints are like below:\r\n\r\n```\r\nsource databases:\r\nDatabase: \"template1\"\r\nrelname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\r\nrelname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683, reltblspace: \"\"\r\nDatabase: \"postgres\"\r\nrelname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\r\nrelname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683, reltblspace: \"\"\r\nLogical replication slots within the database:\r\nslotname: \"old1\", plugin: \"test_decoding\", two_phase: 0\r\nslotname: \"old2\", plugin: \"test_decoding\", two_phase: 0\r\nslotname: \"old3\", plugin: \"test_decoding\", two_phase: 0\r\n```\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 1 Sep 2023 13:04:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n \r\nThanks for reviewing! New patch can be available in [1].\r\n\r\n> \r\n> ======\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 1. check_old_cluster_for_valid_slots\r\n> \r\n> + /* Quick exit if the cluster does not have logical slots. */\r\n> + if (count_old_cluster_logical_slots() == 0)\r\n> + return;\r\n> \r\n> /Quick exit/Quick return/\r\n> \r\n> I know they are kind of the same, but the reason I previously\r\n> suggested this change was to keep it consistent with the similar\r\n> comment that is already in\r\n> check_new_cluster_logical_replication_slots().\r\n\r\nFixed.\r\n\r\n> 2. check_old_cluster_for_valid_slots\r\n> \r\n> + /*\r\n> + * Do additional checks to ensure that confirmed_flush LSN of all the slots\r\n> + * is the same as the latest checkpoint location.\r\n> + *\r\n> + * Note: This can be satisfied only when the old cluster has been shut\r\n> + * down, so we skip this live checks.\r\n> + */\r\n> + if (!live_check)\r\n> \r\n> missing word\r\n> \r\n> /skip this live checks./skip this for live checks./\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/controldata.c\r\n> \r\n> 3.\r\n> + /*\r\n> + * Read the latest checkpoint location if the cluster is PG17\r\n> + * or later. This is used for upgrading logical replication\r\n> + * slots. Currently, we need it only for the old cluster but\r\n> + * didn't add additional check for the similicity.\r\n> + */\r\n> + if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\r\n> \r\n> /similicity/simplicity/\r\n> \r\n> SUGGESTION\r\n> Currently, we need it only for the old cluster but for simplicity\r\n> chose not to have additional checks.\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 4. get_old_cluster_logical_slot_infos_per_db\r\n> \r\n> + /*\r\n> + * The temporary slots are expressly ignored while checking because such\r\n> + * slots cannot exist after the upgrade. During the upgrade, clusters are\r\n> + * started and stopped several times so that temporary slots will be\r\n> + * removed.\r\n> + */\r\n> + res = executeQueryOrDie(conn, \"SELECT slot_name, plugin, two_phase \"\r\n> + \"FROM pg_catalog.pg_replication_slots \"\r\n> + \"WHERE slot_type = 'logical' AND \"\r\n> + \"wal_status <> 'lost' AND \"\r\n> + \"database = current_database() AND \"\r\n> + \"temporary IS FALSE;\");\r\n> \r\n> IIUC, the removal of temp slots is just a side-effect of the\r\n> start/stop; not the *reason* for the start/stop. So, the last sentence\r\n> needs some modification\r\n> \r\n> BEFORE\r\n> During the upgrade, clusters are started and stopped several times so\r\n> that temporary slots will be removed.\r\n> \r\n> SUGGESTION\r\n> During the upgrade, clusters are started and stopped several times\r\n> causing any temporary slots to be removed.\r\n>\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866F7D8ED15BA1E8E4A2AB0F5E4A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 1 Sep 2023 13:05:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! New patch can be available in [1].\r\n\r\n> + /*\r\n> + * Note: This must be done after doing the pg_resetwal command because\r\n> + * pg_resetwal would remove required WALs.\r\n> + */\r\n> + if (count_old_cluster_logical_slots())\r\n> + {\r\n> + start_postmaster(&new_cluster, true);\r\n> + create_logical_replication_slots();\r\n> + stop_postmaster(false);\r\n> + }\r\n> \r\n> Can we combine this code with the code in the function\r\n> issue_warnings_and_set_wal_level()? That will avoid starting/stopping\r\n> the server for creating slots.\r\n\r\nYeah, I can. But create_logical_replication_slots() must be done before doing\r\n\"initdb --sync-only\", so they put before that.\r\nThe name is setup_new_cluster().\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866F7D8ED15BA1E8E4A2AB0F5E4A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 1 Sep 2023 13:06:02 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 1, 2023 at 6:34 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Dilip,\n>\n> Thank you for reviewing!\n>\n> >\n> > 1.\n> > + conn = connectToServer(&new_cluster, \"template1\");\n> > +\n> > + prep_status(\"Checking for logical replication slots\");\n> > +\n> > + res = executeQueryOrDie(conn, \"SELECT slot_name \"\n> > + \"FROM pg_catalog.pg_replication_slots \"\n> > + \"WHERE slot_type = 'logical' AND \"\n> > + \"temporary IS FALSE;\");\n> >\n> >\n> > I think we should add some comment saying this query will only fetch\n> > logical slots because the database name will always be NULL in the\n> > physical slots. Otherwise looking at the query it is very confusing\n> > how it is avoiding the physical slots.\n>\n> Hmm, the query you pointed out does not check the database of the slot...\n> We are fetching only logical slots by the condition \"slot_type = 'logical'\",\n> I think it is too trivial to describe in the comment.\n> Just to confirm - pg_replication_slots can see alls the slots even if the database\n> is not current one.\n\nI think this is fine. Actually I posted comments based on v28 where\nthe query inside get_old_cluster_logical_slot_infos_per_db() function\nwas missing the condition on the slot_type = logical but while\ncommenting I quoted the wrong hunk from the code. Anyway the other\npart of the code which I intended is also fixed from v29 so all good.\nThanks :)\n\n> > 4. Looking at the above two comments I feel that now the order should be like\n> > - Fetch all the db infos\n> > get_db_infos()\n> > - loop\n> > get_rel_infos()\n> > get_old_cluster_logical_slot_infos()\n> >\n> > -- and now print relation and slot information per database\n> > print_db_infos()\n>\n> Fixed like that. It seems that we go back to old style...\n> Now the debug prints are like below:\n>\n> ```\n> source databases:\n> Database: \"template1\"\n> relname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\n> relname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683, reltblspace: \"\"\n> Database: \"postgres\"\n> relname: \"pg_catalog.pg_largeobject\", reloid: 2613, reltblspace: \"\"\n> relname: \"pg_catalog.pg_largeobject_loid_pn_index\", reloid: 2683, reltblspace: \"\"\n> Logical replication slots within the database:\n> slotname: \"old1\", plugin: \"test_decoding\", two_phase: 0\n> slotname: \"old2\", plugin: \"test_decoding\", two_phase: 0\n> slotname: \"old3\", plugin: \"test_decoding\", two_phase: 0\n\nYeah this looks good now.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Sep 2023 09:33:19 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Friday, September 1, 2023 9:05 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n\r\nHi,\r\n\r\nThanks for updating the patch.\r\nI have a comment about the check related to the wal_status.\r\n\r\nCurrently, there are few places where we check the wal_status of slots. e.g.\r\ncheck_old_cluster_for_valid_slots(),get_loadable_libraries(), and\r\nget_old_cluster_logical_slot_infos().\r\n\r\nBut as discussed in another thread[1]. There are some kind of WALs that will be\r\nwritten when pg_upgrade are checking the old cluster which could cause the wal\r\nsize to exceed the max_slot_wal_keep_size. In this case, checkpoint will remove\r\nthe wals required by slots and invalidate these slots(the wal_status get\r\nchanged as well).\r\n\r\nBased on this, it’s possible that the slots we get each time when checking\r\nwal_status are different, because they may get changed in between these checks.\r\nThis may not cause serious problems for now, because we will either copy all\r\nthe slots including ones invalidated when upgrading or we report ERROR. But I\r\nfeel it's better to get consistent result each time we check the slots to close\r\nthe possibility for problems in the future. So, I feel we could centralize the\r\ncheck for wal_status and slots fetch, so that even if some slots status changed\r\nafter that, it won't have a risk to affect our check. What do you think ?\r\n\r\n[1] https://www.postgresql.org/message-id/flat/CAA4eK1LLik2818uzYqS73O%2BHe5LK_%2B%3DkthyZ6hwT6oe9TuxycA%40mail.gmail.com#16efea0a76d623b1335e73fc1e28f5ef\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 5 Sep 2023 05:50:12 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou-san,\r\n\r\n> Based on this, it’s possible that the slots we get each time when checking\r\n> wal_status are different, because they may get changed in between these checks.\r\n> This may not cause serious problems for now, because we will either copy all\r\n> the slots including ones invalidated when upgrading or we report ERROR. But I\r\n> feel it's better to get consistent result each time we check the slots to close\r\n> the possibility for problems in the future. So, I feel we could centralize the\r\n> check for wal_status and slots fetch, so that even if some slots status changed\r\n> after that, it won't have a risk to affect our check. What do you think ?\r\n\r\nThank you for giving the suggestion! I agreed that to centralize checks, and I\r\nhad already started to modify. Here is the updated patch.\r\n\r\nIn this patch all slot infos are extracted in the get_old_cluster_logical_slot_infos(),\r\nupcoming functions uses them. Based on the change, two attributes confirmed_flush\r\nand wal_status were added in LogicalSlotInfo.\r\n\r\nIIUC we cannot use strcut List in the client codes, so structures and related\r\nfunctions are added in the function.c. These are used for extracting unique\r\nplugins, but it may be overkill because check_loadable_libraries() handle\r\nduplicated entries. If we can ignore duplicated entries, these functions can be\r\nremoved.\r\n\r\nAlso, for simplifying codes, only a first-met invalidated slot is output in the\r\ncheck_old_cluster_for_valid_slots(). Warning messages int the function were\r\nremoved. I think it may be enough because check_new_cluster_is_empty() do\r\nsimilar thing. Please tell me if it should be reverted...\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 5 Sep 2023 07:34:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
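For reference, the per-slot structure described above would have roughly this shape; this is only a sketch of the fields mentioned in the message, not the definition from the posted patch:

```
typedef struct
{
    char           *slotname;           /* slot name */
    char           *plugin;             /* output plugin */
    bool            two_phase;          /* can the slot decode 2PC? */
    XLogRecPtr      confirmed_flush;    /* confirmed_flush_lsn of the slot */
    WALAvailability wal_status;         /* availability of required WAL */
} LogicalSlotInfo;
```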
{
"msg_contents": "On Tuesday, September 5, 2023 3:35 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Dear Hou-san,\r\n> \r\n> > Based on this, it’s possible that the slots we get each time when\r\n> > checking wal_status are different, because they may get changed in between\r\n> these checks.\r\n> > This may not cause serious problems for now, because we will either\r\n> > copy all the slots including ones invalidated when upgrading or we\r\n> > report ERROR. But I feel it's better to get consistent result each\r\n> > time we check the slots to close the possibility for problems in the\r\n> > future. So, I feel we could centralize the check for wal_status and\r\n> > slots fetch, so that even if some slots status changed after that, it won't have\r\n> a risk to affect our check. What do you think ?\r\n> \r\n> Thank you for giving the suggestion! I agreed that to centralize checks, and I\r\n> had already started to modify. Here is the updated patch.\r\n> \r\n> In this patch all slot infos are extracted in the\r\n> get_old_cluster_logical_slot_infos(),\r\n> upcoming functions uses them. Based on the change, two attributes\r\n> confirmed_flush and wal_status were added in LogicalSlotInfo.\r\n> \r\n> IIUC we cannot use strcut List in the client codes, so structures and related\r\n> functions are added in the function.c. These are used for extracting unique\r\n> plugins, but it may be overkill because check_loadable_libraries() handle\r\n> duplicated entries. If we can ignore duplicated entries, these functions can be\r\n> removed.\r\n> \r\n> Also, for simplifying codes, only a first-met invalidated slot is output in the\r\n> check_old_cluster_for_valid_slots(). Warning messages int the function were\r\n> removed. I think it may be enough because check_new_cluster_is_empty() do\r\n> similar thing. Please tell me if it should be reverted...\r\n\r\nThank for updating the patch ! here are few comments.\r\n\r\n1.\r\n\r\n+\tres = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n+\twal_level = PQgetvalue(res, 0, 0);\r\n\r\n+\tres = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n+\twal_level = PQgetvalue(res, 0, 0);\r\n\r\nI think it would be better to do a sanity check using PQntuples() before\r\ncalling PQgetvalue() in above places.\r\n\r\n2.\r\n\r\n+/*\r\n+ * Helper function for get_old_cluster_logical_slot_infos()\r\n+ */\r\n+static WALAvailability\r\n+GetWALAvailabilityByString(const char *str)\r\n+{\r\n+\tWALAvailability status = WALAVAIL_INVALID_LSN;\r\n+\r\n+\tif (strcmp(str, \"reserved\") == 0)\r\n+\t\tstatus = WALAVAIL_RESERVED;\r\n\r\nNot a comment, but I am wondering if we could use conflicting field to do this\r\ncheck, so that we could avoid the new conversion function and structure\r\nmovement. What do you think ?\r\n\r\n\r\n3.\r\n\r\n+\t\t\tcurr->confirmed_flush = strtoLSN(\r\n+\t\t\t\t\t\t\t\t\t\t\t PQgetvalue(res,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tslotnum,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\ti_confirmed_flush),\r\n+\t\t\t\t\t\t\t\t\t\t\t &have_error);\r\n\r\nThe indention looks a bit unusual.\r\n\r\n4.\r\n+\t * XXX: As mentioned in comments atop get_output_plugins(), we may not\r\n+\t * have to consider the uniqueness of entries. If so, we can use\r\n+\t * count_old_cluster_logical_slots() instead of plugin_list_length().\r\n+\t */\r\n\r\nI think check_loadable_libraries() will avoid loading the same library, so it\r\nseems fine to skip duplicating the plugins and we can save some codes.\r\n\r\n----\r\n\t\t/* Did the library name change? Probe it. 
*/\r\n\t\tif (libnum == 0 || strcmp(lib, os_info.libraries[libnum - 1].name) != 0)\r\n----\r\n\r\nBut if we think duplicating them would be better, I feel we could use the\r\nSimpleStringList to store and duplicate the plugin name. get_output_plugins can\r\nreturn an array of the stringlist, each stringlist includes the plugins names\r\nin one db. I shared a rough POC patch to show how it works, the intention is to\r\navoid introducing our new plugin list API.\r\n\r\n5.\r\n\r\n+\tos_info.libraries = (LibraryInfo *) pg_malloc(\r\n+\t\t\t\t\t\t\t\t\t\t\t\t (totaltups + plugin_list_length(output_plugins)) * sizeof(LibraryInfo));\r\n\r\nIf we think this looks too long, maybe using pg_malloc_array can help.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Wed, 6 Sep 2023 03:17:52 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
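A small sketch of the sanity check suggested in item 1 above (the error message is a placeholder):

```
res = executeQueryOrDie(conn, "SHOW wal_level;");

/* Sanity check before reading the single expected row. */
if (PQntuples(res) != 1)
    pg_fatal("could not determine wal_level of the new cluster");

wal_level = PQgetvalue(res, 0, 0);
```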
{
"msg_contents": "Hi, here are some comments for patch v31-0002.\n\n======\nsrc/bin/pg_upgrade/controldata.c\n\n1. get_control_data\n\n+ if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n+ {\n+ bool have_error = false;\n+\n+ p = strchr(p, ':');\n+\n+ if (p == NULL || strlen(p) <= 1)\n+ pg_fatal(\"%d: controldata retrieval problem\", __LINE__);\n+\n+ p++; /* remove ':' char */\n+\n+ p = strpbrk(p, \"01234567890ABCDEF\");\n+\n+ if (p == NULL || strlen(p) <= 1)\n+ pg_fatal(\"%d: controldata retrieval problem\", __LINE__);\n+\n+ cluster->controldata.chkpnt_latest =\n+ strtoLSN(p, &have_error);\n\n1a.\nThe declaration assignment of 'have_error' is redundant because it\ngets overwritten before it is checked anyhow.\n\n~\n\n1b.\nIMO that first check logic should also be shifted to be *inside* the\nstrtoLSN and it would just return have_error true. This eliminates\nhaving 2x pg_fatal that have the same purpose.\n\n~~~\n\n2. strtoLSN\n\n+/*\n+ * Convert String to XLogRecPtr.\n+ *\n+ * This function is ported from pg_lsn_in_internal(). The function cannot be\n+ * called from client binaries.\n+ */\n+XLogRecPtr\n+strtoLSN(const char *str, bool *have_error)\n\nSUGGESTION (comment wording)\nThis function is ported from pg_lsn_in_internal() which cannot be\ncalled from client binaries.\n\n======\nsrc/bin/pg_upgrade/function.c\n\n3. struct plugin_list\n\n+typedef struct plugin_list\n+{\n+ int dbnum;\n+ char *plugin;\n+ struct plugin_list *next;\n+} plugin_list;\n\nI found that name confusing. IMO should be like 'plugin_list_elem'.\n\ne.g. it gets too strange in subsequent code:\n+ plugin_list *newentry = (plugin_list *) pg_malloc(sizeof(plugin_list));\n\n~~~\n\n4. is_plugin_unique\n\n+/* Has the given plugin already been listed? */\n+static bool\n+is_plugin_unique(plugin_list_head *listhead, const char *plugin)\n+{\n+ plugin_list *point;\n+\n+ /* Quick return if the head is NULL */\n+ if (listhead == NULL)\n+ return true;\n+\n+ /* Seek the plugin list */\n+ for (point = listhead->head; point; point = point->next)\n+ {\n+ if (strcmp(point->plugin, plugin) == 0)\n+ return false;\n+ }\n+\n+ return true;\n+}\n\nWhat's the meaning of the name 'point'? Maybe something generic like\n'cur' or similar is better?\n\n~~~\n\n5. get_output_plugins\n\n+/*\n+ * Load the list of unique output plugins.\n+ *\n+ * XXX: Currently, we extract the list of unique output plugins, but this may\n+ * be overkill. The list is used for two purposes - 1) to allocate the minimal\n+ * memory for the library list and 2) to skip storing duplicated plugin names.\n+ * However, the consumer check_loadable_libraries() can avoid double checks for\n+ * the same library. The above means that we can arrange output plugins without\n+ * considering their uniqueness, so that we can remove this function.\n+ */\n+static plugin_list_head *\n+get_output_plugins(void)\n+{\n+ plugin_list_head *head = NULL;\n+ int dbnum;\n+\n+ /* Quick return if there are no logical slots to be migrated. */\n+ if (count_old_cluster_logical_slots() == 0)\n+ return NULL;\n+\n+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n+ {\n+ LogicalSlotInfoArr *slot_arr = &old_cluster.dbarr.dbs[dbnum].slot_arr;\n+ int slotnum;\n+\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ {\n+ LogicalSlotInfo *slot = &slot_arr->slots[slotnum];\n+\n+ /* Add to the list if the plugin has not been listed yet */\n+ if (is_plugin_unique(head, slot->plugin))\n+ add_plugin_list_item(&head, dbnum, slot->plugin);\n+ }\n+ }\n+\n+ return head;\n+}\n\nAbout the XXX. 
Yeah, since the uniqueness seems checked later anyway\nall this extra code seems overkill. Instead of all the extra code you\njust need a comment to mention how it will be sorted and checked\nlater.\n\nBut even if you prefer to keep it, I thought those 2 functions\n'is_plugin_unique()' and 'add_plugin_list_item()' could have been\ncombined to just have 'add_plugin_list_unique_item()'. Since order\ndoes not matter, such a function would just add items to the end of\nthe list (after finding uniqueness) instead of to the head.\n\n~~~\n\n6. get_loadable_libraries\n\n FirstNormalObjectId);\n+\n totaltups += PQntuples(ress[dbnum]);\n~\n\nThe extra blank line in the existing code is not needed in this patch.\n\n~~~\n\n7. get_loadable_libraries\n\n int rowno;\n+ plugin_list *point;\n\n~\n\nSame as a prior comment #4. What's the meaning of the name 'point'?\n\n~~~\n\n8. get_loadable_libraries\n+\n+ /*\n+ * If the old cluster has logical replication slots, plugins used by\n+ * them must be also stored. It must be done only once, so do it at\n+ * dbnum == 0 case.\n+ */\n+ if (output_plugins == NULL)\n+ continue;\n+\n+ if (dbnum != 0)\n+ continue;\n\nThis logic seems misplaced. If this \"must be done only once\" then why\nis it within the db loop in the first place? Shouldn't this be done\nseperately outside the loop?\n\n======\nsrc/bin/pg_upgrade/info.c\n\n9.\n+/*\n+ * Helper function for get_old_cluster_logical_slot_infos()\n+ */\n+static WALAvailability\n+GetWALAvailabilityByString(const char *str)\n\nShould this be forward declared like the other static functions are?\n\n~~~\n\n10. get_old_cluster_logical_slot_infos\n\n+ for (slotnum = 0; slotnum < num_slots; slotnum++)\n+ {\n+ LogicalSlotInfo *curr = &slotinfos[slotnum];\n+ bool have_error = false;\n\nHere seems an unnecessary assignment to 'have_error' because it will\nalways be assigned again before it is checked.\n\n~~~\n\n11. get_old_cluster_logical_slot_infos\n\n+ curr->confirmed_flush = strtoLSN(\n+ PQgetvalue(res,\n+ slotnum,\n+ i_confirmed_flush),\n+ &have_error);\n+ curr->wal_status = GetWALAvailabilityByString(\n+ PQgetvalue(res,\n+ slotnum,\n+ i_wal_status));\n\nCan this excessive wrapping be improved? Maybe new vars are needed.\n\n~~~\n\n12.\n+static void\n+print_slot_infos(LogicalSlotInfoArr *slot_arr)\n+{\n+ int slotnum;\n+\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ {\n+ LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\n+\n+ if (slotnum == 0)\n+ pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\n+\n+ pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\n+ slot_info->slotname,\n+ slot_info->plugin,\n+ slot_info->two_phase);\n+ }\n+}\n\nThis seems an odd way to output the heading. Isn't it better to put\nthis outside the loop?\n\nSUGGESTION\nif (slot_arr->nslots > 0)\n pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.c\n\n13.\n+/*\n+ * setup_new_cluster()\n+ *\n+ * Starts a new cluster for updating the wal_level in the control fine, then\n+ * does final setups. Logical slots are also created here.\n+ */\n+static void\n+setup_new_cluster(void)\n\ntypo\n\n/control fine/control file/\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 6 Sep 2023 17:25:08 +1200",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 7:34 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Also, for simplifying codes, only a first-met invalidated slot is output in the\n> check_old_cluster_for_valid_slots(). Warning messages int the function were\n> removed. I think it may be enough because check_new_cluster_is_empty() do\n> similar thing. Please tell me if it should be reverted...\n>\n\nAnother possible idea is to show all the WARNINGS but only when in verbose mode.\n\n-------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 6 Sep 2023 17:30:39 +1200",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wednesday, September 6, 2023 11:18 AM Zhijie Hou (Fujitsu) <[email protected]> wrote:\r\n> \r\n> On Tuesday, September 5, 2023 3:35 PM Kuroda, Hayato/黒田 隼人\r\n> <[email protected]> wrote:\r\n> \r\n> 4.\r\n> +\t * XXX: As mentioned in comments atop get_output_plugins(), we may\r\n> not\r\n> +\t * have to consider the uniqueness of entries. If so, we can use\r\n> +\t * count_old_cluster_logical_slots() instead of plugin_list_length().\r\n> +\t */\r\n> \r\n> I think check_loadable_libraries() will avoid loading the same library, so it seems\r\n> fine to skip duplicating the plugins and we can save some codes.\r\n\r\nSorry, there is a typo, I mean \"deduplicating\" instead of \" duplicating \"\r\n\r\n> \r\n> ----\r\n> \t\t/* Did the library name change? Probe it. */\r\n> \t\tif (libnum == 0 || strcmp(lib, os_info.libraries[libnum -\r\n> 1].name) != 0)\r\n> ----\r\n> \r\n> But if we think duplicating them would be better, I feel we could use the\r\n\r\nHere also \" duplicating \" should be \"deduplicating\".\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n",
"msg_date": "Wed, 6 Sep 2023 05:39:26 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 8:47 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Tuesday, September 5, 2023 3:35 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\n> >\n> > Dear Hou-san,\n> >\n> > > Based on this, it’s possible that the slots we get each time when\n> > > checking wal_status are different, because they may get changed in between\n> > these checks.\n> > > This may not cause serious problems for now, because we will either\n> > > copy all the slots including ones invalidated when upgrading or we\n> > > report ERROR. But I feel it's better to get consistent result each\n> > > time we check the slots to close the possibility for problems in the\n> > > future. So, I feel we could centralize the check for wal_status and\n> > > slots fetch, so that even if some slots status changed after that, it won't have\n> > a risk to affect our check. What do you think ?\n> >\n> > Thank you for giving the suggestion! I agreed that to centralize checks, and I\n> > had already started to modify. Here is the updated patch.\n> >\n> > In this patch all slot infos are extracted in the\n> > get_old_cluster_logical_slot_infos(),\n> > upcoming functions uses them. Based on the change, two attributes\n> > confirmed_flush and wal_status were added in LogicalSlotInfo.\n> >\n> > IIUC we cannot use strcut List in the client codes, so structures and related\n> > functions are added in the function.c. These are used for extracting unique\n> > plugins, but it may be overkill because check_loadable_libraries() handle\n> > duplicated entries. If we can ignore duplicated entries, these functions can be\n> > removed.\n> >\n> > Also, for simplifying codes, only a first-met invalidated slot is output in the\n> > check_old_cluster_for_valid_slots(). Warning messages int the function were\n> > removed. I think it may be enough because check_new_cluster_is_empty() do\n> > similar thing. Please tell me if it should be reverted...\n>\n> Thank for updating the patch ! here are few comments.\n>\n> 1.\n>\n> + res = executeQueryOrDie(conn, \"SHOW wal_level;\");\n> + wal_level = PQgetvalue(res, 0, 0);\n>\n> + res = executeQueryOrDie(conn, \"SHOW wal_level;\");\n> + wal_level = PQgetvalue(res, 0, 0);\n>\n> I think it would be better to do a sanity check using PQntuples() before\n> calling PQgetvalue() in above places.\n>\n> 2.\n>\n> +/*\n> + * Helper function for get_old_cluster_logical_slot_infos()\n> + */\n> +static WALAvailability\n> +GetWALAvailabilityByString(const char *str)\n> +{\n> + WALAvailability status = WALAVAIL_INVALID_LSN;\n> +\n> + if (strcmp(str, \"reserved\") == 0)\n> + status = WALAVAIL_RESERVED;\n>\n> Not a comment, but I am wondering if we could use conflicting field to do this\n> check, so that we could avoid the new conversion function and structure\n> movement. What do you think ?\n>\n\nI also think referring to the conflicting field would be better not\nonly for the purpose of avoiding extra code but also to give accurate\ninformation about invalidated slots for which we want to give an\nerror.\n\nAdditionally, I think we should try to avoid writing a new function\nstrtoLSN as that adds a maintainability burden. We can probably send\nthe value fetched from pg_controldata in the query for comparison with\nconfirmed_flush LSN.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 14:25:47 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 11:01 AM Peter Smith <[email protected]> wrote:\n>\n> On Tue, Sep 5, 2023 at 7:34 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Also, for simplifying codes, only a first-met invalidated slot is output in the\n> > check_old_cluster_for_valid_slots(). Warning messages int the function were\n> > removed. I think it may be enough because check_new_cluster_is_empty() do\n> > similar thing. Please tell me if it should be reverted...\n> >\n>\n> Another possible idea is to show all the WARNINGS but only when in verbose mode.\n>\n\nI think it would be better to write problematic slots in the script\nfile like we are doing in the function\ncheck_for_composite_data_type_usage()->check_for_data_types_usage()\nand give a message suggesting what the user can do as we are doing in\ncheck_for_composite_data_type_usage(). That will be helpful for the\nuser to take necessary action.\n\nA few other comments:\n=================\n1.\n@@ -189,6 +199,8 @@ check_new_cluster(void)\n {\n get_db_and_rel_infos(&new_cluster);\n\n+ check_new_cluster_logical_replication_slots();\n+\n check_new_cluster_is_empty();\n\n check_loadable_libraries();\n\nWhy check_new_cluster_logical_replication_slots is done before\ncheck_new_cluster_is_empty? At least check_new_cluster_is_empty()\nwould be much quicker to return an error if any. I think if we don't\nhave a specific reason to position this new check, we can do it at the\nend after check_for_new_tablespace_dir() to avoid breaking the order\nof existing checks.\n\n2. Shall we rename get_db_and_rel_infos() to\nget_db_rel_and_slot_infos() or something like that as that function\nnow fetches the slot information as well?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Sep 2023 14:56:44 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\r\n\r\nThank you for giving comments! PSA new version.\r\n0001 is updated based on the forked thread.\r\n\r\n> \r\n> 1.\r\n> \r\n> +\tres = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n> +\twal_level = PQgetvalue(res, 0, 0);\r\n> \r\n> +\tres = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n> +\twal_level = PQgetvalue(res, 0, 0);\r\n> \r\n> I think it would be better to do a sanity check using PQntuples() before\r\n> calling PQgetvalue() in above places.\r\n\r\nAdded.\r\n\r\n> 2.\r\n> \r\n> +/*\r\n> + * Helper function for get_old_cluster_logical_slot_infos()\r\n> + */\r\n> +static WALAvailability\r\n> +GetWALAvailabilityByString(const char *str)\r\n> +{\r\n> +\tWALAvailability status = WALAVAIL_INVALID_LSN;\r\n> +\r\n> +\tif (strcmp(str, \"reserved\") == 0)\r\n> +\t\tstatus = WALAVAIL_RESERVED;\r\n> \r\n> Not a comment, but I am wondering if we could use conflicting field to do this\r\n> check, so that we could avoid the new conversion function and structure\r\n> movement. What do you think ?\r\n\r\nI checked pg_get_replication_slots() and agreed that pg_replication_slots.conflicting\r\nindicates whether the slot is usable or not. I can use the attribute instead of porting\r\nWALAvailability. Fixed.\r\n\r\n> 3.\r\n> \r\n> +\t\t\tcurr->confirmed_flush = strtoLSN(\r\n> +\r\n> \t\t PQgetvalue(res,\r\n> +\r\n> \t\t\t\t\tslotnum,\r\n> +\r\n> \t\t\t\t\ti_confirmed_flush),\r\n> +\r\n> \t\t &have_error);\r\n> \r\n> The indention looks a bit unusual.\r\n\r\nThe part is not needed anymore.\r\n\r\n> 4.\r\n> +\t * XXX: As mentioned in comments atop get_output_plugins(), we may\r\n> not\r\n> +\t * have to consider the uniqueness of entries. If so, we can use\r\n> +\t * count_old_cluster_logical_slots() instead of plugin_list_length().\r\n> +\t */\r\n> \r\n> I think check_loadable_libraries() will avoid loading the same library, so it\r\n> seems fine to skip duplicating the plugins and we can save some codes.\r\n> \r\n> ----\r\n> \t\t/* Did the library name change? Probe it. */\r\n> \t\tif (libnum == 0 || strcmp(lib, os_info.libraries[libnum -\r\n> 1].name) != 0)\r\n> ----\r\n> \r\n> But if we think duplicating them would be better, I feel we could use the\r\n> SimpleStringList to store and duplicate the plugin name. get_output_plugins can\r\n> return an array of the stringlist, each stringlist includes the plugins names\r\n> in one db. I shared a rough POC patch to show how it works, the intention is to\r\n> avoid introducing our new plugin list API.\r\n\r\nActually I do not like the style neither. Peter also said that we can skip checking the\r\nuniqueness, so removed.\r\n\r\n> 5.\r\n> \r\n> +\tos_info.libraries = (LibraryInfo *) pg_malloc(\r\n> +\r\n> \t\t\t (totaltups + plugin_list_length(output_plugins)) *\r\n> sizeof(LibraryInfo));\r\n> \r\n> If we think this looks too long, maybe using pg_malloc_array can help.\r\n>\r\n\r\nI checked whole of the patch and used these shorten macros if the line exceeded\r\n80 columns.\r\n\r\nAlso, I found a cfbot failure [1] but I could not find any reasons.\r\nI will keep investigating more about it.\r\n\r\n[1]: https://cirrus-ci.com/task/4634769732927488\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 6 Sep 2023 13:35:02 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing!\r\n\r\n> \r\n> ======\r\n> src/bin/pg_upgrade/controldata.c\r\n> \r\n> 1. get_control_data\r\n> \r\n> + if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\r\n> + {\r\n> + bool have_error = false;\r\n> +\r\n> + p = strchr(p, ':');\r\n> +\r\n> + if (p == NULL || strlen(p) <= 1)\r\n> + pg_fatal(\"%d: controldata retrieval problem\", __LINE__);\r\n> +\r\n> + p++; /* remove ':' char */\r\n> +\r\n> + p = strpbrk(p, \"01234567890ABCDEF\");\r\n> +\r\n> + if (p == NULL || strlen(p) <= 1)\r\n> + pg_fatal(\"%d: controldata retrieval problem\", __LINE__);\r\n> +\r\n> + cluster->controldata.chkpnt_latest =\r\n> + strtoLSN(p, &have_error);\r\n> \r\n> 1a.\r\n> The declaration assignment of 'have_error' is redundant because it\r\n> gets overwritten before it is checked anyhow.\r\n> \r\n> ~\r\n> \r\n> 1b.\r\n> IMO that first check logic should also be shifted to be *inside* the\r\n> strtoLSN and it would just return have_error true. This eliminates\r\n> having 2x pg_fatal that have the same purpose.\r\n> \r\n> ~~~\r\n> \r\n> 2. strtoLSN\r\n> \r\n> +/*\r\n> + * Convert String to XLogRecPtr.\r\n> + *\r\n> + * This function is ported from pg_lsn_in_internal(). The function cannot be\r\n> + * called from client binaries.\r\n> + */\r\n> +XLogRecPtr\r\n> +strtoLSN(const char *str, bool *have_error)\r\n> \r\n> SUGGESTION (comment wording)\r\n> This function is ported from pg_lsn_in_internal() which cannot be\r\n> called from client binaries.\r\n\r\nThese changes are reverted.\r\n\r\n> src/bin/pg_upgrade/function.c\r\n> \r\n> 3. struct plugin_list\r\n> \r\n> +typedef struct plugin_list\r\n> +{\r\n> + int dbnum;\r\n> + char *plugin;\r\n> + struct plugin_list *next;\r\n> +} plugin_list;\r\n> \r\n> I found that name confusing. IMO should be like 'plugin_list_elem'.\r\n> \r\n> e.g. it gets too strange in subsequent code:\r\n> + plugin_list *newentry = (plugin_list *) pg_malloc(sizeof(plugin_list));\r\n> \r\n> ~~~\r\n> \r\n> 4. is_plugin_unique\r\n> \r\n> +/* Has the given plugin already been listed? */\r\n> +static bool\r\n> +is_plugin_unique(plugin_list_head *listhead, const char *plugin)\r\n> +{\r\n> + plugin_list *point;\r\n> +\r\n> + /* Quick return if the head is NULL */\r\n> + if (listhead == NULL)\r\n> + return true;\r\n> +\r\n> + /* Seek the plugin list */\r\n> + for (point = listhead->head; point; point = point->next)\r\n> + {\r\n> + if (strcmp(point->plugin, plugin) == 0)\r\n> + return false;\r\n> + }\r\n> +\r\n> + return true;\r\n> +}\r\n> \r\n> What's the meaning of the name 'point'? Maybe something generic like\r\n> 'cur' or similar is better?\r\n> \r\n> ~~~\r\n> \r\n> 5. get_output_plugins\r\n> \r\n> +/*\r\n> + * Load the list of unique output plugins.\r\n> + *\r\n> + * XXX: Currently, we extract the list of unique output plugins, but this may\r\n> + * be overkill. The list is used for two purposes - 1) to allocate the minimal\r\n> + * memory for the library list and 2) to skip storing duplicated plugin names.\r\n> + * However, the consumer check_loadable_libraries() can avoid double checks\r\n> for\r\n> + * the same library. The above means that we can arrange output plugins without\r\n> + * considering their uniqueness, so that we can remove this function.\r\n> + */\r\n> +static plugin_list_head *\r\n> +get_output_plugins(void)\r\n> +{\r\n> + plugin_list_head *head = NULL;\r\n> + int dbnum;\r\n> +\r\n> + /* Quick return if there are no logical slots to be migrated. 
*/\r\n> + if (count_old_cluster_logical_slots() == 0)\r\n> + return NULL;\r\n> +\r\n> + for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> + {\r\n> + LogicalSlotInfoArr *slot_arr = &old_cluster.dbarr.dbs[dbnum].slot_arr;\r\n> + int slotnum;\r\n> +\r\n> + for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *slot = &slot_arr->slots[slotnum];\r\n> +\r\n> + /* Add to the list if the plugin has not been listed yet */\r\n> + if (is_plugin_unique(head, slot->plugin))\r\n> + add_plugin_list_item(&head, dbnum, slot->plugin);\r\n> + }\r\n> + }\r\n> +\r\n> + return head;\r\n> +}\r\n> \r\n> About the XXX. Yeah, since the uniqueness seems checked later anyway\r\n> all this extra code seems overkill. Instead of all the extra code you\r\n> just need a comment to mention how it will be sorted and checked\r\n> later.\r\n> \r\n> But even if you prefer to keep it, I thought those 2 functions\r\n> 'is_plugin_unique()' and 'add_plugin_list_item()' could have been\r\n> combined to just have 'add_plugin_list_unique_item()'. Since order\r\n> does not matter, such a function would just add items to the end of\r\n> the list (after finding uniqueness) instead of to the head.\r\n\r\nBased on suggestions from you and Hou[1], I withdrew to check their uniqueness.\r\nSo these functions and structures are removed.\r\n\r\n> 6. get_loadable_libraries\r\n> \r\n> FirstNormalObjectId);\r\n> +\r\n> totaltups += PQntuples(ress[dbnum]);\r\n> ~\r\n> \r\n> The extra blank line in the existing code is not needed in this patch.\r\n\r\nRemoved.\r\n\r\n> 7. get_loadable_libraries\r\n> \r\n> int rowno;\r\n> + plugin_list *point;\r\n> \r\n> ~\r\n> \r\n> Same as a prior comment #4. What's the meaning of the name 'point'?\r\n\r\nThe variable was removed.\r\n\r\n> 8. get_loadable_libraries\r\n> +\r\n> + /*\r\n> + * If the old cluster has logical replication slots, plugins used by\r\n> + * them must be also stored. It must be done only once, so do it at\r\n> + * dbnum == 0 case.\r\n> + */\r\n> + if (output_plugins == NULL)\r\n> + continue;\r\n> +\r\n> + if (dbnum != 0)\r\n> + continue;\r\n> \r\n> This logic seems misplaced. If this \"must be done only once\" then why\r\n> is it within the db loop in the first place? Shouldn't this be done\r\n> seperately outside the loop?\r\n\r\nThe logic was removed.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 9.\r\n> +/*\r\n> + * Helper function for get_old_cluster_logical_slot_infos()\r\n> + */\r\n> +static WALAvailability\r\n> +GetWALAvailabilityByString(const char *str)\r\n> \r\n> Should this be forward declared like the other static functions are?\r\n\r\nThe function was removed.\r\n\r\n> 10. get_old_cluster_logical_slot_infos\r\n> \r\n> + for (slotnum = 0; slotnum < num_slots; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *curr = &slotinfos[slotnum];\r\n> + bool have_error = false;\r\n> \r\n> Here seems an unnecessary assignment to 'have_error' because it will\r\n> always be assigned again before it is checked.\r\n\r\nThe variable was removed.\r\n\r\n> 11. get_old_cluster_logical_slot_infos\r\n> \r\n> + curr->confirmed_flush = strtoLSN(\r\n> + PQgetvalue(res,\r\n> + slotnum,\r\n> + i_confirmed_flush),\r\n> + &have_error);\r\n> + curr->wal_status = GetWALAvailabilityByString(\r\n> + PQgetvalue(res,\r\n> + slotnum,\r\n> + i_wal_status));\r\n> \r\n> Can this excessive wrapping be improved? 
Maybe new vars are needed.\r\n\r\nThe part was removed.\r\n\r\n> 12.\r\n> +static void\r\n> +print_slot_infos(LogicalSlotInfoArr *slot_arr)\r\n> +{\r\n> + int slotnum;\r\n> +\r\n> + for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\r\n> +\r\n> + if (slotnum == 0)\r\n> + pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\r\n> +\r\n> + pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\r\n> + slot_info->slotname,\r\n> + slot_info->plugin,\r\n> + slot_info->two_phase);\r\n> + }\r\n> +}\r\n> \r\n> This seems an odd way to output the heading. Isn't it better to put\r\n> this outside the loop?\r\n> \r\n> SUGGESTION\r\n> if (slot_arr->nslots > 0)\r\n> pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.c\r\n> \r\n> 13.\r\n> +/*\r\n> + * setup_new_cluster()\r\n> + *\r\n> + * Starts a new cluster for updating the wal_level in the control fine, then\r\n> + * does final setups. Logical slots are also created here.\r\n> + */\r\n> +static void\r\n> +setup_new_cluster(void)\r\n> \r\n> typo\r\n> \r\n> /control fine/control file/\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/OS0PR01MB57165A8F24BEFF5F4CCBBE5994EFA%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 6 Sep 2023 13:36:17 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! \r\n\r\n> \r\n> I also think referring to the conflicting field would be better not\r\n> only for the purpose of avoiding extra code but also to give accurate\r\n> information about invalidated slots for which we want to give an\r\n> error.\r\n\r\nFixed.\r\n\r\n> Additionally, I think we should try to avoid writing a new function\r\n> strtoLSN as that adds a maintainability burden. We can probably send\r\n> the value fetched from pg_controldata in the query for comparison with\r\n> confirmed_flush LSN.\r\n\r\nChanged like that. LogicalSlotInfo was also updated to have the Boolean.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n",
"msg_date": "Wed, 6 Sep 2023 13:36:28 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> \r\n> I think it would be better to write problematic slots in the script\r\n> file like we are doing in the function\r\n> check_for_composite_data_type_usage()->check_for_data_types_usage()\r\n> and give a message suggesting what the user can do as we are doing in\r\n> check_for_composite_data_type_usage(). That will be helpful for the\r\n> user to take necessary action.\r\n\r\nDid it. I wondered how we output the list of slots because there are two types of\r\nproblem, but currently I used a same file. If you have better approach, please\r\nteach me.\r\n\r\n> A few other comments:\r\n> =================\r\n> 1.\r\n> @@ -189,6 +199,8 @@ check_new_cluster(void)\r\n> {\r\n> get_db_and_rel_infos(&new_cluster);\r\n> \r\n> + check_new_cluster_logical_replication_slots();\r\n> +\r\n> check_new_cluster_is_empty();\r\n> \r\n> check_loadable_libraries();\r\n> \r\n> Why check_new_cluster_logical_replication_slots is done before\r\n> check_new_cluster_is_empty? At least check_new_cluster_is_empty()\r\n> would be much quicker to return an error if any. I think if we don't\r\n> have a specific reason to position this new check, we can do it at the\r\n> end after check_for_new_tablespace_dir() to avoid breaking the order\r\n> of existing checks.\r\n\r\nMoved to the bottom.\r\n\r\n> 2. Shall we rename get_db_and_rel_infos() to\r\n> get_db_rel_and_slot_infos() or something like that as that function\r\n> now fetches the slot information as well?\r\n\r\nFixed. Comments were also fixed as well. \r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Wed, 6 Sep 2023 13:36:37 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi, here are my code review comments for the patch v32-0002\n\n======\nsrc/bin/pg_upgrade/check.c\n\n1. check_new_cluster_logical_replication_slots\n\n+ res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\n+ max_replication_slots = atoi(PQgetvalue(res, 0, 0));\n+\n+ if (PQntuples(res) != 1)\n+ pg_fatal(\"could not determine max_replication_slots\");\n\nShouldn't the PQntuples check be *before* the PQgetvalue and\nassignment to max_replication_slots?\n\n~~~\n\n2. check_new_cluster_logical_replication_slots\n\n+ res = executeQueryOrDie(conn, \"SHOW wal_level;\");\n+ wal_level = PQgetvalue(res, 0, 0);\n+\n+ if (PQntuples(res) != 1)\n+ pg_fatal(\"could not determine wal_level\");\n\nShouldn't the PQntuples check be *before* the PQgetvalue and\nassignment to wal_level?\n\n~~~\n\n3. check_old_cluster_for_valid_slots\n\nI saw that similar code with scripts like this is doing PG_REPORT:\n\npg_log(PG_REPORT, \"fatal\");\n\nbut that PG_REPORT is missing from this function.\n\n======\nsrc/bin/pg_upgrade/function.c\n\n4. get_loadable_libraries\n\n@@ -42,11 +43,12 @@ library_name_compare(const void *p1, const void *p2)\n ((const LibraryInfo *) p2)->dbnum;\n }\n\n-\n /*\n * get_loadable_libraries()\n\n~\n\nRemoving that blank line (above this function) should not be included\nin the patch.\n\n~~~\n\n5. get_loadable_libraries\n\n+ /*\n+ * Allocate a memory for extensions and logical replication output\n+ * plugins.\n+ */\n+ os_info.libraries = pg_malloc_array(LibraryInfo,\n+ totaltups + count_old_cluster_logical_slots());\n\n/Allocate a memory/Allocate memory/\n\n~~~\n\n6. get_loadable_libraries\n+ /*\n+ * Store the name of output plugins as well. There is a possibility\n+ * that duplicated plugins are set, but the consumer function\n+ * check_loadable_libraries() will avoid checking the same library, so\n+ * we do not have to consider their uniqueness here.\n+ */\n+ for (slotno = 0; slotno < slot_arr->nslots; slotno++)\n\n/Store the name/Store the names/\n\n======\nsrc/bin/pg_upgrade/info.c\n\n7. get_old_cluster_logical_slot_infos\n\n+ i_slotname = PQfnumber(res, \"slot_name\");\n+ i_plugin = PQfnumber(res, \"plugin\");\n+ i_twophase = PQfnumber(res, \"two_phase\");\n+ i_caughtup = PQfnumber(res, \"caughtup\");\n+ i_conflicting = PQfnumber(res, \"conflicting\");\n+\n+ for (slotnum = 0; slotnum < num_slots; slotnum++)\n+ {\n+ LogicalSlotInfo *curr = &slotinfos[slotnum];\n+\n+ curr->slotname = pg_strdup(PQgetvalue(res, slotnum, i_slotname));\n+ curr->plugin = pg_strdup(PQgetvalue(res, slotnum, i_plugin));\n+ curr->two_phase = (strcmp(PQgetvalue(res, slotnum, i_twophase), \"t\") == 0);\n+ curr->caughtup = (strcmp(PQgetvalue(res, slotnum, i_caughtup), \"t\") == 0);\n+ curr->conflicting = (strcmp(PQgetvalue(res, slotnum, i_conflicting),\n\"t\") == 0);\n+ }\n\nSaying \"tup\" always looks like it should be something tuple-related.\nIMO it will be better to call all these \"caught_up\" instead of\n\"caughtup\":\n\n\"caughtup\" ==> \"caught_up\"\ni_caughtup ==> i_caught_up\ncurr->caughtup ==> curr->caught_up\n\n~~~\n\n8. 
print_slot_infos\n\n+static void\n+print_slot_infos(LogicalSlotInfoArr *slot_arr)\n+{\n+ int slotnum;\n+\n+ if (slot_arr->nslots > 1)\n+ pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\n+\n+ for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ {\n+ LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\n+\n+ pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\n+ slot_info->slotname,\n+ slot_info->plugin,\n+ slot_info->two_phase);\n+ }\n+}\n\nAlthough it makes no functional difference, it might be neater if the\nfor loop is also within that \"if (slot_arr->nslots > 1)\" condition.\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n9.\n+/*\n+ * Structure to store logical replication slot information\n+ */\n+typedef struct\n+{\n+ char *slotname; /* slot name */\n+ char *plugin; /* plugin */\n+ bool two_phase; /* can the slot decode 2PC? */\n+ bool caughtup; /* Is confirmed_flush_lsn the same as latest\n+ * checkpoint LSN? */\n+ bool conflicting; /* Is the slot usable? */\n+} LogicalSlotInfo;\n\n9a.\n+ bool caughtup; /* Is confirmed_flush_lsn the same as latest\n+ * checkpoint LSN? */\n\ncaughtup ==> caught_up\n\n~\n\n9b.\n+ bool conflicting; /* Is the slot usable? */\n\nThe field name has the opposite meaning of the wording of the comment.\n(e.g. it is usable when it is NOT conflicting, right?).\n\nMaybe there needs a better field name, or a better comment, or both.\nAFAICT from other code pg_fatal message 'conflicting' is always\ninterpreted as 'lost' so maybe the field should be called that?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 7 Sep 2023 17:38:20 +1200",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> ======\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 1. check_new_cluster_logical_replication_slots\r\n> \r\n> + res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\r\n> + max_replication_slots = atoi(PQgetvalue(res, 0, 0));\r\n> +\r\n> + if (PQntuples(res) != 1)\r\n> + pg_fatal(\"could not determine max_replication_slots\");\r\n> \r\n> Shouldn't the PQntuples check be *before* the PQgetvalue and\r\n> assignment to max_replication_slots?\r\n\r\nRight, fixed. Also, the checking was added at the first query.\r\n\r\n> 2. check_new_cluster_logical_replication_slots\r\n> \r\n> + res = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n> + wal_level = PQgetvalue(res, 0, 0);\r\n> +\r\n> + if (PQntuples(res) != 1)\r\n> + pg_fatal(\"could not determine wal_level\");\r\n> \r\n> Shouldn't the PQntuples check be *before* the PQgetvalue and\r\n> assignment to wal_level?\r\n\r\nFixed.\r\n\r\n> 3. check_old_cluster_for_valid_slots\r\n> \r\n> I saw that similar code with scripts like this is doing PG_REPORT:\r\n> \r\n> pg_log(PG_REPORT, \"fatal\");\r\n> \r\n> but that PG_REPORT is missing from this function.\r\n\r\nAdded.\r\n\r\n> src/bin/pg_upgrade/function.c\r\n> \r\n> 4. get_loadable_libraries\r\n> \r\n> @@ -42,11 +43,12 @@ library_name_compare(const void *p1, const void *p2)\r\n> ((const LibraryInfo *) p2)->dbnum;\r\n> }\r\n> \r\n> -\r\n> /*\r\n> * get_loadable_libraries()\r\n> \r\n> ~\r\n> \r\n> Removing that blank line (above this function) should not be included\r\n> in the patch.\r\n\r\nRestored the blank.\r\n\r\n> 5. get_loadable_libraries\r\n> \r\n> + /*\r\n> + * Allocate a memory for extensions and logical replication output\r\n> + * plugins.\r\n> + */\r\n> + os_info.libraries = pg_malloc_array(LibraryInfo,\r\n> + totaltups + count_old_cluster_logical_slots());\r\n> \r\n> /Allocate a memory/Allocate memory/\r\n\r\nFixed.\r\n\r\n> 6. get_loadable_libraries\r\n> + /*\r\n> + * Store the name of output plugins as well. There is a possibility\r\n> + * that duplicated plugins are set, but the consumer function\r\n> + * check_loadable_libraries() will avoid checking the same library, so\r\n> + * we do not have to consider their uniqueness here.\r\n> + */\r\n> + for (slotno = 0; slotno < slot_arr->nslots; slotno++)\r\n> \r\n> /Store the name/Store the names/\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 7. 
get_old_cluster_logical_slot_infos\r\n> \r\n> + i_slotname = PQfnumber(res, \"slot_name\");\r\n> + i_plugin = PQfnumber(res, \"plugin\");\r\n> + i_twophase = PQfnumber(res, \"two_phase\");\r\n> + i_caughtup = PQfnumber(res, \"caughtup\");\r\n> + i_conflicting = PQfnumber(res, \"conflicting\");\r\n> +\r\n> + for (slotnum = 0; slotnum < num_slots; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *curr = &slotinfos[slotnum];\r\n> +\r\n> + curr->slotname = pg_strdup(PQgetvalue(res, slotnum, i_slotname));\r\n> + curr->plugin = pg_strdup(PQgetvalue(res, slotnum, i_plugin));\r\n> + curr->two_phase = (strcmp(PQgetvalue(res, slotnum, i_twophase), \"t\") == 0);\r\n> + curr->caughtup = (strcmp(PQgetvalue(res, slotnum, i_caughtup), \"t\") == 0);\r\n> + curr->conflicting = (strcmp(PQgetvalue(res, slotnum, i_conflicting),\r\n> \"t\") == 0);\r\n> + }\r\n> \r\n> Saying \"tup\" always looks like it should be something tuple-related.\r\n> IMO it will be better to call all these \"caught_up\" instead of\r\n> \"caughtup\":\r\n> \r\n> \"caughtup\" ==> \"caught_up\"\r\n> i_caughtup ==> i_caught_up\r\n> curr->caughtup ==> curr->caught_up\r\n\r\nFixed. The alias was also fixed.\r\n\r\n> 8. print_slot_infos\r\n> \r\n> +static void\r\n> +print_slot_infos(LogicalSlotInfoArr *slot_arr)\r\n> +{\r\n> + int slotnum;\r\n> +\r\n> + if (slot_arr->nslots > 1)\r\n> + pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\r\n> +\r\n> + for (slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\r\n> +\r\n> + pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %d\",\r\n> + slot_info->slotname,\r\n> + slot_info->plugin,\r\n> + slot_info->two_phase);\r\n> + }\r\n> +}\r\n> \r\n> Although it makes no functional difference, it might be neater if the\r\n> for loop is also within that \"if (slot_arr->nslots > 1)\" condition.\r\n\r\nHmm, but the point makes more differences between print_rel_infos() and\r\nprint_slot_infos(), I thought it should be similar. Instead, I added a quick\r\nreturn. Thought?\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 9.\r\n> +/*\r\n> + * Structure to store logical replication slot information\r\n> + */\r\n> +typedef struct\r\n> +{\r\n> + char *slotname; /* slot name */\r\n> + char *plugin; /* plugin */\r\n> + bool two_phase; /* can the slot decode 2PC? */\r\n> + bool caughtup; /* Is confirmed_flush_lsn the same as latest\r\n> + * checkpoint LSN? */\r\n> + bool conflicting; /* Is the slot usable? */\r\n> +} LogicalSlotInfo;\r\n> \r\n> 9a.\r\n> + bool caughtup; /* Is confirmed_flush_lsn the same as latest\r\n> + * checkpoint LSN? */\r\n> \r\n> caughtup ==> caught_up\r\n\r\nFixed.\r\n\r\n> 9b.\r\n> + bool conflicting; /* Is the slot usable? */\r\n> \r\n> The field name has the opposite meaning of the wording of the comment.\r\n> (e.g. it is usable when it is NOT conflicting, right?).\r\n> \r\n> Maybe there needs a better field name, or a better comment, or both.\r\n> AFAICT from other code pg_fatal message 'conflicting' is always\r\n> interpreted as 'lost' so maybe the field should be called that?\r\n\r\nChanged to \"is_lost\", which is easy to understand the meaning.\r\n\r\nAlso, I fixed following points:\r\n\r\n* Added a period to messages in check_new_cluster_logical_replication_slots(),\r\n except the final line. 
According to other functions like check_new_cluster_is_empty(),\r\n the period is ignored if the escape sequence is at the end.\r\n* Removed the --check test because sometimes it failed on the Windows machine.\r\n I reported it in another thread [1].\r\n* Set max_slot_wal_keep_size to -1 when the old cluster was started. According to the\r\n discussion [2], the setting is sufficient to suppress the WAL removal.\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/TYAPR01MB586654E2D74B838021BE77CAF5EEA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/ZPl659a5hPDHPq9w%40paquier.xyz\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 7 Sep 2023 12:24:23 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thursday, September 7, 2023 8:24 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Dear Peter,\r\n> \r\n> Thank you for reviewing! PSA new version.\r\n\r\nThanks for updating the patches !\r\n\r\nHere are some comments:\r\n\r\n1.\r\n\r\n bool\t\treap_child(bool wait_for_child);\r\n+\r\n+XLogRecPtr\tstrtoLSN(const char *str, bool *have_error);\r\n\r\nThis function has be removed.\r\n\r\n2.\r\n\r\n+\tif (nslots_on_new)\r\n+\t{\r\n+\t\tif (nslots_on_new == 1)\r\n+\t\t\tpg_fatal(\"New cluster must not have logical replication slots but found a slot.\");\r\n+\t\telse\r\n+\t\t\tpg_fatal(\"New cluster must not have logical replication slots but found %d slots.\",\r\n+\t\t\t\t\t nslots_on_new);\r\n\r\nWe could try ngettext() here:\r\n\t\tpg_log_warning(ngettext(\"New cluster must not have logical replication slots but found %d slot.\",\r\n\t\t\t\t\t\t\t\t\"New cluster must not have logical replication slots but found %d slots\",\r\n\t\t\t\t\t\t\t\tnslots_on_new)\r\n\r\n3.\r\n-\tcreate_script_for_old_cluster_deletion(&deletion_script_file_name);\r\n-\r\n\r\nIs there a reason for reordering this function ? Sorry If I missed some\r\nprevious discussions.\r\n\r\n\r\n4.\r\n\r\n@@ -610,6 +724,12 @@ free_db_and_rel_infos(DbInfoArr *db_arr)\r\n \t{\r\n \t\tfree_rel_infos(&db_arr->dbs[dbnum].rel_arr);\r\n \t\tpg_free(db_arr->dbs[dbnum].db_name);\r\n+\r\n+\t\t/*\r\n+\t\t * Logical replication slots must not exist on the new cluster before\r\n+\t\t * create_logical_replication_slots().\r\n+\t\t */\r\n+\t\tAssert(db_arr->dbs[dbnum].slot_arr.nslots == 0);\r\n\r\n\r\nI think the assert is not necessary, as the patch will check the new cluster's\r\nslots in another function. Besides, this function is not only used for new\r\ncluster, but the comment only mentioned the new cluster which seems a bit\r\ninconsistent. So, how about removing it ?\r\n\r\n5.\r\n \t\t\t (cluster == &new_cluster) ?\r\n-\t\t\t \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off\" : \"\",\r\n+\t\t\t \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off\" :\r\n+\t\t\t \" -c max_slot_wal_keep_size=-1\",\r\n\r\nI think we need to set max_slot_wal_keep_size on new cluster as well, otherwise\r\nit's possible that the new created slots get invalidated during upgrade, what\r\ndo you think ?\r\n\r\n6.\r\n\r\n+\tbool\t\tis_lost;\t\t/* Is the slot in 'lost'? */\r\n+} LogicalSlotInfo;\r\n\r\nWould it be better to use 'invalidated', as the same is used in error message\r\nof ReportSlotInvalidation() and logicaldecoding.sgml.\r\n\r\n7.\r\n+\tfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n+\t{\r\n\t...\r\n+\t\tif (script)\r\n+\t\t{\r\n+\t\t\tfclose(script);\r\n+\r\n+\t\t\tpg_log(PG_REPORT, \"fatal\");\r\n+\t\t\tpg_fatal(\"The source cluster contains one or more problematic logical replication slots.\\n\"\r\n\r\nI think we should do this pg_fatal out of the for() loop, otherwise we cannot\r\ncollect all the problematic slots.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Fri, 8 Sep 2023 08:42:04 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 5:54 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for reviewing! PSA new version.\n>\n\nFew comments:\n=============\n1.\n<para>\n+ All slots on the old cluster must be usable, i.e., there are no slots\n+ whose\n+ <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>wal_status</structfield>\n+ is <literal>lost</literal>.\n+ </para>\n\nShall we refer to conflicting flag here instead of wal_status?\n\n2.\n--- a/src/bin/pg_upgrade/check.c\n+++ b/src/bin/pg_upgrade/check.c\n@@ -9,6 +9,7 @@\n\n #include \"postgres_fe.h\"\n\n+#include \"access/xlogdefs.h\"\n\nThis include doesn't seem to be required as we already include this\nfile via pg_upgrade.h.\n\n3.\n+ res = executeQueryOrDie(conn, \"SHOW wal_level;\");\n+\n+ if (PQntuples(res) != 1)\n+ pg_fatal(\"could not determine wal_level.\");\n+\n+ wal_level = PQgetvalue(res, 0, 0);\n+\n+ if (strcmp(wal_level, \"logical\") != 0)\n+ pg_fatal(\"wal_level must be \\\"logical\\\", but is set to \\\"%s\\\"\",\n+ wal_level);\n\nwal_level should be checked before the number of slots required.\n\n4.\n@@ -81,7 +84,11 @@ get_loadable_libraries(void)\n{\n...\n+ totaltups++;\n+ }\n+\n }\n\nSpurious new line in the above code.\n\n5.\n- os_info.libraries = (LibraryInfo *) pg_malloc(totaltups *\nsizeof(LibraryInfo));\n+ /*\n+ * Allocate memory for extensions and logical replication output plugins.\n+ */\n+ os_info.libraries = pg_malloc_array(LibraryInfo,\n\nWe haven't referred to extensions previously in this function, so how\nabout changing the comment to: \"Allocate memory for required libraries\nand logical replication output plugins.\"?\n\n6.\n+ /*\n+ * If we are reading the old_cluster, gets infos for logical\n+ * replication slots.\n+ */\n\nHow about changing the comment to: \"Retrieve the logical replication\nslots infos for the old cluster.\"?\n\n7.\n+ /*\n+ * The temporary slots are expressly ignored while checking because such\n+ * slots cannot exist after the upgrade. During the upgrade, clusters are\n+ * started and stopped several times causing any temporary slots to be\n+ * removed.\n+ */\n\n/expressly/explicitly\n\n8.\n+/*\n+ * count_old_cluster_logical_slots()\n+ *\n+ * Sum up and return the number of logical replication slots for all databases.\n\nI think it would be better to just say: \"Returns the number of logical\nreplication slots for all databases.\"\n\n9.\n+ * Note: This must be done after doing the pg_resetwal command because\n+ * pg_resetwal would remove required WALs.\n+ */\n+ if (count_old_cluster_logical_slots())\n+ create_logical_replication_slots();\n\nWe can slightly change the Note to: \"This must be done after executing\npg_resetwal command in the caller because pg_resetwal would remove\nrequired WALs.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 15:58:29 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 2:12 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> 2.\n>\n> + if (nslots_on_new)\n> + {\n> + if (nslots_on_new == 1)\n> + pg_fatal(\"New cluster must not have logical replication slots but found a slot.\");\n> + else\n> + pg_fatal(\"New cluster must not have logical replication slots but found %d slots.\",\n> + nslots_on_new);\n>\n> We could try ngettext() here:\n> pg_log_warning(ngettext(\"New cluster must not have logical replication slots but found %d slot.\",\n> \"New cluster must not have logical replication slots but found %d slots\",\n> nslots_on_new)\n>\n\nWill using pg_log_warning suffice for the purpose of exiting the\nupgrade process? I don't think the intention here is to continue after\nfinding such a case.\n\n>\n> 4.\n>\n> @@ -610,6 +724,12 @@ free_db_and_rel_infos(DbInfoArr *db_arr)\n> {\n> free_rel_infos(&db_arr->dbs[dbnum].rel_arr);\n> pg_free(db_arr->dbs[dbnum].db_name);\n> +\n> + /*\n> + * Logical replication slots must not exist on the new cluster before\n> + * create_logical_replication_slots().\n> + */\n> + Assert(db_arr->dbs[dbnum].slot_arr.nslots == 0);\n>\n>\n> I think the assert is not necessary, as the patch will check the new cluster's\n> slots in another function. Besides, this function is not only used for new\n> cluster, but the comment only mentioned the new cluster which seems a bit\n> inconsistent. So, how about removing it ?\n>\n\nYeah, I also find it odd.\n\n> 5.\n> (cluster == &new_cluster) ?\n> - \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off\" : \"\",\n> + \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off\" :\n> + \" -c max_slot_wal_keep_size=-1\",\n>\n> I think we need to set max_slot_wal_keep_size on new cluster as well, otherwise\n> it's possible that the new created slots get invalidated during upgrade, what\n> do you think ?\n>\n\nI also think that would be better.\n\n> 6.\n>\n> + bool is_lost; /* Is the slot in 'lost'? */\n> +} LogicalSlotInfo;\n>\n> Would it be better to use 'invalidated',\n>\n\nOr how about simply 'invalid'?\n\nA few other points:\n1.\n ntups = PQntuples(res);\n- dbinfos = (DbInfo *) pg_malloc(sizeof(DbInfo) * ntups);\n+ dbinfos = (DbInfo *) pg_malloc0(sizeof(DbInfo) * ntups);\n\nCan we write a comment to say why we need zero memory here?\n\n 2. Why get_old_cluster_logical_slot_infos() need to use\npg_malloc_array whereas for similar stuff get_rel_infos() use\npg_malloc()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 8 Sep 2023 16:53:32 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\r\n\r\nThank you for reviewing! PSA new version! PSA new version.\r\n\r\n> Here are some comments:\r\n> \r\n> 1.\r\n> \r\n> bool\t\treap_child(bool wait_for_child);\r\n> +\r\n> +XLogRecPtr\tstrtoLSN(const char *str, bool *have_error);\r\n> \r\n> This function has be removed.\r\n\r\nRemoved.\r\n\r\n> 2.\r\n> \r\n> +\tif (nslots_on_new)\r\n> +\t{\r\n> +\t\tif (nslots_on_new == 1)\r\n> +\t\t\tpg_fatal(\"New cluster must not have logical replication\r\n> slots but found a slot.\");\r\n> +\t\telse\r\n> +\t\t\tpg_fatal(\"New cluster must not have logical replication\r\n> slots but found %d slots.\",\r\n> +\t\t\t\t\t nslots_on_new);\r\n> \r\n> We could try ngettext() here:\r\n> \t\tpg_log_warning(ngettext(\"New cluster must not have logical\r\n> replication slots but found %d slot.\",\r\n> \t\t\t\t\t\t\t\t\"New\r\n> cluster must not have logical replication slots but found %d slots\",\r\n> \r\n> \tnslots_on_new)\r\n\r\nI agreed to use ngettext(), but I disagreed to change to warning.\r\nChanged to use ngettext().\r\n\r\n> 3.\r\n> -\tcreate_script_for_old_cluster_deletion(&deletion_script_file_name);\r\n> -\r\n> \r\n> Is there a reason for reordering this function ? Sorry If I missed some\r\n> previous discussions.\r\n\r\nWe discussed to move create_logical_replication_slots(), but not for\r\ncreate_script_for_old_cluster_deletion(). Restored.\r\n\r\n> 4.\r\n> \r\n> @@ -610,6 +724,12 @@ free_db_and_rel_infos(DbInfoArr *db_arr)\r\n> \t{\r\n> \t\tfree_rel_infos(&db_arr->dbs[dbnum].rel_arr);\r\n> \t\tpg_free(db_arr->dbs[dbnum].db_name);\r\n> +\r\n> +\t\t/*\r\n> +\t\t * Logical replication slots must not exist on the new cluster\r\n> before\r\n> +\t\t * create_logical_replication_slots().\r\n> +\t\t */\r\n> +\t\tAssert(db_arr->dbs[dbnum].slot_arr.nslots == 0);\r\n> \r\n> \r\n> I think the assert is not necessary, as the patch will check the new cluster's\r\n> slots in another function. Besides, this function is not only used for new\r\n> cluster, but the comment only mentioned the new cluster which seems a bit\r\n> inconsistent. So, how about removing it ?\r\n\r\nAmit also pointed out, so removed the Assertion and comment.\r\n\r\n> 5.\r\n> \t\t\t (cluster == &new_cluster) ?\r\n> -\t\t\t \" -c synchronous_commit=off -c fsync=off -c\r\n> full_page_writes=off\" : \"\",\r\n> +\t\t\t \" -c synchronous_commit=off -c fsync=off -c\r\n> full_page_writes=off\" :\r\n> +\t\t\t \" -c max_slot_wal_keep_size=-1\",\r\n> \r\n> I think we need to set max_slot_wal_keep_size on new cluster as well, otherwise\r\n> it's possible that the new created slots get invalidated during upgrade, what\r\n> do you think ?\r\n\r\nAdded.\r\n\r\n> 6.\r\n> \r\n> +\tbool\t\tis_lost;\t\t/* Is the slot in 'lost'? */\r\n> +} LogicalSlotInfo;\r\n> \r\n> Would it be better to use 'invalidated', as the same is used in error message\r\n> of ReportSlotInvalidation() and logicaldecoding.sgml.\r\n\r\nPer suggestion from Amit, changed to 'invalid'.\r\n\r\n> 7.\r\n> +\tfor (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> +\t{\r\n> \t...\r\n> +\t\tif (script)\r\n> +\t\t{\r\n> +\t\t\tfclose(script);\r\n> +\r\n> +\t\t\tpg_log(PG_REPORT, \"fatal\");\r\n> +\t\t\tpg_fatal(\"The source cluster contains one or more\r\n> problematic logical replication slots.\\n\"\r\n> \r\n> I think we should do this pg_fatal out of the for() loop, otherwise we cannot\r\n> collect all the problematic slots.\r\n\r\nYeah, agreed. 
Fixed.\r\n\r\nAlso, based on the discussion [1], I added an elog(ERROR) in InvalidatePossiblyObsoleteSlot().\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1%2BWBphnmvMpjrxceymzuoMuyV2_pMGaJq-zNODiJqAa7Q%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 8 Sep 2023 13:01:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing!\r\n\r\n> Few comments:\r\n> =============\r\n> 1.\r\n> <para>\r\n> + All slots on the old cluster must be usable, i.e., there are no slots\r\n> + whose\r\n> + <link\r\n> linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>\r\n> wal_status</structfield>\r\n> + is <literal>lost</literal>.\r\n> + </para>\r\n> \r\n> Shall we refer to conflicting flag here instead of wal_status?\r\n\r\nChanged. I used the word 'lost' in check_old_cluster_for_valid_slots() because of\r\nthe line, so changed them accordingly.\r\n\r\n> 2.\r\n> --- a/src/bin/pg_upgrade/check.c\r\n> +++ b/src/bin/pg_upgrade/check.c\r\n> @@ -9,6 +9,7 @@\r\n> \r\n> #include \"postgres_fe.h\"\r\n> \r\n> +#include \"access/xlogdefs.h\"\r\n> \r\n> This include doesn't seem to be required as we already include this\r\n> file via pg_upgrade.h.\r\n\r\nI preferred to include explicitly... but fixed.\r\n\r\n> 3.\r\n> + res = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n> +\r\n> + if (PQntuples(res) != 1)\r\n> + pg_fatal(\"could not determine wal_level.\");\r\n> +\r\n> + wal_level = PQgetvalue(res, 0, 0);\r\n> +\r\n> + if (strcmp(wal_level, \"logical\") != 0)\r\n> + pg_fatal(\"wal_level must be \\\"logical\\\", but is set to \\\"%s\\\"\",\r\n> + wal_level);\r\n> \r\n> wal_level should be checked before the number of slots required.\r\n\r\nMoved.\r\n\r\n> 4.\r\n> @@ -81,7 +84,11 @@ get_loadable_libraries(void)\r\n> {\r\n> ...\r\n> + totaltups++;\r\n> + }\r\n> +\r\n> }\r\n> \r\n> Spurious new line in the above code.\r\n\r\nRemoved.\r\n\r\n> 5.\r\n> - os_info.libraries = (LibraryInfo *) pg_malloc(totaltups *\r\n> sizeof(LibraryInfo));\r\n> + /*\r\n> + * Allocate memory for extensions and logical replication output plugins.\r\n> + */\r\n> + os_info.libraries = pg_malloc_array(LibraryInfo,\r\n> \r\n> We haven't referred to extensions previously in this function, so how\r\n> about changing the comment to: \"Allocate memory for required libraries\r\n> and logical replication output plugins.\"?\r\n\r\nChanged.\r\n\r\n> 6.\r\n> + /*\r\n> + * If we are reading the old_cluster, gets infos for logical\r\n> + * replication slots.\r\n> + */\r\n> \r\n> How about changing the comment to: \"Retrieve the logical replication\r\n> slots infos for the old cluster.\"?\r\n\r\nChanged.\r\n\r\n> 7.\r\n> + /*\r\n> + * The temporary slots are expressly ignored while checking because such\r\n> + * slots cannot exist after the upgrade. 
During the upgrade, clusters are\r\n> + * started and stopped several times causing any temporary slots to be\r\n> + * removed.\r\n> + */\r\n> \r\n> /expressly/explicitly\r\n\r\nReplaced.\r\n\r\n> 8.\r\n> +/*\r\n> + * count_old_cluster_logical_slots()\r\n> + *\r\n> + * Sum up and return the number of logical replication slots for all databases.\r\n> \r\n> I think it would be better to just say: \"Returns the number of logical\r\n> replication slots for all databases.\"\r\n\r\nChanged.\r\n\r\n> 9.\r\n> + * Note: This must be done after doing the pg_resetwal command because\r\n> + * pg_resetwal would remove required WALs.\r\n> + */\r\n> + if (count_old_cluster_logical_slots())\r\n> + create_logical_replication_slots();\r\n> \r\n> We can slightly change the Note to: \"This must be done after executing\r\n> pg_resetwal command in the caller because pg_resetwal would remove\r\n> required WALs.\"\r\n>\r\n\r\nReworded.\r\n\r\nYou can see the new version in [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866AB60B4CF404419D9373DF5EDA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 8 Sep 2023 13:03:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> On Fri, Sep 8, 2023 at 2:12 PM Zhijie Hou (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > 2.\r\n> >\r\n> > + if (nslots_on_new)\r\n> > + {\r\n> > + if (nslots_on_new == 1)\r\n> > + pg_fatal(\"New cluster must not have logical\r\n> replication slots but found a slot.\");\r\n> > + else\r\n> > + pg_fatal(\"New cluster must not have logical\r\n> replication slots but found %d slots.\",\r\n> > + nslots_on_new);\r\n> >\r\n> > We could try ngettext() here:\r\n> > pg_log_warning(ngettext(\"New cluster must not have logical\r\n> replication slots but found %d slot.\",\r\n> > \"New\r\n> cluster must not have logical replication slots but found %d slots\",\r\n> >\r\n> nslots_on_new)\r\n> >\r\n> \r\n> Will using pg_log_warning suffice for the purpose of exiting the\r\n> upgrade process? I don't think the intention here is to continue after\r\n> finding such a case.\r\n\r\nI also think that pg_log_warning is not good.\r\n\r\n> >\r\n> > 4.\r\n> >\r\n> > @@ -610,6 +724,12 @@ free_db_and_rel_infos(DbInfoArr *db_arr)\r\n> > {\r\n> > free_rel_infos(&db_arr->dbs[dbnum].rel_arr);\r\n> > pg_free(db_arr->dbs[dbnum].db_name);\r\n> > +\r\n> > + /*\r\n> > + * Logical replication slots must not exist on the new cluster\r\n> before\r\n> > + * create_logical_replication_slots().\r\n> > + */\r\n> > + Assert(db_arr->dbs[dbnum].slot_arr.nslots == 0);\r\n> >\r\n> >\r\n> > I think the assert is not necessary, as the patch will check the new cluster's\r\n> > slots in another function. Besides, this function is not only used for new\r\n> > cluster, but the comment only mentioned the new cluster which seems a bit\r\n> > inconsistent. So, how about removing it ?\r\n> >\r\n> \r\n> Yeah, I also find it odd.\r\n\r\nRemoved. Based on the decision, your new comment 1 is not needed anymore.\r\n\r\n> > 5.\r\n> > (cluster == &new_cluster) ?\r\n> > - \" -c synchronous_commit=off -c fsync=off -c\r\n> full_page_writes=off\" : \"\",\r\n> > + \" -c synchronous_commit=off -c fsync=off -c\r\n> full_page_writes=off\" :\r\n> > + \" -c max_slot_wal_keep_size=-1\",\r\n> >\r\n> > I think we need to set max_slot_wal_keep_size on new cluster as well,\r\n> otherwise\r\n> > it's possible that the new created slots get invalidated during upgrade, what\r\n> > do you think ?\r\n> >\r\n> \r\n> I also think that would be better.\r\n\r\nAdded.\r\n\r\n> > 6.\r\n> >\r\n> > + bool is_lost; /* Is the slot in 'lost'? */\r\n> > +} LogicalSlotInfo;\r\n> >\r\n> > Would it be better to use 'invalidated',\r\n> >\r\n> \r\n> Or how about simply 'invalid'?\r\n\r\nUsed the word invalid.\r\n\r\n> A few other points:\r\n> 1.\r\n> ntups = PQntuples(res);\r\n> - dbinfos = (DbInfo *) pg_malloc(sizeof(DbInfo) * ntups);\r\n> + dbinfos = (DbInfo *) pg_malloc0(sizeof(DbInfo) * ntups);\r\n> \r\n> Can we write a comment to say why we need zero memory here?\r\n\r\nReverted the change. Originally it was needed to pass the Assert()\r\nin the free_db_and_rel_infos(), but it was removed per above.\r\n\r\n> 2. Why get_old_cluster_logical_slot_infos() need to use\r\n> pg_malloc_array whereas for similar stuff get_rel_infos() use\r\n> pg_malloc()?\r\n\r\nThey did a same thing. I used pg_malloc_array() macro to keep the code\r\nwithin 80 columns.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 8 Sep 2023 13:05:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 6:36 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 2. Why get_old_cluster_logical_slot_infos() need to use\n> > pg_malloc_array whereas for similar stuff get_rel_infos() use\n> > pg_malloc()?\n>\n> They did a same thing. I used pg_malloc_array() macro to keep the code\n> within 80 columns.\n>\n\nI think it is better to be consistent with the existing code in this\ncase. Also, see, if the usage in get_loadable_libraries() can also be\nchanged back to use pg_malloc().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 9 Sep 2023 08:54:18 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 6:31 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n\nComments on the latest patch.\n\n1.\nNote that slot restoration must be done after the final pg_resetwal command\nduring the upgrade because pg_resetwal will remove WALs that are required by\nthe slots. Due to this restriction, the timing of restoring replication slots is\ndifferent from other objects.\n\nThis comment in the commit message is confusing. I understand the\nreason but from this, it is not very clear that if resetwal removes\nthe WAL we needed then why it is good to create after the resetwal. I\nthink we should make it clear that creating new slot will set the\nrestart lsn to current WAL location and after that resetwal can remove\nthose WAL where slot restart lsn is pointing....\n\n2.\n\n+ <itemizedlist>\n+ <listitem>\n+ <para>\n+ All slots on the old cluster must be usable, i.e., there are no slots\n+ whose\n+ <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>wal_status</structfield>\n+ is <literal>lost</literal>.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>confirmed_flush_lsn</structfield>\n+ of all slots on the old cluster must be the same as the latest\n+ checkpoint location.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ The output plugins referenced by the slots on the old cluster must be\n+ installed in the new PostgreSQL executable directory.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ The new cluster must have\n+ <link linkend=\"guc-max-replication-slots\"><varname>max_replication_slots</varname></link>\n+ configured to a value greater than or equal to the number of slots\n+ present in the old cluster.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ The new cluster must have\n+ <link linkend=\"guc-wal-level\"><varname>wal_level</varname></link> as\n+ <literal>logical</literal>.\n+ </para>\n+ </listitem>\n+ </itemizedlist>\n\nI think we should also add that the new slot should not have any\npermanent existing logical replication slot.\n\n3.\n- with the primary.) Replication slots are not copied and must\n- be recreated.\n+ with the primary.) Replication slots on the old standby are not copied.\n+ Only logical slots on the primary are migrated to the new standby,\n+ and other slots must be recreated.\n\nThis paragraph should be rephrased. I mean first stating that\n\"Replication slots on the old standby are not copied\" and then saying\nOnly logical slots are migrated doesn't seem like the best way. Maybe\nwe can just say \"Only logical slots on the primary are migrated to the\nnew standby, and other slots must be recreated.\"\n\n4.\n+ /*\n+ * Raise an ERROR if the logical replication slot is invalidating. It\n+ * would not happen because max_slot_wal_keep_size is set to -1 during\n+ * the upgrade, but it stays safe.\n+ */\n+ if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n+ elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\n\nRephrase the first line as -> Raise an ERROR if the logical\nreplication slot is invalidating during an upgrade.\n\n5.\n+ /* Logical slots can be migrated since PG17. 
*/\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n\n\nFor readability change this to if\n(GET_MAJOR_VERSION(old_cluster.major_version) < 1700), because in most\nof the checks related to this, we are using 1700 so better be\nconsistent in this.\n\n6.\n+ if (nslots_on_new)\n+ pg_fatal(ngettext(\"New cluster must not have logical replication\nslots but found %d slot.\",\n+ \"New cluster must not have logical replication slots but found %d slots.\",\n+ nslots_on_new),\n+ nslots_on_new);\n...\n+ if (PQntuples(res) != 1)\n+ pg_fatal(\"could not determine wal_level.\");\n+\n+ wal_level = PQgetvalue(res, 0, 0);\n+\n+ if (strcmp(wal_level, \"logical\") != 0)\n+ pg_fatal(\"wal_level must be \\\"logical\\\", but is set to \\\"%s\\\"\",\n+ wal_level);\n\n\nI have noticed that the case of the first letter in the pg_fatal\nmessage is not consistent.\n\n7.\n+\n+ /* Is the slot still usable? */\n+ if (slot->invalid)\n+ {\n\nWhy comment says \"Is the slot still usable?\" I think it should be \"Is\nthe slot usable?\" otherwise it appears that we have first fetched the\nslots and now we are refetching it and checking whether it is still\nusable.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Sep 2023 10:39:03 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 10:39 AM Dilip Kumar <[email protected]> wrote:\n>\n> 3.\n> - with the primary.) Replication slots are not copied and must\n> - be recreated.\n> + with the primary.) Replication slots on the old standby are not copied.\n> + Only logical slots on the primary are migrated to the new standby,\n> + and other slots must be recreated.\n>\n> This paragraph should be rephrased. I mean first stating that\n> \"Replication slots on the old standby are not copied\" and then saying\n> Only logical slots are migrated doesn't seem like the best way. Maybe\n> we can just say \"Only logical slots on the primary are migrated to the\n> new standby, and other slots must be recreated.\"\n>\n\nIt is fine to combine these sentences but let's be a bit more\nexplicit: \"Only logical slots on the primary are migrated to the new\nstandby, and other slots on the old standby must be recreated as they\nare not copied.\"\n\n> 4.\n> + /*\n> + * Raise an ERROR if the logical replication slot is invalidating. It\n> + * would not happen because max_slot_wal_keep_size is set to -1 during\n> + * the upgrade, but it stays safe.\n> + */\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> + elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\n>\n> Rephrase the first line as -> Raise an ERROR if the logical\n> replication slot is invalidating during an upgrade.\n>\n\nI think it would be better to write something like: \"The logical\nreplication slots shouldn't be invalidated as max_slot_wal_keep_size\nis set to -1 during the upgrade.\"\n\n> 5.\n> + /* Logical slots can be migrated since PG17. */\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> + return;\n>\n>\n> For readability change this to if\n> (GET_MAJOR_VERSION(old_cluster.major_version) < 1700), because in most\n> of the checks related to this, we are using 1700 so better be\n> consistent in this.\n>\n\nBut the current check is consistent with what we do at other places\nduring the upgrade. I think the patch is trying to be consistent with\nexisting code as much as possible.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 11 Sep 2023 11:16:23 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 11, 2023 at 11:16 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Sep 11, 2023 at 10:39 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > 3.\n> > - with the primary.) Replication slots are not copied and must\n> > - be recreated.\n> > + with the primary.) Replication slots on the old standby are not copied.\n> > + Only logical slots on the primary are migrated to the new standby,\n> > + and other slots must be recreated.\n> >\n> > This paragraph should be rephrased. I mean first stating that\n> > \"Replication slots on the old standby are not copied\" and then saying\n> > Only logical slots are migrated doesn't seem like the best way. Maybe\n> > we can just say \"Only logical slots on the primary are migrated to the\n> > new standby, and other slots must be recreated.\"\n> >\n>\n> It is fine to combine these sentences but let's be a bit more\n> explicit: \"Only logical slots on the primary are migrated to the new\n> standby, and other slots on the old standby must be recreated as they\n> are not copied.\"\n\nFine with this.\n\n> > 4.\n> > + /*\n> > + * Raise an ERROR if the logical replication slot is invalidating. It\n> > + * would not happen because max_slot_wal_keep_size is set to -1 during\n> > + * the upgrade, but it stays safe.\n> > + */\n> > + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> > + elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\n> >\n> > Rephrase the first line as -> Raise an ERROR if the logical\n> > replication slot is invalidating during an upgrade.\n> >\n>\n> I think it would be better to write something like: \"The logical\n> replication slots shouldn't be invalidated as max_slot_wal_keep_size\n> is set to -1 during the upgrade.\"\n\nThis makes it much clear.\n\n> > 5.\n> > + /* Logical slots can be migrated since PG17. */\n> > + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> > + return;\n> >\n> >\n> > For readability change this to if\n> > (GET_MAJOR_VERSION(old_cluster.major_version) < 1700), because in most\n> > of the checks related to this, we are using 1700 so better be\n> > consistent in this.\n> >\n>\n> But the current check is consistent with what we do at other places\n> during the upgrade. I think the patch is trying to be consistent with\n> existing code as much as possible.\n\nOkay, I see. Thanks for pointing that out.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Sep 2023 11:48:20 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 6:31 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for reviewing! PSA new version! PSA new version.\n>\n\nFew comments:\n==============\n1.\n+ <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>confirmed_flush_lsn</structfield>\n+ of all slots on the old cluster must be the same as the latest\n+ checkpoint location.\n\nWe can add something like: \"This ensures that all the data has been\nreplicated before the upgrade.\" to make it clear why this test is\nimportant.\n\n2. Move the wal_level related restriction before max_replication_slots.\n\n3.\n+ /* Is the slot still usable? */\n+ if (slot->invalid)\n+ {\n+ if (script == NULL &&\n+ (script = fopen_priv(output_path, \"w\")) == NULL)\n+ pg_fatal(\"could not open file \\\"%s\\\": %s\",\n+ output_path, strerror(errno));\n+\n+ fprintf(script,\n+ \"slotname :%s\\tproblem: The slot is unusable\\n\",\n+ slot->slotname);\n+ }\n+\n+ /*\n+ * Do additional checks to ensure that confirmed_flush LSN of all\n+ * the slots is the same as the latest checkpoint location.\n+ *\n+ * Note: This can be satisfied only when the old cluster has been\n+ * shut down, so we skip this for live checks.\n+ */\n+ if (!live_check && !slot->caught_up)\n\nIsn't it better to continue for the next slot once we find that slot\nis invalid instead of checking other conditions?\n\n4.\n+\n+ fprintf(script,\n+ \"slotname :%s\\tproblem: The slot is unusable\\n\",\n+ slot->slotname);\n\nLet's keep it as one string and change the message to: \"The slot\n\"\\\"%s\\\" is invalid\"\n\n+ fprintf(script,\n+ \"slotname :%s\\tproblem: The slot has not consumed WALs yet\\n\",\n+ slot->slotname);\n+ }\n\nOn a similar line, we can change this to: \"The slot \"\\\"%s\\\" has not\nconsumed the WAL yet\"\n\n5.\n+ snprintf(output_path, sizeof(output_path), \"%s/%s\",\n+ log_opts.basedir,\n+ \"problematic_logical_relication_slots.txt\");\n\nI think we can name this file as \"invalid_logical_replication_slots\"\nor simply \"logical_replication_slots\"\n\n6.\n+ pg_fatal(\"The source cluster contains one or more problematic\nlogical replication slots.\\n\"\n+ \"The needed workaround depends on the problem.\\n\"\n+ \"1) If the problem is \\\"The slot is unusable,\\\" You can drop such\nreplication slots.\\n\"\n+ \"2) If the problem is \\\"The slot has not consumed WALs yet,\\\" you\ncan consume all remaining WALs.\\n\"\n+ \"Then, you can restart the upgrade.\\n\"\n+ \"A list of problematic logical replication slots is in the file:\\n\"\n+ \" %s\", output_path);\n\nThis doesn't match the similar existing comments. So, let's change it\nto something like:\n\n\"Your installation contains invalid logical replication slots. These\nslots can't be copied so this cluster cannot currently be upgraded.\nConsider either removing such slots or consuming the pending WAL if\nany and then restart the upgrade. A list of invalid logical\nreplication slots is in the file:\"\n\nApart from the above, I have edited a few other comments in the patch.\nSee attached.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 11 Sep 2023 12:23:05 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Dilip,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> \r\n> 1.\r\n> Note that slot restoration must be done after the final pg_resetwal command\r\n> during the upgrade because pg_resetwal will remove WALs that are required by\r\n> the slots. Due to this restriction, the timing of restoring replication slots is\r\n> different from other objects.\r\n> \r\n> This comment in the commit message is confusing. I understand the\r\n> reason but from this, it is not very clear that if resetwal removes\r\n> the WAL we needed then why it is good to create after the resetwal. I\r\n> think we should make it clear that creating new slot will set the\r\n> restart lsn to current WAL location and after that resetwal can remove\r\n> those WAL where slot restart lsn is pointing....\r\n\r\nJust to confirm - WAL records must not be removed in any time if it is referred\r\nas restart_lsn. The reason why the slot creation is done after pg_restwal is that\r\nrequired WALs are not removed by the command. See [1].\r\nMoreover, clarified more in the commit message.\r\n\r\n> 2.\r\n> \r\n> + <itemizedlist>\r\n> + <listitem>\r\n> + <para>\r\n> + All slots on the old cluster must be usable, i.e., there are no slots\r\n> + whose\r\n> + <link\r\n> linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>\r\n> wal_status</structfield>\r\n> + is <literal>lost</literal>.\r\n> + </para>\r\n> + </listitem>\r\n> + <listitem>\r\n> + <para>\r\n> + <link\r\n> linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>c\r\n> onfirmed_flush_lsn</structfield>\r\n> + of all slots on the old cluster must be the same as the latest\r\n> + checkpoint location.\r\n> + </para>\r\n> + </listitem>\r\n> + <listitem>\r\n> + <para>\r\n> + The output plugins referenced by the slots on the old cluster must be\r\n> + installed in the new PostgreSQL executable directory.\r\n> + </para>\r\n> + </listitem>\r\n> + <listitem>\r\n> + <para>\r\n> + The new cluster must have\r\n> + <link\r\n> linkend=\"guc-max-replication-slots\"><varname>max_replication_slots</varna\r\n> me></link>\r\n> + configured to a value greater than or equal to the number of slots\r\n> + present in the old cluster.\r\n> + </para>\r\n> + </listitem>\r\n> + <listitem>\r\n> + <para>\r\n> + The new cluster must have\r\n> + <link\r\n> linkend=\"guc-wal-level\"><varname>wal_level</varname></link> as\r\n> + <literal>logical</literal>.\r\n> + </para>\r\n> + </listitem>\r\n> + </itemizedlist>\r\n> \r\n> I think we should also add that the new slot should not have any\r\n> permanent existing logical replication slot.\r\n\r\nHmm, I wondered it should be really needed. Tables are required not to be in the\r\nnew cluster too, but not documented. It might be a trivial thing. Anyway, added.\r\n\r\nFYI - the restriction was not introduced by the patch. I reported independently [2],\r\nbut no one has responded since now...\r\n\r\n> 3.\r\n> - with the primary.) Replication slots are not copied and must\r\n> - be recreated.\r\n> + with the primary.) Replication slots on the old standby are not copied.\r\n> + Only logical slots on the primary are migrated to the new standby,\r\n> + and other slots must be recreated.\r\n> \r\n> This paragraph should be rephrased. I mean first stating that\r\n> \"Replication slots on the old standby are not copied\" and then saying\r\n> Only logical slots are migrated doesn't seem like the best way. 
Maybe\r\n> we can just say \"Only logical slots on the primary are migrated to the\r\n> new standby, and other slots must be recreated.\"\r\n\r\nPer discussion on [3], I used another words. Thanks for suggesting.\r\n\r\n> 4.\r\n> + /*\r\n> + * Raise an ERROR if the logical replication slot is invalidating. It\r\n> + * would not happen because max_slot_wal_keep_size is set to -1 during\r\n> + * the upgrade, but it stays safe.\r\n> + */\r\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\r\n> + elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\r\n> \r\n> Rephrase the first line as -> Raise an ERROR if the logical\r\n> replication slot is invalidating during an upgrade.\r\n\r\nPer discussion on [3], I used another words. Thanks for suggesting.\r\n\r\n> 5.\r\n> + /* Logical slots can be migrated since PG17. */\r\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\r\n> + return;\r\n> \r\n> \r\n> For readability change this to if\r\n> (GET_MAJOR_VERSION(old_cluster.major_version) < 1700), because in most\r\n> of the checks related to this, we are using 1700 so better be\r\n> consistent in this.\r\n\r\nPer discussion on [3], I did not change here.\r\n\r\n> 6.\r\n> + if (nslots_on_new)\r\n> + pg_fatal(ngettext(\"New cluster must not have logical replication\r\n> slots but found %d slot.\",\r\n> + \"New cluster must not have logical replication slots but found %d slots.\",\r\n> + nslots_on_new),\r\n> + nslots_on_new);\r\n> ...\r\n> + if (PQntuples(res) != 1)\r\n> + pg_fatal(\"could not determine wal_level.\");\r\n> +\r\n> + wal_level = PQgetvalue(res, 0, 0);\r\n> +\r\n> + if (strcmp(wal_level, \"logical\") != 0)\r\n> + pg_fatal(\"wal_level must be \\\"logical\\\", but is set to \\\"%s\\\"\",\r\n> + wal_level);\r\n> \r\n> \r\n> I have noticed that the case of the first letter in the pg_fatal\r\n> message is not consistent.\r\n\r\nActually there are some inconsistency even in the check.c file, so I devised\r\nbelow rules. How do you think?\r\n\r\n* Non-complete sentence starts with the lower case.\r\n (e.g., \"could not open\", \"could not determine\")\r\n* proper nouns are always noted with the lower cases\r\n (e.g., \"template0 must not allow...\", \"wal_level must be...\").\r\n* Other than above, the sentence starts with the upper case.\r\n\r\n> 7.\r\n> +\r\n> + /* Is the slot still usable? */\r\n> + if (slot->invalid)\r\n> + {\r\n> \r\n> Why comment says \"Is the slot still usable?\" I think it should be \"Is\r\n> the slot usable?\" otherwise it appears that we have first fetched the\r\n> slots and now we are refetching it and checking whether it is still\r\n> usable.\r\n\r\nChanged.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58664C81887B3AF2EB6B16E3F5939%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB5866D277F6BEDEA4223B3559F5E6A@TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[3]: https://www.postgresql.org/message-id/CAFiTN-vs53SqZiZN1GcSuKLmMY%3D0d14wJDDm1aKmoBONwnqaGg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 11 Sep 2023 13:21:39 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for giving a suggestion!\r\n\r\n> >\r\n> > > 2. Why get_old_cluster_logical_slot_infos() need to use\r\n> > > pg_malloc_array whereas for similar stuff get_rel_infos() use\r\n> > > pg_malloc()?\r\n> >\r\n> > They did a same thing. I used pg_malloc_array() macro to keep the code\r\n> > within 80 columns.\r\n> >\r\n> \r\n> I think it is better to be consistent with the existing code in this\r\n> case. Also, see, if the usage in get_loadable_libraries() can also be\r\n> changed back to use pg_malloc().\r\n\r\nFixed as you said. The line becomes too long, so a variable was newly introduced.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 11 Sep 2023 13:21:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing!\r\n\r\n> Few comments:\r\n> ==============\r\n> 1.\r\n> + <link\r\n> linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>c\r\n> onfirmed_flush_lsn</structfield>\r\n> + of all slots on the old cluster must be the same as the latest\r\n> + checkpoint location.\r\n> \r\n> We can add something like: \"This ensures that all the data has been\r\n> replicated before the upgrade.\" to make it clear why this test is\r\n> important.\r\n\r\nAdded.\r\n\r\n> 2. Move the wal_level related restriction before max_replication_slots.\r\n> \r\n> 3.\r\n> + /* Is the slot still usable? */\r\n> + if (slot->invalid)\r\n> + {\r\n> + if (script == NULL &&\r\n> + (script = fopen_priv(output_path, \"w\")) == NULL)\r\n> + pg_fatal(\"could not open file \\\"%s\\\": %s\",\r\n> + output_path, strerror(errno));\r\n> +\r\n> + fprintf(script,\r\n> + \"slotname :%s\\tproblem: The slot is unusable\\n\",\r\n> + slot->slotname);\r\n> + }\r\n> +\r\n> + /*\r\n> + * Do additional checks to ensure that confirmed_flush LSN of all\r\n> + * the slots is the same as the latest checkpoint location.\r\n> + *\r\n> + * Note: This can be satisfied only when the old cluster has been\r\n> + * shut down, so we skip this for live checks.\r\n> + */\r\n> + if (!live_check && !slot->caught_up)\r\n> \r\n> Isn't it better to continue for the next slot once we find that slot\r\n> is invalid instead of checking other conditions?\r\n\r\nRight, fixed.\r\n\r\n> 4.\r\n> +\r\n> + fprintf(script,\r\n> + \"slotname :%s\\tproblem: The slot is unusable\\n\",\r\n> + slot->slotname);\r\n> \r\n> Let's keep it as one string and change the message to: \"The slot\r\n> \"\\\"%s\\\" is invalid\"\r\n\r\nChanged.\r\n\r\n> + fprintf(script,\r\n> + \"slotname :%s\\tproblem: The slot has not consumed WALs yet\\n\",\r\n> + slot->slotname);\r\n> + }\r\n> \r\n> On a similar line, we can change this to: \"The slot \"\\\"%s\\\" has not\r\n> consumed the WAL yet\"\r\n\r\nChanged.\r\n\r\n> 5.\r\n> + snprintf(output_path, sizeof(output_path), \"%s/%s\",\r\n> + log_opts.basedir,\r\n> + \"problematic_logical_relication_slots.txt\");\r\n> \r\n> I think we can name this file as \"invalid_logical_replication_slots\"\r\n> or simply \"logical_replication_slots\"\r\n\r\nThe latter one seems too general for me, \"invalid_...\" was chosen.\r\n\r\n> 6.\r\n> + pg_fatal(\"The source cluster contains one or more problematic\r\n> logical replication slots.\\n\"\r\n> + \"The needed workaround depends on the problem.\\n\"\r\n> + \"1) If the problem is \\\"The slot is unusable,\\\" You can drop such\r\n> replication slots.\\n\"\r\n> + \"2) If the problem is \\\"The slot has not consumed WALs yet,\\\" you\r\n> can consume all remaining WALs.\\n\"\r\n> + \"Then, you can restart the upgrade.\\n\"\r\n> + \"A list of problematic logical replication slots is in the file:\\n\"\r\n> + \" %s\", output_path);\r\n> \r\n> This doesn't match the similar existing comments. So, let's change it\r\n> to something like:\r\n> \r\n> \"Your installation contains invalid logical replication slots. These\r\n> slots can't be copied so this cluster cannot currently be upgraded.\r\n> Consider either removing such slots or consuming the pending WAL if\r\n> any and then restart the upgrade. 
A list of invalid logical\r\n> replication slots is in the file:\"\r\n\r\nBasically changed to your suggestion, but slightly reworded based on\r\nwhat Grammarly said.\r\n\r\n> Apart from the above, I have edited a few other comments in the patch.\r\n> See attached.\r\n\r\nThanks for attaching! Included.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 11 Sep 2023 13:22:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san, here are my review comments for v34-0002\n\nThere is likely to be some overlap because others have modified and/or\ncommented on some of the same points as me, and v35 was already posted\nbefore this review. I'll leave it to you to sort out any clashes and\nignore them where appropriate.\n\n======\n1. GENERAL -- Cluster Terminology\n\nThis is not really a problem of your patch, but during message review,\nI noticed the terms old/new cluster VERSUS source/target cluster and\nboth were used many times:\n\nFor example.\n\".*new clusmter --> 44 occurences\n\".*old cluster --> 21 occurences\n\".*source cluster --> 6 occurences\n\".*target cluster --> 12 occurences\n\nPerhaps there should be a new thread/patch to use consistent terms.\n\nThoughts?\n\n~~~\n\n2. GENERAL - Error message cases\n\nJust FYI, there are many inconsistent capitalising in these patch\nmessages, but then the same is also true for the HEAD code. It's a bit\nmessy, but generally, I think your capitalisation was aligned with\nwhat I saw in HEAD, so I didn't comment anywhere about it.\n\n======\nsrc/backend/replication/slot.c\n\n3. InvalidatePossiblyObsoleteSlot\n\n+ /*\n+ * Raise an ERROR if the logical replication slot is invalidating. It\n+ * would not happen because max_slot_wal_keep_size is set to -1 during\n+ * the upgrade, but it stays safe.\n+ */\n+ if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n+ elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\n\n3a.\nThat comment didn't seem good. I think you mean like in the suggestion below.\n\nSUGGESTION\nIt should not be possible for logical replication slots to be\ninvalidated because max_slot_wal_keep_size is set to -1 during the\nupgrade. The following is just for sanity-checking.\n\n~\n\n3b.\nI wasn't sure if 'max_slot_wal_keep_size' GUC is accessible in this\nscope, but if it is available then maybe\nAssert(max_slot_wal_keep_size_mb == -1); should also be included in\nthis sanity check.\n\n======\nsrc/bin/pg_upgrade/check.c\n\n4. check_new_cluster_logical_replication_slots\n\n+ conn = connectToServer(&new_cluster, \"template1\");\n+\n+ prep_status(\"Checking for logical replication slots\");\n\nThere is some inconsistency with all the subsequent pg_fatals within\nthis function -- some of them mention \"New cluster\" but most of them\ndo not.\n\nMeanwhile, Kuroda-san showed me sample output like:\n\nChecking for presence of required libraries ok\nChecking database user is the install user ok\nChecking for prepared transactions ok\nChecking for new cluster tablespace directories ok\nChecking for logical replication slots\nNew cluster must not have logical replication slots but found 1 slot.\nFailure, exiting\n\nSo, I felt the log message title (\"Checking...\") should be changed to\ninclude the words \"new cluster\" just like the log preceding it:\n\n\"Checking for logical replication slots\" ==> \"Checking for new cluster\nlogical replication slots\"\n\nNow all the subsequent pg_fatals clearly are for \"new cluster\"\n\n~\n\n5. check_new_cluster_logical_replication_slots\n\n+ if (nslots_on_new)\n+ pg_fatal(ngettext(\"New cluster must not have logical replication\nslots but found %d slot.\",\n+ \"New cluster must not have logical replication slots but found %d slots.\",\n+ nslots_on_new),\n+ nslots_on_new);\n\n5a.\nTBH, I didn't see why you go to unnecessary trouble to have a plural\nmessage here. 
The message could just be like:\n\"New cluster must have 0 logical replication slots but found %d.\"\n\n~\n\n5b.\nHowever, now (from the previous review comment #4) if \"New cluster\" is\nalready explicit in the log, the pg_fatal message can become just:\n\"New cluster must have ...\" ==> \"Expected 0 logical replication slots\nbut found %d.\"\n\n~~~\n\n6. check_old_cluster_for_valid_slots\n\n+ if (script)\n+ {\n+ fclose(script);\n+\n+ pg_log(PG_REPORT, \"fatal\");\n+ pg_fatal(\"The source cluster contains one or more problematic\nlogical replication slots.\\n\"\n+ \"The needed workaround depends on the problem.\\n\"\n+ \"1) If the problem is \\\"The slot is unusable,\\\" You can drop such\nreplication slots.\\n\"\n+ \"2) If the problem is \\\"The slot has not consumed WALs yet,\\\" you\ncan consume all remaining WALs.\\n\"\n+ \"Then, you can restart the upgrade.\\n\"\n+ \"A list of problematic logical replication slots is in the file:\\n\"\n+ \" %s\", output_path);\n+ }\n\nThis needs fixing but I saw it has been updated in v35, so I'll check\nit there later.\n\n======\nsrc/bin/pg_upgrade/info.c\n\n7. get_db_rel_and_slot_infos\n\nvoid\nget_db_rel_and_slot_infos(ClusterInfo *cluster)\n{\nint dbnum;\n\nif (cluster->dbarr.dbs != NULL)\nfree_db_and_rel_infos(&cluster->dbarr);\n\n~\n\nJudging from the HEAD code this function was intended to be reentrant\n-- e.g. it does cleanup code free_db_and_rel_infos in case there was\nsomething there from before.\n\nIIUC there is no such cleanup for the slot_arr. I forget why this was\nremoved. Sure, you might be able to survive the memory leaks, but\nchoosing NOT to clean up the slot_arr seems to contradict the\nintention of HEAD calling free_db_and_rel_infos.\n\n~~~\n\n8. get_db_infos\n\nI noticed the pg_malloc0 is reverted in this function.\n\n- dbinfos = (DbInfo *) pg_malloc(sizeof(DbInfo) * ntups);\n+ dbinfos = (DbInfo *) pg_malloc0(sizeof(DbInfo) * ntups);\n\nIMO it is better to do pg_malloc0 here.\n\nSure, everything probably works OK for the current code, but it seems\nunnecessarily risky to assume that functions will forever be called in\na specific order. AFAICT if someone (e.g. for debugging) calls\ncount_old_cluster_logical_slots() or calls print_slot_infos() then the\nbehaviour is undefined because slot_arr.nslots remains uninitialized.\n\n~~~\n\n9. get_old_cluster_logical_slot_infos\n\n+ i_slotname = PQfnumber(res, \"slot_name\");\n+ i_plugin = PQfnumber(res, \"plugin\");\n+ i_twophase = PQfnumber(res, \"two_phase\");\n+ i_caught_up = PQfnumber(res, \"caught_up\");\n+ i_invalid = PQfnumber(res, \"conflicting\");\n\nIMO SQL should be using an alias for this column, so you can say:\ni_invalid = PQfnumber(res, \"invalid\")\n\nwhich seems better than switching the wording in code.\n\n======\nsrc/bin/pg_upgrade/pg_upgrade.h\n\n10. LogicalSlotInfo\n\n+typedef struct\n+{\n+ char *slotname; /* slot name */\n+ char *plugin; /* plugin */\n+ bool two_phase; /* can the slot decode 2PC? */\n+ bool caught_up; /* Is confirmed_flush_lsn the same as latest\n+ * checkpoint LSN? */\n+ bool invalid; /* Is the slot usable? */\n+} LogicalSlotInfo;\n\n~\n\n+ bool invalid; /* Is the slot usable? */\nThis field name and comment have opposite meanings. Invalid means NOT usable.\n\nSUGGESTION\n/* If true, the slot is unusable. */\n\n======\nsrc/bin/pg_upgrade/server.c\n\n11. start_postmaster\n\n * we only modify the new cluster, so only use it there. If there is a\n * crash, the new cluster has to be recreated anyway. 
fsync=off is a big\n * win on ext4.\n+ *\n+ * Also, the max_slot_wal_keep_size is set to -1 to prevent the WAL removal\n+ * required by logical slots. The setting could avoid the invalidation of\n+ * slots during the upgrade.\n */\n~\n\nIMO this comment \"to prevent the WAL removal required by logical\nslots\" is ambiguous about how it could be interpreted. Needs\nrearranging for clarity.\n\n~\n\n12. start_postmaster\n\n (cluster == &new_cluster) ?\n- \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off\" : \"\",\n+ \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c\nmax_slot_wal_keep_size=-1 \" :\n+ \" -c max_slot_wal_keep_size=-1\",\n\nInstead of putting the same option on both sides of the ternary, I was\nwondering if it might be better to hardwire the max_slot_wal_keep_size\njust 1 time in the format string?\n\n======\n.../pg_upgrade/t/003_logical_replication_slots.pl\n\n13.\n# Remove the remained slot\n\n/remained/remaining/\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 12 Sep 2023 09:56:14 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san.\n\nHere are some additional review comments for v35-0002 (and because we\noverlapped, my v34-0002 review comments have not been addressed yet)\n\n======\nCommit message\n\n1.\nNote that the pg_resetwal command would remove WAL files, which are required as\nrestart_lsn. If WALs required by logical replication slots are removed, they are\nunusable. Therefore, during the upgrade, slot restoration is done\nafter the final\npg_resetwal command. The workflow ensures that required WALs are remained.\n\n~\n\nSUGGESTION (minor wording and /required as/required for/ and\n/remained/retained/)\nNote that the pg_resetwal command would remove WAL files, which are\nrequired for restart_lsn. If WALs required by logical replication\nslots are removed, the slots are unusable. Therefore, during the\nupgrade, slot restoration is done after the final pg_resetwal command.\nThe workflow ensures that required WALs are retained.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n2.\nThe SGML is mal-formed so I am unable to build PG DOCS. Please try\nbuilding the docs before posting the patch.\n\nref/pgupgrade.sgml:446: parser error : Opening and ending tag\nmismatch: itemizedlist line 410 and listitem\n </listitem>\n ^\n\n~~~\n\n3.\n+ <listitem>\n+ <para>\n+ The new cluster must not have permanent logical replication slots, i.e.,\n+ there are no slots whose\n+ <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>temporary</structfield>\n+ is <literal>false</literal>.\n+ </para>\n+ </listitem>\n\n/there are no slots whose/there must be no slots where/\n\n~~~\n\n4.\n or take a file system backup as the standbys are still synchronized\n- with the primary.) Replication slots are not copied and must\n- be recreated.\n+ with the primary.) Only logical slots on the primary are migrated to the\n+ new standby, and other slots on the old standby must be recreated as\n+ they are not copied.\n </para>\n\nMixing the terms \"migrated\" and \"copied\" seems to complicate this.\nDoes the following suggestion work better instead?\n\nSUGGESTION (??)\nOnly logical slots on the primary are migrated to the new standby. Any\nother slots present on the old standby must be recreated.\n\n======\nsrc/backend/replication/slot.c\n\n5. InvalidatePossiblyObsoleteSlot\n\n+ /*\n+ * The logical replication slots shouldn't be invalidated as\n+ * max_slot_wal_keep_size is set to -1 during the upgrade.\n+ */\n+ if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n+ elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\n+\n\nI felt the comment could have another sentence like \"The following is\njust a sanity check.\"\n\n======\nsrc/bin/pg_upgrade/function.c\n\n6. get_loadable_libraries\n\n+ array_size = totaltups + count_old_cluster_logical_slots();\n+ os_info.libraries = (LibraryInfo *) pg_malloc(sizeof(LibraryInfo) *\n(array_size));\n totaltups = 0;\n\n6a.\nMaybe something like 'n_libinfos' would be a more meaningful name than\n'array_size'?\n\n~\n\n6b.\n+ os_info.libraries = (LibraryInfo *) pg_malloc(sizeof(LibraryInfo) *\n(array_size));\n\nThose extra parentheses around \"(array_size)\" seem overkill.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 12 Sep 2023 12:06:31 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Monday, September 11, 2023 9:22 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n>\r\n> Thank you for reviewing! PSA new version.\r\n\r\nThanks for updating the patch, few cosmetic comments:\r\n\r\n1.\r\n\r\n #include \"access/transam.h\"\r\n #include \"catalog/pg_language_d.h\"\r\n+#include \"fe_utils/string_utils.h\"\r\n #include \"pg_upgrade.h\"\r\n\r\nIt seems we don't need this head file anymore.\r\n\r\n\r\n2.\r\n+\t\tif (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\r\n+\t\t\telog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\r\n\r\nI think normally the first letter is lowercase, and we can avoid the period.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 12 Sep 2023 02:33:25 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 02:33:25AM +0000, Zhijie Hou (Fujitsu) wrote:\n> 2.\n> +\t\tif (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> +\t\t\telog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\n> \n> I think normally the first letter is lowercase, and we can avoid the period.\n\nDocumentation is your friend:\nhttps://www.postgresql.org/docs/current/error-style-guide.html\n--\nMichael",
"msg_date": "Tue, 12 Sep 2023 12:57:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! Before posting new patch set, I want to respond some\r\ncomments.\r\n\r\n> \r\n> ======\r\n> 1. GENERAL -- Cluster Terminology\r\n> \r\n> This is not really a problem of your patch, but during message review,\r\n> I noticed the terms old/new cluster VERSUS source/target cluster and\r\n> both were used many times:\r\n> \r\n> For example.\r\n> \".*new clusmter --> 44 occurences\r\n> \".*old cluster --> 21 occurences\r\n> \".*source cluster --> 6 occurences\r\n> \".*target cluster --> 12 occurences\r\n> \r\n> Perhaps there should be a new thread/patch to use consistent terms.\r\n> \r\n> Thoughts?\r\n\r\nI preferred the term new/old because I could not found the term source/target\r\nin the documentation for the pg_upgrade. (IIUC I used new/old in my patch).\r\nAnyway, it should be discussed in another thread.\r\n\r\n> 2. GENERAL - Error message cases\r\n> \r\n> Just FYI, there are many inconsistent capitalising in these patch\r\n> messages, but then the same is also true for the HEAD code. It's a bit\r\n> messy, but generally, I think your capitalisation was aligned with\r\n> what I saw in HEAD, so I didn't comment anywhere about it.\r\n\r\nYeah, the rule is broken even in HEAD. I determined a rule in [1], which seems\r\nconsistent with other parts in the file.\r\nMichael kindly told the error message formatting [2], and basically it follows the\r\nstyle. (IIUC pg_fatal(\"Your installation...\") is followed the\r\n\"Detail and hint messages\" rule.)\r\n\r\n> ======\r\n> src/bin/pg_upgrade/info.c\r\n> \r\n> 7. get_db_rel_and_slot_infos\r\n> \r\n> void\r\n> get_db_rel_and_slot_infos(ClusterInfo *cluster)\r\n> {\r\n> int dbnum;\r\n> \r\n> if (cluster->dbarr.dbs != NULL)\r\n> free_db_and_rel_infos(&cluster->dbarr);\r\n> \r\n> ~\r\n> \r\n> Judging from the HEAD code this function was intended to be reentrant\r\n> -- e.g. it does cleanup code free_db_and_rel_infos in case there was\r\n> something there from before.\r\n> \r\n> IIUC there is no such cleanup for the slot_arr. I forget why this was\r\n> removed. Sure, you might be able to survive the memory leaks, but\r\n> choosing NOT to clean up the slot_arr seems to contradict the\r\n> intention of HEAD calling free_db_and_rel_infos.\r\n\r\nfree_db_and_rel_infos() is called if get_db_rel_and_slot_infos() is called\r\nseveral times for the same cluster. Followings are callers: \r\n\r\n* check_and_dump_old_cluster(), target is old_cluster\r\n* check_new_cluster(), target is new_cluster\r\n* create_new_objects(), target is new_cluster\r\n\r\nAnd we requires that new_cluster must not have logical slots, this restriction\r\ncannot ease. Therefore, there are no possibilities slot_arr must be free()'d,\r\nso that I removed (See similar discussion [3]). I think we should not add no-op codes.\r\nIn old version there was an Assert() instead, but removed based on the comment [4].\r\n\r\n> 8. get_db_infos\r\n> \r\n> I noticed the pg_malloc0 is reverted in this function.\r\n> \r\n> - dbinfos = (DbInfo *) pg_malloc(sizeof(DbInfo) * ntups);\r\n> + dbinfos = (DbInfo *) pg_malloc0(sizeof(DbInfo) * ntups);\r\n> \r\n> IMO it is better to do pg_malloc0 here.\r\n> \r\n> Sure, everything probably works OK for the current code,\r\n\r\nYes, it works well. No one checks slot_arr before\r\nget_old_cluster_logical_slot_infos(). 
In the old version, it was checked like\r\n(slot_arr == NULL) in free_db_and_rel_infos(), but that was removed.\r\n\r\n> but it seems\r\n> unnecessarily risky to assume that functions will forever be called in\r\n> a specific order. AFAICT if someone (e.g. for debugging) calls\r\n> count_old_cluster_logical_slots() or calls print_slot_infos() then the\r\n> behaviour is undefined because slot_arr.nslots remains uninitialized.\r\n\r\n\r\nHmm, I do not think such an assumption is needed. In the current code pg_malloc() is\r\nused in get_db_infos(), so there is a possibility that print_rel_infos() is\r\nexecuted for debugging. The behavior is undefined - this is the same as you said,\r\nand the code has been that way all along. Based on that I think we can accept the\r\nrisk and reduce operations instead. If you know of other examples, please share them here...\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB586642D33208D190F67CDD7BF5F2A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/docs/devel/error-style-guide.html#ERROR-STYLE-GUIDE-GRAMMAR-PUNCTUATION\r\n[3]: https://www.postgresql.org/message-id/TYAPR01MB5866732D30ABB976992BDECCF5789%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[4]: https://www.postgresql.org/message-id/OS0PR01MB5716670FE547BA87FDEF895E94EDA%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 12 Sep 2023 07:04:23 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Michael,\n\n> On Tue, Sep 12, 2023 at 02:33:25AM +0000, Zhijie Hou (Fujitsu) wrote:\n> > 2.\n> > +\t\tif (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> > +\t\t\telog(ERROR, \"Replication slots must not be invalidated\n> during the upgrade.\");\n> >\n> > I think normally the first letter is lowercase, and we can avoid the period.\n> \n> Documentation is your friend:\n> https://www.postgresql.org/docs/current/error-style-guide.html\n\nThank you for the information! It is quite helpful for me.\n(Some fatal errors started with capital character like \"Your installation contains...\",\nbut I regarded them as the detail or hint message.)\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 12 Sep 2023 08:10:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> src/backend/replication/slot.c\r\n> \r\n> 3. InvalidatePossiblyObsoleteSlot\r\n> \r\n> + /*\r\n> + * Raise an ERROR if the logical replication slot is invalidating. It\r\n> + * would not happen because max_slot_wal_keep_size is set to -1 during\r\n> + * the upgrade, but it stays safe.\r\n> + */\r\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\r\n> + elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\r\n> \r\n> 3a.\r\n> That comment didn't seem good. I think you mean like in the suggestion below.\r\n> \r\n> SUGGESTION\r\n> It should not be possible for logical replication slots to be\r\n> invalidated because max_slot_wal_keep_size is set to -1 during the\r\n> upgrade. The following is just for sanity-checking.\r\n\r\nThis part was updated in v35. Please tell me if current version is still bad...\r\n\r\n> 3b.\r\n> I wasn't sure if 'max_slot_wal_keep_size' GUC is accessible in this\r\n> scope, but if it is available then maybe\r\n> Assert(max_slot_wal_keep_size_mb == -1); should also be included in\r\n> this sanity check.\r\n\r\nIIUC, guc parameters are visible from all the postgres processes.\r\nAdded.\r\n\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 4. check_new_cluster_logical_replication_slots\r\n> \r\n> + conn = connectToServer(&new_cluster, \"template1\");\r\n> +\r\n> + prep_status(\"Checking for logical replication slots\");\r\n> \r\n> There is some inconsistency with all the subsequent pg_fatals within\r\n> this function -- some of them mention \"New cluster\" but most of them\r\n> do not.\r\n> \r\n> Meanwhile, Kuroda-san showed me sample output like:\r\n> \r\n> Checking for presence of required libraries ok\r\n> Checking database user is the install user ok\r\n> Checking for prepared transactions ok\r\n> Checking for new cluster tablespace directories ok\r\n> Checking for logical replication slots\r\n> New cluster must not have logical replication slots but found 1 slot.\r\n> Failure, exiting\r\n> \r\n> So, I felt the log message title (\"Checking...\") should be changed to\r\n> include the words \"new cluster\" just like the log preceding it:\r\n> \r\n> \"Checking for logical replication slots\" ==> \"Checking for new cluster\r\n> logical replication slots\"\r\n> \r\n> Now all the subsequent pg_fatals clearly are for \"new cluster\"\r\n\r\nChanged.\r\n\r\n> 5. check_new_cluster_logical_replication_slots\r\n> \r\n> + if (nslots_on_new)\r\n> + pg_fatal(ngettext(\"New cluster must not have logical replication\r\n> slots but found %d slot.\",\r\n> + \"New cluster must not have logical replication slots but found %d slots.\",\r\n> + nslots_on_new),\r\n> + nslots_on_new);\r\n> \r\n> 5a.\r\n> TBH, I didn't see why you go to unnecessary trouble to have a plural\r\n> message here. The message could just be like:\r\n> \"New cluster must have 0 logical replication slots but found %d.\"\r\n> \r\n> ~\r\n> \r\n> 5b.\r\n> However, now (from the previous review comment #4) if \"New cluster\" is\r\n> already explicit in the log, the pg_fatal message can become just:\r\n> \"New cluster must have ...\" ==> \"Expected 0 logical replication slots\r\n> but found %d.\"\r\n\r\nBasically it's better. But the initial character should be lower case and period\r\nis not needed. Modified like that.\r\n\r\n> 9. 
get_old_cluster_logical_slot_infos\r\n> \r\n> + i_slotname = PQfnumber(res, \"slot_name\");\r\n> + i_plugin = PQfnumber(res, \"plugin\");\r\n> + i_twophase = PQfnumber(res, \"two_phase\");\r\n> + i_caught_up = PQfnumber(res, \"caught_up\");\r\n> + i_invalid = PQfnumber(res, \"conflicting\");\r\n> \r\n> IMO SQL should be using an alias for this column, so you can say:\r\n> i_invalid = PQfnumber(res, \"invalid\")\r\n> \r\n> which seems better than switching the wording in code.\r\n\r\nModified. The argument of PQfnumber() must be same as the column name, so the\r\nword \"as invalid\" was added to SQL.\r\n\r\n> src/bin/pg_upgrade/pg_upgrade.h\r\n> \r\n> 10. LogicalSlotInfo\r\n> \r\n> +typedef struct\r\n> +{\r\n> + char *slotname; /* slot name */\r\n> + char *plugin; /* plugin */\r\n> + bool two_phase; /* can the slot decode 2PC? */\r\n> + bool caught_up; /* Is confirmed_flush_lsn the same as latest\r\n> + * checkpoint LSN? */\r\n> + bool invalid; /* Is the slot usable? */\r\n> +} LogicalSlotInfo;\r\n> \r\n> ~\r\n> \r\n> + bool invalid; /* Is the slot usable? */\r\n> This field name and comment have opposite meanings. Invalid means NOT usable.\r\n> \r\n> SUGGESTION\r\n> /* If true, the slot is unusable. */\r\n\r\nFixed.\r\n\r\n> src/bin/pg_upgrade/server.c\r\n> \r\n> 11. start_postmaster\r\n> \r\n> * we only modify the new cluster, so only use it there. If there is a\r\n> * crash, the new cluster has to be recreated anyway. fsync=off is a big\r\n> * win on ext4.\r\n> + *\r\n> + * Also, the max_slot_wal_keep_size is set to -1 to prevent the WAL removal\r\n> + * required by logical slots. The setting could avoid the invalidation of\r\n> + * slots during the upgrade.\r\n> */\r\n> ~\r\n> \r\n> IMO this comment \"to prevent the WAL removal required by logical\r\n> slots\" is ambiguous about how it could be interpreted. Needs\r\n> rearranging for clarity.\r\n\r\nThe description was changed. How do you think?\r\n\r\n> 12. start_postmaster\r\n> \r\n> (cluster == &new_cluster) ?\r\n> - \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off\" : \"\",\r\n> + \" -c synchronous_commit=off -c fsync=off -c full_page_writes=off -c\r\n> max_slot_wal_keep_size=-1 \" :\r\n> + \" -c max_slot_wal_keep_size=-1\",\r\n> \r\n> Instead of putting the same option on both sides of the ternary, I was\r\n> wondering if it might be better to hardwire the max_slot_wal_keep_size\r\n> just 1 time in the format string?\r\n\r\nFixed.\r\n\r\n> .../pg_upgrade/t/003_logical_replication_slots.pl\r\n> \r\n> 13.\r\n> # Remove the remained slot\r\n> \r\n> /remained/remaining/\r\n\r\nFixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 12 Sep 2023 11:50:22 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing!\r\n\r\n=====\r\n> Commit message\r\n> \r\n> 1.\r\n> Note that the pg_resetwal command would remove WAL files, which are required\r\n> as\r\n> restart_lsn. If WALs required by logical replication slots are removed, they are\r\n> unusable. Therefore, during the upgrade, slot restoration is done\r\n> after the final\r\n> pg_resetwal command. The workflow ensures that required WALs are remained.\r\n> \r\n> ~\r\n> \r\n> SUGGESTION (minor wording and /required as/required for/ and\r\n> /remained/retained/)\r\n> Note that the pg_resetwal command would remove WAL files, which are\r\n> required for restart_lsn. If WALs required by logical replication\r\n> slots are removed, the slots are unusable. Therefore, during the\r\n> upgrade, slot restoration is done after the final pg_resetwal command.\r\n> The workflow ensures that required WALs are retained.\r\n\r\nFixed.\r\n\r\n> doc/src/sgml/ref/pgupgrade.sgml\r\n> \r\n> 2.\r\n> The SGML is mal-formed so I am unable to build PG DOCS. Please try\r\n> building the docs before posting the patch.\r\n> \r\n> ref/pgupgrade.sgml:446: parser error : Opening and ending tag\r\n> mismatch: itemizedlist line 410 and listitem\r\n> </listitem>\r\n> ^\r\n\r\nFixed. Sorry for noise.\r\n\r\n> 3.\r\n> + <listitem>\r\n> + <para>\r\n> + The new cluster must not have permanent logical replication slots, i.e.,\r\n> + there are no slots whose\r\n> + <link\r\n> linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>t\r\n> emporary</structfield>\r\n> + is <literal>false</literal>.\r\n> + </para>\r\n> + </listitem>\r\n> \r\n> /there are no slots whose/there must be no slots where/\r\n\r\nFixed. \r\n\r\n> 4.\r\n> or take a file system backup as the standbys are still synchronized\r\n> - with the primary.) Replication slots are not copied and must\r\n> - be recreated.\r\n> + with the primary.) Only logical slots on the primary are migrated to the\r\n> + new standby, and other slots on the old standby must be recreated as\r\n> + they are not copied.\r\n> </para>\r\n> \r\n> Mixing the terms \"migrated\" and \"copied\" seems to complicate this.\r\n> Does the following suggestion work better instead?\r\n> \r\n> SUGGESTION (??)\r\n> Only logical slots on the primary are migrated to the new standby. Any\r\n> other slots present on the old standby must be recreated.\r\n\r\nHmm, I preferred to use \"copied\". How do you think?\r\n\r\n> src/backend/replication/slot.c\r\n> \r\n> 5. InvalidatePossiblyObsoleteSlot\r\n> \r\n> + /*\r\n> + * The logical replication slots shouldn't be invalidated as\r\n> + * max_slot_wal_keep_size is set to -1 during the upgrade.\r\n> + */\r\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\r\n> + elog(ERROR, \"Replication slots must not be invalidated during the upgrade.\");\r\n> +\r\n> \r\n> I felt the comment could have another sentence like \"The following is\r\n> just a sanity check.\"\r\n\r\nAdded.\r\n\r\n> src/bin/pg_upgrade/function.c\r\n> \r\n> 6. get_loadable_libraries\r\n> \r\n> + array_size = totaltups + count_old_cluster_logical_slots();\r\n> + os_info.libraries = (LibraryInfo *) pg_malloc(sizeof(LibraryInfo) *\r\n> (array_size));\r\n> totaltups = 0;\r\n> \r\n> 6a.\r\n> Maybe something like 'n_libinfos' would be a more meaningful name than\r\n> 'array_size'?\r\n\r\nFixed. 
\r\n\r\n> 6b.\r\n> + os_info.libraries = (LibraryInfo *) pg_malloc(sizeof(LibraryInfo) *\r\n> (array_size));\r\n> \r\n> Those extra parentheses around \"(array_size)\" seem overkill.\r\n\r\nRemoved.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 12 Sep 2023 11:50:30 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\n\nThank you for reviewing!\n\n> 1.\n> \n> #include \"access/transam.h\"\n> #include \"catalog/pg_language_d.h\"\n> +#include \"fe_utils/string_utils.h\"\n> #include \"pg_upgrade.h\"\n> \n> It seems we don't need this head file anymore.\n\nRemoved.\n\n> 2.\n> +\t\tif (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> +\t\t\telog(ERROR, \"Replication slots must not be invalidated\n> during the upgrade.\");\n> \n> I think normally the first letter is lowercase, and we can avoid the period.\n\nRight, fixed. Also, a period is removed based on the rule. Apart from other detailed\nmessages, this just reports what happened.\n\n```\n if (nslots_on_old > max_replication_slots)\n pg_fatal(\"max_replication_slots (%d) must be greater than or equal to the number of \"\n- \"logical replication slots (%d) on the old cluster.\",\n+ \"logical replication slots (%d) on the old cluster\",\n max_replication_slots, nslots_on_old);\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 12 Sep 2023 11:50:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Hi Kuroda-san. Here are my review comments for patch v36-0002.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1.\n Configure the servers for log shipping. (You do not need to run\n <function>pg_backup_start()</function> and\n<function>pg_backup_stop()</function>\n or take a file system backup as the standbys are still synchronized\n- with the primary.) Replication slots are not copied and must\n- be recreated.\n+ with the primary.) Only logical slots on the primary are copied to the\n+ new standby, and other other slots on the old standby must be recreated\n+ as they are not copied.\n </para>\n\nIMO this text still needs some minor changes like shown below, Anyway,\nthere is a typo: /other other/\n\nSUGGESTION\nOnly logical slots on the primary are copied to the new standby, but\nother slots on the old standby are not copied so must be recreated\nmanually.\n\n======\nsrc/bin/pg_upgrade/server.c\n\n2.\n+ *\n+ * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n+ * checkpointer process. If WALs required by logical replication slots are\n+ * removed, the slots are unusable. The setting ensures that such WAL\n+ * records have remained so that invalidation of slots would be avoided\n+ * during the upgrade.\n\nThe comment already explained the reason for the setting is to prevent\nremoving the needed WAL records, so I felt there is no need for the\nlast sentence to repeat the same information.\n\nBEFORE\nThe setting ensures that such WAL records have remained so that\ninvalidation of slots would be avoided during the upgrade.\n\nSUGGESTION\nThis setting prevents the invalidation of slots during the upgrade.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Sep 2023 10:27:35 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 5:20 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n\nFew comments:\n=============\n1. One thing to note is that if user checks whether the old cluster is\nupgradable with --check option and then try to upgrade, that will also\nfail. Because during the --check run there would at least one\nadditional shutdown checkpoint WAL and then in the next run the slots\nposition won't match. Note, I am saying this in context of using\n--check option with not-running old cluster. Won't that be surprising\nto users? One possibility is that we document such a behaviour and\nother is that we go back to WAL reading design where we can ignore\nknown WAL records like shutdown checkpoint, XLOG_RUNNING_XACTS, etc.\n\n2.\n+ /*\n+ * Store the names of output plugins as well. There is a possibility\n+ * that duplicated plugins are set, but the consumer function\n+ * check_loadable_libraries() will avoid checking the same library, so\n+ * we do not have to consider their uniqueness here.\n+ */\n+ for (slotno = 0; slotno < slot_arr->nslots; slotno++)\n+ {\n+ os_info.libraries[totaltups].name = pg_strdup(slot_arr->slots[slotno].plugin);\n\nHere, we should ignore invalid slots.\n\n3.\n+ if (!live_check && !slot->caught_up)\n+ {\n+ if (script == NULL &&\n+ (script = fopen_priv(output_path, \"w\")) == NULL)\n+ pg_fatal(\"could not open file \\\"%s\\\": %s\",\n+ output_path, strerror(errno));\n+\n+ fprintf(script,\n+ \"The slot \\\"%s\\\" has not consumed the WAL yet\\n\",\n+ slot->slotname);\n\nIs it possible to print the LSN locations of slot and last checkpoint?\nI think that will aid in debugging the problems if any and could be\nhelpful to users as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Sep 2023 15:19:24 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! Before making a patch I can reply the important point.\r\n\r\n> 1. One thing to note is that if user checks whether the old cluster is\r\n> upgradable with --check option and then try to upgrade, that will also\r\n> fail. Because during the --check run there would at least one\r\n> additional shutdown checkpoint WAL and then in the next run the slots\r\n> position won't match. Note, I am saying this in context of using\r\n> --check option with not-running old cluster. Won't that be surprising\r\n> to users? One possibility is that we document such a behaviour and\r\n> other is that we go back to WAL reading design where we can ignore\r\n> known WAL records like shutdown checkpoint, XLOG_RUNNING_XACTS, etc.\r\n\r\nGood catch, we have never considered the case that --check is executed for\r\nstopped cluster. You are right, the old cluster is turned on/off during the\r\ncheck and it generates SHUTDOWN_CHECKPOINT. This leads that confirmed_flush is\r\nbehind the latest checkpoint lsn.\r\n\r\nGood catch, we have never considered the case that --check is executed for\r\nstopped cluster. You are right, the old cluster is turned on/off during the\r\ncheck and it generates SHUTDOWN_CHECKPOINT. This leads that confirmed_flush is\r\nbehind the latest checkpoint lsn.\r\n\r\nHere are other approaches we came up with:\r\n\r\n1. adds WARNING message when the --check is executed and slots are checked.\r\n We can say like: \r\n\r\n```\r\n...\r\nChecking for valid logical replication slots \r\nWARNING: this check generated WALs\r\nNext pg_uprade would fail.\r\nPlease ensure again that all WALs are replicated.\r\n...\r\n```\r\n\r\n\r\n2. adds hint message in the FATAL error when the confirmed_flush is not same as\r\n the latest checkpoint:\r\n\r\n```\r\n...\r\nChecking for valid logical replication slots fatal\r\n\r\nYour installation contains invalid logical replication slots.\r\nThese slots can't be copied, so this cluster cannot be upgraded.\r\nConsider removing such slots or consuming the pending WAL if any,\r\nand then restart the upgrade.\r\nIf you did pg_upgrade --check before this run, it may be a cause.\r\nPlease start clusters and confirm again that all changes are\r\nreplicated.\r\nA list of invalid logical replication slots is in the file:\r\n```\r\n\r\n3. requests users to do pg_upgrade --check on backup database, if old cluster\r\n has logical slots. Basically they save a whole of cluster before doing pg_uprade,\r\n so it may be acceptable. This is not a modification of codes.\r\n\r\nHow do others think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 13 Sep 2023 13:52:17 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wednesday, September 13, 2023 9:52 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Dear Amit,\r\n> \r\n> Thank you for reviewing! Before making a patch I can reply the important point.\r\n> \r\n> > 1. One thing to note is that if user checks whether the old cluster is\r\n> > upgradable with --check option and then try to upgrade, that will also\r\n> > fail. Because during the --check run there would at least one\r\n> > additional shutdown checkpoint WAL and then in the next run the slots\r\n> > position won't match. Note, I am saying this in context of using\r\n> > --check option with not-running old cluster. Won't that be surprising\r\n> > to users? One possibility is that we document such a behaviour and\r\n> > other is that we go back to WAL reading design where we can ignore\r\n> > known WAL records like shutdown checkpoint, XLOG_RUNNING_XACTS, etc.\r\n> \r\n> Good catch, we have never considered the case that --check is executed for\r\n> stopped cluster. You are right, the old cluster is turned on/off during the\r\n> check and it generates SHUTDOWN_CHECKPOINT. This leads that\r\n> confirmed_flush is\r\n> behind the latest checkpoint lsn.\r\n> \r\n> Here are other approaches we came up with:\r\n> \r\n> 1. adds WARNING message when the --check is executed and slots are\r\n> checked.\r\n> We can say like:\r\n> \r\n> ```\r\n> ...\r\n> Checking for valid logical replication slots\r\n> WARNING: this check generated WALs\r\n> Next pg_uprade would fail.\r\n> Please ensure again that all WALs are replicated.\r\n> ...\r\n> ```\r\n> \r\n> \r\n> 2. adds hint message in the FATAL error when the confirmed_flush is not same\r\n> as\r\n> the latest checkpoint:\r\n> \r\n> ```\r\n> ...\r\n> Checking for valid logical replication slots fatal\r\n> \r\n> Your installation contains invalid logical replication slots.\r\n> These slots can't be copied, so this cluster cannot be upgraded.\r\n> Consider removing such slots or consuming the pending WAL if any,\r\n> and then restart the upgrade.\r\n> If you did pg_upgrade --check before this run, it may be a cause.\r\n> Please start clusters and confirm again that all changes are\r\n> replicated.\r\n> A list of invalid logical replication slots is in the file:\r\n> ```\r\n> \r\n> 3. requests users to do pg_upgrade --check on backup database, if old cluster\r\n> has logical slots. Basically they save a whole of cluster before doing\r\n> pg_uprade,\r\n> so it may be acceptable. This is not a modification of codes.\r\n> \r\n\r\nHere are some more ideas about the issue for reference.\r\n\r\n1) Extending the controlfile.\r\n\r\nWe can dd a new field (e.g. non_upgrade_checkPoint) to record the last check point\r\nptr happened in non-upgrade mode. The new field won't be updated due to\r\n\"pg_upgrade --check\", so pg_upgrade can use this LSN to compare with the slot's\r\nconfirmed_flush_lsn.\r\n\r\nPros: User can smoothly upgrade the cluster even if they run \"pg_upgrade\r\n--check\" in advance.\r\n\r\nCons: Not sure if this is a enough reason to introduce new field in\r\ncontrolfile.\r\n\r\n-----------\r\n\r\n2) Advance the slot's confirmed_flush_lsn in pg_upgrade if the check passes.\r\n\r\nIntroducing an upgrade support SQL function\r\n(binary_upgrade_advance_logical_slot_lsn()) to set a\r\nflag(catch_confirmed_lsn_up) on server side. 
On server side, when trying to\r\nflush the slot in shutdown checkpoint(CheckPointReplicationSlots), we update\r\nthe slot's confirmed_flush_lsn to the lsn of the current checkpoint if\r\ncatch_confirmed_lsn_up is set.\r\n\r\nPros: User can smoothly upgrade the cluster even if they run \"pg_upgrade\r\n--check\" in advance.\r\n\r\nCons: Although we have some examples for using functions\r\n(binary_upgrade_set_next_pg_enum_oid ...) to set some variables during upgrade\r\n, but not sure if it's a standard behavior to change the slot's lsn during\r\nupgrade.\r\n\r\n-----------\r\n\r\n3) Introduce a new pg_upgrade option(e.g. skip_slot_check), and suggest if user\r\n already did the upgrade check for stopped server, they can use this option\r\n when trying to upgrade later.\r\n\r\nPros: Can save some efforts for user to advance each slot's lsn.\r\n\r\nCons: I didn't see similar options in pg_upgrade, might need some agreement.\r\n\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Thu, 14 Sep 2023 03:10:38 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
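To make idea 2 above more concrete, here is a rough sketch based only on the names mentioned in this message (binary_upgrade_advance_logical_slot_lsn, catch_confirmed_lsn_up, CheckPointReplicationSlots); it is not code from any posted patch, and the exact fields and signatures are assumptions:

```c
/* Hypothetical sketch of idea 2; not taken from any posted patch. */
static bool catch_confirmed_lsn_up = false;

Datum
binary_upgrade_advance_logical_slot_lsn(PG_FUNCTION_ARGS)
{
	CHECK_IS_BINARY_UPGRADE;

	/* Remember that "pg_upgrade --check" has already validated the slots */
	catch_confirmed_lsn_up = true;

	PG_RETURN_VOID();
}

/*
 * Then, while flushing each logical slot in CheckPointReplicationSlots()
 * during the shutdown checkpoint, the slot's confirmed_flush could be bumped
 * to the checkpoint's LSN (variable name assumed):
 */
if (IsBinaryUpgrade && catch_confirmed_lsn_up && SlotIsLogical(s))
	s->data.confirmed_flush = checkpoint_lsn;
```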
{
"msg_contents": "On Wed, Sep 13, 2023 at 7:22 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit,\n>\n> Thank you for reviewing! Before making a patch I can reply the important point.\n>\n> > 1. One thing to note is that if user checks whether the old cluster is\n> > upgradable with --check option and then try to upgrade, that will also\n> > fail. Because during the --check run there would at least one\n> > additional shutdown checkpoint WAL and then in the next run the slots\n> > position won't match. Note, I am saying this in context of using\n> > --check option with not-running old cluster. Won't that be surprising\n> > to users? One possibility is that we document such a behaviour and\n> > other is that we go back to WAL reading design where we can ignore\n> > known WAL records like shutdown checkpoint, XLOG_RUNNING_XACTS, etc.\n>\n> Good catch, we have never considered the case that --check is executed for\n> stopped cluster. You are right, the old cluster is turned on/off during the\n> check and it generates SHUTDOWN_CHECKPOINT. This leads that confirmed_flush is\n> behind the latest checkpoint lsn.\n>\n> Good catch, we have never considered the case that --check is executed for\n> stopped cluster. You are right, the old cluster is turned on/off during the\n> check and it generates SHUTDOWN_CHECKPOINT. This leads that confirmed_flush is\n> behind the latest checkpoint lsn.\n\nGood catch.\n\n> Here are other approaches we came up with:\n>\n> 1. adds WARNING message when the --check is executed and slots are checked.\n> We can say like:\n>\n> ```\n> ...\n> Checking for valid logical replication slots\n> WARNING: this check generated WALs\n> Next pg_uprade would fail.\n> Please ensure again that all WALs are replicated.\n> ...\n\nIMHO the --check is a very common command users execute before the\nactual upgrade. So issuing such a WARNING might not be good because\nthen what option user have? Do they need to again restart the cluster\nin order to stream the new WAL and again shut it down? I don't think\nthat is really an acceptable idea. Maybe as discussed in the past we\ncan provide an option to skip the slot checking and during the --check\ncommand we can give a WARNING and suggest that better to use\n--skip-slot-checking for the main upgrade as we have already checked.\nThis could still be okay for the user.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Sep 2023 09:14:40 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 8:40 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n\n>\n> Here are some more ideas about the issue for reference.\n>\n> 1) Extending the controlfile.\n>\n> We can dd a new field (e.g. non_upgrade_checkPoint) to record the last check point\n> ptr happened in non-upgrade mode. The new field won't be updated due to\n> \"pg_upgrade --check\", so pg_upgrade can use this LSN to compare with the slot's\n> confirmed_flush_lsn.\n>\n> Pros: User can smoothly upgrade the cluster even if they run \"pg_upgrade\n> --check\" in advance.\n>\n> Cons: Not sure if this is a enough reason to introduce new field in\n> controlfile.\n\nYeah, this could be an option but I am not sure either that adding a\nnew option for this purpose is the best way.\n\n> -----------\n>\n> 2) Advance the slot's confirmed_flush_lsn in pg_upgrade if the check passes.\n>\n> Introducing an upgrade support SQL function\n> (binary_upgrade_advance_logical_slot_lsn()) to set a\n> flag(catch_confirmed_lsn_up) on server side. On server side, when trying to\n> flush the slot in shutdown checkpoint(CheckPointReplicationSlots), we update\n> the slot's confirmed_flush_lsn to the lsn of the current checkpoint if\n> catch_confirmed_lsn_up is set.\n>\n> Pros: User can smoothly upgrade the cluster even if they run \"pg_upgrade\n> --check\" in advance.\n>\n> Cons: Although we have some examples for using functions\n> (binary_upgrade_set_next_pg_enum_oid ...) to set some variables during upgrade\n> , but not sure if it's a standard behavior to change the slot's lsn during\n> upgrade.\n\nI feel this seems like a good option.\n\n> -----------\n>\n> 3) Introduce a new pg_upgrade option(e.g. skip_slot_check), and suggest if user\n> already did the upgrade check for stopped server, they can use this option\n> when trying to upgrade later.\n>\n> Pros: Can save some efforts for user to advance each slot's lsn.\n>\n> Cons: I didn't see similar options in pg_upgrade, might need some agreement.\n\nYeah right, in fact during the --check command we can give that\nsuggestion as well.\n\nI feel option 2 looks best to me unless there is some design issue to\nthat, as of now I do not see any issue with that though. Let's see\nwhat others think.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Sep 2023 09:21:05 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 9:21 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Sep 14, 2023 at 8:40 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n>\n> >\n> > Here are some more ideas about the issue for reference.\n> >\n> > 1) Extending the controlfile.\n> >\n> > We can dd a new field (e.g. non_upgrade_checkPoint) to record the last check point\n> > ptr happened in non-upgrade mode. The new field won't be updated due to\n> > \"pg_upgrade --check\", so pg_upgrade can use this LSN to compare with the slot's\n> > confirmed_flush_lsn.\n> >\n> > Pros: User can smoothly upgrade the cluster even if they run \"pg_upgrade\n> > --check\" in advance.\n> >\n> > Cons: Not sure if this is a enough reason to introduce new field in\n> > controlfile.\n>\n> Yeah, this could be an option but I am not sure either that adding a\n> new option for this purpose is the best way.\n>\n\nI also think so. It seems this could work but adding upgrade-specific\ninformation to other data structures doesn't sound like a clean\nsolution.\n\n> > -----------\n> >\n> > 2) Advance the slot's confirmed_flush_lsn in pg_upgrade if the check passes.\n> >\n> > Introducing an upgrade support SQL function\n> > (binary_upgrade_advance_logical_slot_lsn()) to set a\n> > flag(catch_confirmed_lsn_up) on server side. On server side, when trying to\n> > flush the slot in shutdown checkpoint(CheckPointReplicationSlots), we update\n> > the slot's confirmed_flush_lsn to the lsn of the current checkpoint if\n> > catch_confirmed_lsn_up is set.\n> >\n> > Pros: User can smoothly upgrade the cluster even if they run \"pg_upgrade\n> > --check\" in advance.\n> >\n> > Cons: Although we have some examples for using functions\n> > (binary_upgrade_set_next_pg_enum_oid ...) to set some variables during upgrade\n> > , but not sure if it's a standard behavior to change the slot's lsn during\n> > upgrade.\n>\n> I feel this seems like a good option.\n>\n\nIn this idea, if the user decides not to proceed after the upgrade\n--check, then we would have incremented the confirmed_flush location\nof all slots without the subscriber's acknowledgment. It may not be\nthe usual scenario but in theory, it may violate our basic principle\nof incrementing confirmed_flush location. Another thing to consider is\nwe have to do this for all logical slots under the assumption that all\nare already caught up as pg_upgrade would have ensured that. So,\nideally, the server should have some knowledge that the slots are\nalready caught up to the latest location which again doesn't seem like\na clean idea.\n\n> > -----------\n> >\n> > 3) Introduce a new pg_upgrade option(e.g. skip_slot_check), and suggest if user\n> > already did the upgrade check for stopped server, they can use this option\n> > when trying to upgrade later.\n> >\n> > Pros: Can save some efforts for user to advance each slot's lsn.\n> >\n> > Cons: I didn't see similar options in pg_upgrade, might need some agreement.\n>\n> Yeah right, in fact during the --check command we can give that\n> suggestion as well.\n>\n\nHmm, we can't mandate users to skip checking slots because that is the\nwhole point of --check slots.\n\n> I feel option 2 looks best to me unless there is some design issue to\n> that, as of now I do not see any issue with that though. Let's see\n> what others think.\n>\n\nBy the way, did you consider the previous approach this patch was\nusing? 
Basically, instead of getting the last checkpoint location from\nthe control file, we will read the WAL file starting from the\nconfirmed_flush location of a slot and if we find any WAL other than\nexpected WALs like shutdown checkpoint, running_xacts, etc. then we\nwill error out.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Sep 2023 10:00:00 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 10:00 AM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Sep 14, 2023 at 9:21 AM Dilip Kumar <[email protected]> wrote:\n\n> > > Cons: Although we have some examples for using functions\n> > > (binary_upgrade_set_next_pg_enum_oid ...) to set some variables during upgrade\n> > > , but not sure if it's a standard behavior to change the slot's lsn during\n> > > upgrade.\n> >\n> > I feel this seems like a good option.\n> >\n>\n> In this idea, if the user decides not to proceed after the upgrade\n> --check, then we would have incremented the confirmed_flush location\n> of all slots without the subscriber's acknowledgment.\n\nYeah, thats a problem.\n\n\n> > > -----------\n> > >\n> > > 3) Introduce a new pg_upgrade option(e.g. skip_slot_check), and suggest if user\n> > > already did the upgrade check for stopped server, they can use this option\n> > > when trying to upgrade later.\n> > >\n> > > Pros: Can save some efforts for user to advance each slot's lsn.\n> > >\n> > > Cons: I didn't see similar options in pg_upgrade, might need some agreement.\n> >\n> > Yeah right, in fact during the --check command we can give that\n> > suggestion as well.\n> >\n>\n> Hmm, we can't mandate users to skip checking slots because that is the\n> whole point of --check slots.\n\nI mean not to mandate skipping in the --check command. But once the\ncheck command has already checked the slot then we can issue a\nsuggestion to the user that the slots are already checked so that\nduring the actual upgrade we can --skip checking the slots. So for\nuser who has already run the check command and is now following with\nan upgrade can skip slot checking if we can provide such an option.\n\n> > I feel option 2 looks best to me unless there is some design issue to\n> > that, as of now I do not see any issue with that though. Let's see\n> > what others think.\n> >\n>\n> By the way, did you consider the previous approach this patch was\n> using? Basically, instead of getting the last checkpoint location from\n> the control file, we will read the WAL file starting from the\n> confirmed_flush location of a slot and if we find any WAL other than\n> expected WALs like shutdown checkpoint, running_xacts, etc. then we\n> will error out.\n\nSo basically, while scanning from confirmed_flush we must ensure that\nwe find a first record as SHUTDOWN CHECKPOINT record at the same LSN,\nand after that, we should not get any other WAL other than like you\nsaid shutdown checkpoint, running_xacts. That way we will ensure both\naspect that the confirmed flush LSN is at the shutdown checkpoint and\nafter that there is no real activity in the system. I think to me,\nthis seems like the best available option so far.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Sep 2023 10:37:20 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
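A minimal sketch of the scan being described here, assuming the reader has already been positioned at the slot's confirmed_flush LSN; the helper names ReadNextXLogRecord() and CHECK_WAL_RECORD() are borrowed from the patch quoted later in this thread, everything else is an assumption:

```c
/*
 * Sketch: verify that the record at confirmed_flush is the shutdown
 * checkpoint and that everything after it is a record type that can be
 * safely ignored.  Not the actual patch code.
 */
static bool
wal_is_logically_ended(XLogReaderState *xlogreader)
{
	bool		initial_record = true;

	while (ReadNextXLogRecord(xlogreader))
	{
		RmgrId		rmid = XLogRecGetRmid(xlogreader);
		uint8		info = XLogRecGetInfo(xlogreader) & ~XLR_INFO_MASK;

		if (initial_record)
		{
			/* First record must be the shutdown checkpoint itself */
			if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID,
								  XLOG_CHECKPOINT_SHUTDOWN))
				return false;
			initial_record = false;
			continue;
		}

		/* Anything other than "harmless" records means pending changes */
		if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_SHUTDOWN) &&
			!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_ONLINE) &&
			!CHECK_WAL_RECORD(rmid, info, RM_STANDBY_ID, XLOG_RUNNING_XACTS))
			return false;
	}

	/* End of WAL reached without meeting any real change */
	return true;
}
```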
{
"msg_contents": "On Thu, Sep 14, 2023 at 10:37 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Sep 14, 2023 at 10:00 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Sep 14, 2023 at 9:21 AM Dilip Kumar <[email protected]> wrote:\n>\n> > > > -----------\n> > > >\n> > > > 3) Introduce a new pg_upgrade option(e.g. skip_slot_check), and suggest if user\n> > > > already did the upgrade check for stopped server, they can use this option\n> > > > when trying to upgrade later.\n> > > >\n> > > > Pros: Can save some efforts for user to advance each slot's lsn.\n> > > >\n> > > > Cons: I didn't see similar options in pg_upgrade, might need some agreement.\n> > >\n> > > Yeah right, in fact during the --check command we can give that\n> > > suggestion as well.\n> > >\n> >\n> > Hmm, we can't mandate users to skip checking slots because that is the\n> > whole point of --check slots.\n>\n> I mean not to mandate skipping in the --check command. But once the\n> check command has already checked the slot then we can issue a\n> suggestion to the user that the slots are already checked so that\n> during the actual upgrade we can --skip checking the slots. So for\n> user who has already run the check command and is now following with\n> an upgrade can skip slot checking if we can provide such an option.\n>\n\noh, okay, we can document and request the user to follow as you\nsuggest but I guess it will be more work for the user and also is less\nintuitive.\n\n> > > I feel option 2 looks best to me unless there is some design issue to\n> > > that, as of now I do not see any issue with that though. Let's see\n> > > what others think.\n> > >\n> >\n> > By the way, did you consider the previous approach this patch was\n> > using? Basically, instead of getting the last checkpoint location from\n> > the control file, we will read the WAL file starting from the\n> > confirmed_flush location of a slot and if we find any WAL other than\n> > expected WALs like shutdown checkpoint, running_xacts, etc. then we\n> > will error out.\n>\n> So basically, while scanning from confirmed_flush we must ensure that\n> we find a first record as SHUTDOWN CHECKPOINT record at the same LSN,\n> and after that, we should not get any other WAL other than like you\n> said shutdown checkpoint, running_xacts. That way we will ensure both\n> aspect that the confirmed flush LSN is at the shutdown checkpoint and\n> after that there is no real activity in the system.\n>\n\nRight.\n\n> I think to me,\n> this seems like the best available option so far.\n>\n\nYeah, let's see if someone else has a different opinion or has a better idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Sep 2023 11:17:50 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> > So basically, while scanning from confirmed_flush we must ensure that\r\n> > we find a first record as SHUTDOWN CHECKPOINT record at the same LSN,\r\n> > and after that, we should not get any other WAL other than like you\r\n> > said shutdown checkpoint, running_xacts. That way we will ensure both\r\n> > aspect that the confirmed flush LSN is at the shutdown checkpoint and\r\n> > after that there is no real activity in the system.\r\n> >\r\n> \r\n> Right.\r\n> \r\n> > I think to me,\r\n> > this seems like the best available option so far.\r\n> >\r\n> \r\n> Yeah, let's see if someone else has a different opinion or has a better idea.\r\n\r\nBased on the recent discussion, I made a prototype which reads all WAL records\r\nand verifies their type. A new upgrade function binary_upgrade_validate_wal_record_types_after_lsn()\r\ndoes that. This function reads WALs from start_lsn (confirmed_flush), and returns\r\ntrue if they can ignore. The type of ignored records are listed in [1].\r\n\r\nKindly Hou found that XLOG_HEAP2_PRUNE may be generated during the pg_upgrade\r\n--check, so it was added to acceptable type.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58660273EACEFC5BF256B133F50DA@TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 15 Sep 2023 03:13:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nAgain, thank you for reviewing! New patch is available in [1].\r\n\r\n> 2.\r\n> + /*\r\n> + * Store the names of output plugins as well. There is a possibility\r\n> + * that duplicated plugins are set, but the consumer function\r\n> + * check_loadable_libraries() will avoid checking the same library, so\r\n> + * we do not have to consider their uniqueness here.\r\n> + */\r\n> + for (slotno = 0; slotno < slot_arr->nslots; slotno++)\r\n> + {\r\n> + os_info.libraries[totaltups].name = pg_strdup(slot_arr->slots[slotno].plugin);\r\n> \r\n> Here, we should ignore invalid slots.\r\n\r\n\"continue\" was added.\r\n\r\n> 3.\r\n> + if (!live_check && !slot->caught_up)\r\n> + {\r\n> + if (script == NULL &&\r\n> + (script = fopen_priv(output_path, \"w\")) == NULL)\r\n> + pg_fatal(\"could not open file \\\"%s\\\": %s\",\r\n> + output_path, strerror(errno));\r\n> +\r\n> + fprintf(script,\r\n> + \"The slot \\\"%s\\\" has not consumed the WAL yet\\n\",\r\n> + slot->slotname);\r\n> \r\n> Is it possible to print the LSN locations of slot and last checkpoint?\r\n> I think that will aid in debugging the problems if any and could be\r\n> helpful to users as well.\r\n\r\nBased on recent discussion, I'm not sure we should output the actual LSN here.\r\n(We do not check latect checkpoint anymore)\r\nIf you still think it should be, please tell me again.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866D63A6460059DC661BF62F5F6A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 15 Sep 2023 03:14:50 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! New patch is available in [1].\r\n\r\n> 1.\r\n> Configure the servers for log shipping. (You do not need to run\r\n> <function>pg_backup_start()</function> and\r\n> <function>pg_backup_stop()</function>\r\n> or take a file system backup as the standbys are still synchronized\r\n> - with the primary.) Replication slots are not copied and must\r\n> - be recreated.\r\n> + with the primary.) Only logical slots on the primary are copied to the\r\n> + new standby, and other other slots on the old standby must be recreated\r\n> + as they are not copied.\r\n> </para>\r\n> \r\n> IMO this text still needs some minor changes like shown below, Anyway,\r\n> there is a typo: /other other/\r\n> \r\n> SUGGESTION\r\n> Only logical slots on the primary are copied to the new standby, but\r\n> other slots on the old standby are not copied so must be recreated\r\n> manually.\r\n>\r\n\r\nFixed.\r\n\r\n> ======\r\n> src/bin/pg_upgrade/server.c\r\n> \r\n> 2.\r\n> + *\r\n> + * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\r\n> + * checkpointer process. If WALs required by logical replication slots are\r\n> + * removed, the slots are unusable. The setting ensures that such WAL\r\n> + * records have remained so that invalidation of slots would be avoided\r\n> + * during the upgrade.\r\n> \r\n> The comment already explained the reason for the setting is to prevent\r\n> removing the needed WAL records, so I felt there is no need for the\r\n> last sentence to repeat the same information.\r\n> \r\n> BEFORE\r\n> The setting ensures that such WAL records have remained so that\r\n> invalidation of slots would be avoided during the upgrade.\r\n> \r\n> SUGGESTION\r\n> This setting prevents the invalidation of slots during the upgrade.\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866D63A6460059DC661BF62F5F6A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 15 Sep 2023 03:15:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 8:43 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n\nFew comments:\n1. Why is the FPI record (XLOG_FPI_FOR_HINT) not considered a record\nto be ignored? This can be generated during reading system tables.\n\n2.\n+binary_upgrade_validate_wal_record_types_after_lsn(PG_FUNCTION_ARGS)\n{\n...\n+ if (initial_record)\n+ {\n+ /* Initial record must be XLOG_CHECKPOINT_SHUTDOWN */\n+ if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID,\n+ XLOG_CHECKPOINT_SHUTDOWN))\n+ result = false;\n...\n+ if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_SHUTDOWN) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_ONLINE) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_STANDBY_ID, XLOG_RUNNING_XACTS) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_HEAP2_ID, XLOG_HEAP2_PRUNE))\n+ result = false;\n...\n}\n\nIsn't it better to immediately return false if any unexpected WAL is\nfound? This will avoid reading unnecessary WAL\n\n3.\n+Datum\n+binary_upgrade_validate_wal_record_types_after_lsn(PG_FUNCTION_ARGS)\n+{\n...\n+\n+ CHECK_IS_BINARY_UPGRADE;\n+\n+ /* Quick exit if the given lsn is larger than current one */\n+ if (start_lsn >= curr_lsn)\n+ PG_RETURN_BOOL(true);\n\nWhy do you return true here? My understanding was if the first record\nis not a shutdown checkpoint record then it should fail, if that is\nnot true then I think we need to explain the same in comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 15 Sep 2023 11:41:29 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version patch set.\r\n\r\n> Few comments:\r\n> 1. Why is the FPI record (XLOG_FPI_FOR_HINT) not considered a record\r\n> to be ignored? This can be generated during reading system tables.\r\n\r\nOh, I just missed. Written in comments atop the function, but not added here.\r\nAdded to white-list.\r\n\r\n> 2.\r\n> +binary_upgrade_validate_wal_record_types_after_lsn(PG_FUNCTION_ARGS)\r\n> {\r\n> ...\r\n> + if (initial_record)\r\n> + {\r\n> + /* Initial record must be XLOG_CHECKPOINT_SHUTDOWN */\r\n> + if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID,\r\n> + XLOG_CHECKPOINT_SHUTDOWN))\r\n> + result = false;\r\n> ...\r\n> + if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID,\r\n> XLOG_CHECKPOINT_SHUTDOWN) &&\r\n> + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID,\r\n> XLOG_CHECKPOINT_ONLINE) &&\r\n> + !CHECK_WAL_RECORD(rmid, info, RM_STANDBY_ID,\r\n> XLOG_RUNNING_XACTS) &&\r\n> + !CHECK_WAL_RECORD(rmid, info, RM_HEAP2_ID, XLOG_HEAP2_PRUNE))\r\n> + result = false;\r\n> ...\r\n> }\r\n> \r\n> Isn't it better to immediately return false if any unexpected WAL is\r\n> found? This will avoid reading unnecessary WAL\r\n\r\nIIUC we can exit the loop of the result == false, so we do not have to read\r\nunnecessary WALs. See the condition below. I used the approach because\r\nprivate_data and xlogreader should be pfree()'d as cleanup.\r\n\r\n```\r\n\t/* Loop until all WALs are read, or unexpected record is found */\r\n\twhile (result && ReadNextXLogRecord(xlogreader))\r\n\t{\r\n```\r\n\r\n> 3.\r\n> +Datum\r\n> +binary_upgrade_validate_wal_record_types_after_lsn(PG_FUNCTION_ARGS)\r\n> +{\r\n> ...\r\n> +\r\n> + CHECK_IS_BINARY_UPGRADE;\r\n> +\r\n> + /* Quick exit if the given lsn is larger than current one */\r\n> + if (start_lsn >= curr_lsn)\r\n> + PG_RETURN_BOOL(true);\r\n> \r\n> Why do you return true here? My understanding was if the first record\r\n> is not a shutdown checkpoint record then it should fail, if that is\r\n> not true then I think we need to explain the same in comments.\r\n\r\nI wondered what should be because it is unexpected input for us (note that this \r\nunction could be used only for upgrade purpose). But yes, initially read WAL must\r\nbe XLOG_SHUTDOWN_CHECKPOINT, so changed as you said.\r\n\r\nAlso, I did a self-reviewing again and reworded comments.\r\n\r\nBTW, the 0002 ports some functions from pg_walinspect, it may be not elegant.\r\nCoupling degree between core/extensions should be also lower. So I made another\r\npatch which does not port anything and implements similar functionalities instead.\r\nI called the patch 0003, but can be applied atop 0001 (not 0002). To make cfbot\r\nhappy, attached as txt file.\r\nCould you please tell me which do you like 0002 or 0003?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 15 Sep 2023 12:32:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "> Thank you for reviewing! PSA new version patch set.\r\n\r\nSorry, wrong patch attached. PSA the correct ones.\r\nThere is a possibility that XLOG_PARAMETER_CHANGE may be generated, when GUC\r\nparameters are changed just before doing the upgrade. Added to list.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 15 Sep 2023 13:02:02 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Friday, September 15, 2023 8:33 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> \r\n> Also, I did a self-reviewing again and reworded comments.\r\n> \r\n> BTW, the 0002 ports some functions from pg_walinspect, it may be not\r\n> elegant.\r\n> Coupling degree between core/extensions should be also lower. So I made\r\n> another patch which does not port anything and implements similar\r\n> functionalities instead.\r\n> I called the patch 0003, but can be applied atop 0001 (not 0002). To make cfbot\r\n> happy, attached as txt file.\r\n> Could you please tell me which do you like 0002 or 0003?\r\n\r\nI think basically it's OK that we follow the same method as pg_walinspect to\r\nread the WAL. The reasons are as follows:\r\n\r\nThere are currently two set of APIs that are used to read WALs.\r\na) XLogReaderAllocate()/XLogReadRecord() -- pg_walinspect and current patch uses\r\nb) XLogReaderAllocate()/WALRead()\r\n\r\nThe first setup APIs is easier to use and are used in most of WAL reading\r\ncodes, while the second set of APIs is used more in low level places and is not\r\nvery easy to use. So I think it's better to use the first set of APIs.\r\n\r\nBesides, our function needs to distinguish the failure and end-of-wal cases\r\nwhen XLogReadRecord() returns NULL and to read the wal without waiting. So, the\r\nWAL reader callbacks in pg_walinspect also meets this requirement which is reason that\r\nI think we can follow the same. I also checked other public wal reader callbacks but\r\nthey either report ERRORs if XLogReadRecord() returns NULL or will wait while\r\nreading wals.\r\n\r\nIf we agree to follow the same method of pg_walinspect, I think the left\r\nthing is whether to port some functions like what 0002. I personally\r\nthink it's fine to make common functions to save codes.\r\n\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n\r\n",
"msg_date": "Mon, 18 Sep 2023 07:03:59 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
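For reference, a rough sketch of the first style of usage (modeled on how pg_walinspect sets up its reader with the no-wait page-read callback; error handling and the surrounding variable declarations are simplified and should be treated as assumptions):

```c
/* Sketch of the XLogReaderAllocate()/XLogReadRecord() style of WAL reading. */
XLogReaderState *xlogreader;
XLogRecord *record;
char	   *errormsg;

xlogreader = XLogReaderAllocate(wal_segment_size, NULL,
								XL_ROUTINE(.page_read = &read_local_xlog_page_no_wait,
										   .segment_open = &wal_segment_open,
										   .segment_close = &wal_segment_close),
								NULL);
if (xlogreader == NULL)
	ereport(ERROR,
			(errcode(ERRCODE_OUT_OF_MEMORY),
			 errmsg("out of memory")));

/* Position the reader at the slot's confirmed_flush LSN */
XLogBeginRead(xlogreader, start_lsn);

for (;;)
{
	record = XLogReadRecord(xlogreader, &errormsg);

	if (record == NULL)
	{
		/*
		 * With the no-wait callback, NULL means either end-of-WAL or a read
		 * failure; errormsg distinguishes the two cases.
		 */
		if (errormsg)
			ereport(ERROR,
					(errmsg("could not read WAL at %X/%X: %s",
							LSN_FORMAT_ARGS(xlogreader->EndRecPtr), errormsg)));
		break;
	}

	/* ... inspect the record's rmgr/info here ... */
}

XLogReaderFree(xlogreader);
```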
{
"msg_contents": "On Friday, September 15, 2023 9:02 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Sorry, wrong patch attached. PSA the correct ones.\r\n> There is a possibility that XLOG_PARAMETER_CHANGE may be generated,\r\n> when GUC parameters are changed just before doing the upgrade. Added to\r\n> list.\r\n\r\nI did some simple performance tests for the patch just to make sure it doesn't\r\nintroduce obvious overhead, the result looks good to me. I tested two cases:\r\n\r\n1) The time for upgrade when the old db has 0, 10,50, 100 slots\r\n0 slots(HEAD) : 0m5.585s\r\n0 slots : 0m5.591s\r\n10 slots : 0m5.602s\r\n50 slots : 0m5.636s\r\n100 slots : 0m5.778s\r\n\r\n2) The time for upgrade after doing \"upgrade --check\" in advance, when\r\nthe old db has 0, 10,50, 100 slots.\r\n\r\n0 slots(HEAD) : 0m5.588s\r\n0 slots : 0m5.596s\r\n10 slots : 0m5.605s\r\n50 slots : 0m5.737s\r\n100 slots : 0m5.783s\r\n\r\nThe data of the local machine I used is:\r\nCPU(s):\t40\r\nModel name:\tIntel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz\r\nCore(s) per socket:\t10\r\nSocket(s):\t2\r\nmemory:\t125GB\r\ndisk:\t6T HDD\r\n\r\nThe old database is empty except for the slots in both tests.\r\n\r\nThe test script is also attached for reference(run perf.sh after\r\nadjusting other settings.)\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Mon, 18 Sep 2023 11:16:41 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 15, 2023 at 6:32 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > Thank you for reviewing! PSA new version patch set.\n>\n> Sorry, wrong patch attached. PSA the correct ones.\n> There is a possibility that XLOG_PARAMETER_CHANGE may be generated, when GUC\n> parameters are changed just before doing the upgrade. Added to list.\n>\n\nYou forgot to update 0002 patch for XLOG_PARAMETER_CHANGE. I think it\nis okay to move walinspect's functionality into common place so that\nit can be used by this patch as suggested by Hou-San. The only reason\nit is okay to keep it specific to walinspect is if we want to enhance\nthat functions for walinspect but I think if that happens then we can\nevaluate whether to enhance it by having additional parameters or\ncreating something specific for walinspect.\n\n* +Datum\n+binary_upgrade_validate_wal_record_types_after_lsn(PG_FUNCTION_ARGS)\n\nHow about naming it as binary_upgrade_validate_wal_records()? I don't\nsee it is helpful to make it too long.\n\nApart from this, I have made minor cosmetic changes in the attached.\nIf these looks okay to you then you can include them in next version.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 18 Sep 2023 17:19:05 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version!\r\n\r\n> > Sorry, wrong patch attached. PSA the correct ones.\r\n> > There is a possibility that XLOG_PARAMETER_CHANGE may be generated,\r\n> when GUC\r\n> > parameters are changed just before doing the upgrade. Added to list.\r\n> >\r\n> \r\n> You forgot to update 0002 patch for XLOG_PARAMETER_CHANGE.\r\n\r\nOh, I did wrong git operations locally. Sorry for inconvenience.\r\n\r\n> I think it\r\n> is okay to move walinspect's functionality into common place so that\r\n> it can be used by this patch as suggested by Hou-San. The only reason\r\n> it is okay to keep it specific to walinspect is if we want to enhance\r\n> that functions for walinspect but I think if that happens then we can\r\n> evaluate whether to enhance it by having additional parameters or\r\n> creating something specific for walinspect.\r\n\r\nOK, merged 0001 + 0002 into one.\r\n\r\n> * +Datum\r\n> +binary_upgrade_validate_wal_record_types_after_lsn(PG_FUNCTION_ARGS)\r\n> \r\n> How about naming it as binary_upgrade_validate_wal_records()? I don't\r\n> see it is helpful to make it too long.\r\n\r\nAgreed, fixed.\r\n\r\n> Apart from this, I have made minor cosmetic changes in the attached.\r\n> If these looks okay to you then you can include them in next version.\r\n\r\nSeems better, included.\r\n\r\nApart from above, I fixed not to call binary_upgrade_validate_wal_records() during\r\nthe live check, because it raises ERROR if the server is not in the upgrade. The\r\nresult would be used only when not in the live check mode, so it's OK to skip.\r\nAlso, some comments were slightly reworded.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 19 Sep 2023 06:17:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 11:47 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit,\n>\n> Thank you for reviewing! PSA new version!\n>\n\n*\n+#include \"access/xlogdefs.h\"\n #include \"common/relpath.h\"\n #include \"libpq-fe.h\"\n\nThe above include is not required. I have removed that and made a few\ncosmetic changes in the attached.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 19 Sep 2023 17:57:31 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version. In this version I ran pgindent again.\r\n\r\n> +#include \"access/xlogdefs.h\"\r\n> #include \"common/relpath.h\"\r\n> #include \"libpq-fe.h\"\r\n> \r\n> The above include is not required. I have removed that and made a few\r\n> cosmetic changes in the attached.\r\n\r\nYes, it is not needed anymore. Firstly it was introduced to use the datatype\r\nXLogRecPtr, but removed in recent version.\r\n\r\nMoreover, I my colleague Hou found several problems for v40. Here is a fixed\r\nversion. Below bullets are the found issues.\r\n\r\n* Fixed to allow XLOG_SWICH when reading the record, including the initial one.\r\n The XLOG_SWICH may inserted after walsender exits. This is occurred when\r\n archive_mode is set to on (or always). \r\n* Fixed to set max_slot_wal_keep_size -1 only when the cluster is PG17+.\r\n max_slot_wal_keep_size was introduced in PG13, so previous patch could not\r\n upgrade from PG12 and prior.\r\n The setting is only needed to upgrade logical slots, so it should be set only\r\n when in PG17 and later.\r\n* Avoid to call binary_upgrade_validate_wal_records() when the slot is invalidated.\r\n The function raises an ERROR if the record corresponds to the given LSN.\r\n The output is like:\r\n\r\n```\r\nERROR: requested WAL segment pg_wal/000000010000000000000001 has already been removed\r\n```\r\n\r\n It is usual behavior but we do not want to error out here, so it was avoided.\r\n The upgrading would fail correctly if there are invalid slots.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 20 Sep 2023 05:30:11 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 11:00 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit,\n>\n> Thank you for reviewing! PSA new version. In this version I ran pgindent again.\n>\n\n+ /*\n+ * There is a possibility that following records may be generated\n+ * during the upgrade.\n+ */\n+ if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_SHUTDOWN) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_ONLINE) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_SWITCH) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_FPI_FOR_HINT) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_PARAMETER_CHANGE) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_STANDBY_ID, XLOG_RUNNING_XACTS) &&\n+ !CHECK_WAL_RECORD(rmid, info, RM_HEAP2_ID, XLOG_HEAP2_PRUNE))\n+ is_valid = false;\n+\n+ CHECK_FOR_INTERRUPTS();\n\nJust wondering why XLOG_HEAP2_VACUUM or other vacuum-related commands\ncan not occur during the upgrade?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Sep 2023 11:51:35 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 11:51 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Sep 20, 2023 at 11:00 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Amit,\n> >\n> > Thank you for reviewing! PSA new version. In this version I ran pgindent again.\n> >\n>\n> + /*\n> + * There is a possibility that following records may be generated\n> + * during the upgrade.\n> + */\n> + if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_SHUTDOWN) &&\n> + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_ONLINE) &&\n> + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_SWITCH) &&\n> + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_FPI_FOR_HINT) &&\n> + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_PARAMETER_CHANGE) &&\n> + !CHECK_WAL_RECORD(rmid, info, RM_STANDBY_ID, XLOG_RUNNING_XACTS) &&\n> + !CHECK_WAL_RECORD(rmid, info, RM_HEAP2_ID, XLOG_HEAP2_PRUNE))\n> + is_valid = false;\n> +\n> + CHECK_FOR_INTERRUPTS();\n>\n> Just wondering why XLOG_HEAP2_VACUUM or other vacuum-related commands\n> can not occur during the upgrade?\n>\n\nBecause autovacuum is disabled during upgrade. See comment: \"Use -b to\ndisable autovacuum\" in start_postmaster().\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Sep 2023 12:12:26 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
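For context, the relevant part of start_postmaster() in pg_upgrade looks roughly like the following; this is paraphrased from memory rather than quoted, and the variable names are assumptions:

```c
/*
 * Paraphrased sketch of pg_upgrade's start_postmaster(): the old and new
 * servers are started with -b (binary-upgrade mode), which keeps the
 * autovacuum launcher from starting, so no autovacuum WAL is generated
 * while pg_upgrade is running.
 */
snprintf(cmd, sizeof(cmd),
		 "\"%s/pg_ctl\" -w -l \"%s\" -D \"%s\" "
		 "-o \"-p %d -b %s\" start",
		 cluster->bindir, log_file, cluster->pgconfig,
		 cluster->port, extra_options);
```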
{
"msg_contents": "On Wed, Sep 20, 2023 at 11:00 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit,\n\n+int\n+count_old_cluster_logical_slots(void)\n+{\n+ int dbnum;\n+ int slot_count = 0;\n+\n+ for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n+ slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\n+\n+ return slot_count;\n+}\n\nIn this code, aren't we assuming that 'slot_arr.nslots' will be zero\nfor versions <=PG16? On my Windows machine, this value is not zero but\nrather some uninitialized negative value which makes its caller try to\nallocate some undefined memory and fail. I think you need to\ninitialize this in get_old_cluster_logical_slot_infos() for lower\nversions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Sep 2023 12:16:53 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
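A minimal sketch of the initialization being suggested here; the function and field names follow the patch fragments quoted in this thread, and the version check is an assumption:

```c
/*
 * Sketch for get_old_cluster_logical_slot_infos(): make slot_arr well-defined
 * even when the old cluster cannot have migratable logical slots, so that
 * count_old_cluster_logical_slots() sums zeros instead of uninitialized
 * values.
 */
static void
get_old_cluster_logical_slot_infos(DbInfo *dbinfo)
{
	dbinfo->slot_arr.slots = NULL;
	dbinfo->slot_arr.nslots = 0;

	/* Logical slots can only be migrated from PG17 or later */
	if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)
		return;

	/* ... otherwise query pg_replication_slots and fill slot_arr ... */
}
```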
{
"msg_contents": "On Wed, Sep 20, 2023 at 12:12 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Sep 20, 2023 at 11:51 AM Dilip Kumar <[email protected]> wrote:\n> >\n> > On Wed, Sep 20, 2023 at 11:00 AM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > Dear Amit,\n> > >\n> > > Thank you for reviewing! PSA new version. In this version I ran pgindent again.\n> > >\n> >\n> > + /*\n> > + * There is a possibility that following records may be generated\n> > + * during the upgrade.\n> > + */\n> > + if (!CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_SHUTDOWN) &&\n> > + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_CHECKPOINT_ONLINE) &&\n> > + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_SWITCH) &&\n> > + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_FPI_FOR_HINT) &&\n> > + !CHECK_WAL_RECORD(rmid, info, RM_XLOG_ID, XLOG_PARAMETER_CHANGE) &&\n> > + !CHECK_WAL_RECORD(rmid, info, RM_STANDBY_ID, XLOG_RUNNING_XACTS) &&\n> > + !CHECK_WAL_RECORD(rmid, info, RM_HEAP2_ID, XLOG_HEAP2_PRUNE))\n> > + is_valid = false;\n> > +\n> > + CHECK_FOR_INTERRUPTS();\n> >\n> > Just wondering why XLOG_HEAP2_VACUUM or other vacuum-related commands\n> > can not occur during the upgrade?\n> >\n>\n> Because autovacuum is disabled during upgrade. See comment: \"Use -b to\n> disable autovacuum\" in start_postmaster().\n\nOkay got it, thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Sep 2023 14:05:18 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 12:16 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Sep 20, 2023 at 11:00 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Amit,\n>\n> +int\n> +count_old_cluster_logical_slots(void)\n> +{\n> + int dbnum;\n> + int slot_count = 0;\n> +\n> + for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n> + slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\n> +\n> + return slot_count;\n> +}\n>\n> In this code, aren't we assuming that 'slot_arr.nslots' will be zero\n> for versions <=PG16? On my Windows machine, this value is not zero but\n> rather some uninitialized negative value which makes its caller try to\n> allocate some undefined memory and fail. I think you need to\n> initialize this in get_old_cluster_logical_slot_infos() for lower\n> versions.\n>\n\n+{ oid => '8046', descr => 'for use by pg_upgrade',\n+ proname => 'binary_upgrade_validate_wal_records',\n+ prorows => '10', proretset => 't', provolatile => 's', prorettype => 'bool',\n+ proargtypes => 'pg_lsn', proallargtypes => '{pg_lsn,bool}',\n+ proargmodes => '{i,o}', proargnames => '{start_lsn,is_ok}',\n+ prosrc => 'binary_upgrade_validate_wal_records' },\n\nIn this many of the fields seem bogus. For example, we don't need\nprorows => '10', proretset => 't' for this function. Similarly\nproargmodes also look incorrect as we don't have any out parameter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Sep 2023 15:41:29 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> +int\r\n> +count_old_cluster_logical_slots(void)\r\n> +{\r\n> + int dbnum;\r\n> + int slot_count = 0;\r\n> +\r\n> + for (dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> + slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\r\n> +\r\n> + return slot_count;\r\n> +}\r\n> \r\n> In this code, aren't we assuming that 'slot_arr.nslots' will be zero\r\n> for versions <=PG16? On my Windows machine, this value is not zero but\r\n> rather some uninitialized negative value which makes its caller try to\r\n> allocate some undefined memory and fail. I think you need to\r\n> initialize this in get_old_cluster_logical_slot_infos() for lower\r\n> versions.\r\n\r\nGood catch, I could not notice because it worked well in my RHEL. Here is the\r\nupdated version.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 20 Sep 2023 11:28:33 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! New version can be available in [1].\r\n\r\n> \r\n> +{ oid => '8046', descr => 'for use by pg_upgrade',\r\n> + proname => 'binary_upgrade_validate_wal_records',\r\n> + prorows => '10', proretset => 't', provolatile => 's', prorettype => 'bool',\r\n> + proargtypes => 'pg_lsn', proallargtypes => '{pg_lsn,bool}',\r\n> + proargmodes => '{i,o}', proargnames => '{start_lsn,is_ok}',\r\n> + prosrc => 'binary_upgrade_validate_wal_records' },\r\n> \r\n> In this many of the fields seem bogus. For example, we don't need\r\n> prorows => '10', proretset => 't' for this function. Similarly\r\n> proargmodes also look incorrect as we don't have any out parameter.\r\n>\r\n\r\nThe part was made in old versions and has kept till now. I rechecked them and\r\nchanged like below:\r\n\r\n* This function just returns boolean, proretset was changed to 'f'.\r\n* Based on above, prorows should be zero. Removed.\r\n* Returned value is quite depended on the internal status, provolatile was\r\n changed to 'v'.\r\n* There are no OUT and INOUT arguments, no need to set proallargtypes and proargmodes.\r\n Removed.\r\n* Anonymous arguments are allowed, proargnames was removed NULL.\r\n* This function is not expected to be call in parallel. proparallel was set to 'u'.\r\n* The argument must not be NULL, and we should error out. proisstrict was changed 'f'.\r\n Also, the check was added to the function.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB586615579356A84A8CF29A00F5F9A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 20 Sep 2023 11:30:44 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 11:28:33AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Good catch, I could not notice because it worked well in my RHEL. Here is the\n> updated version.\n\nI am getting slowly up to date with this patch.. But before going in\ndepth with more review, there is something that I got to ask: why is\nthere no option to control if the slots are copied across the upgrade?\nAt least, I would have imagined that an option to disable the copy of\nthe slots would be adapted, say a --no-slot-copy or similar to get\nback to the old behavior if need be.\n\n+ * This is because before that the logical slots are not saved at shutdown, so\n+ * there is no guarantee that the latest confirmed_flush_lsn is saved to disk\n\nIs this comment in get_old_cluster_logical_slot_infos() still true\nafter e0b2eed047d?\n--\nMichael",
"msg_date": "Thu, 21 Sep 2023 16:40:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 1:10 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Sep 20, 2023 at 11:28:33AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > Good catch, I could not notice because it worked well in my RHEL. Here is the\n> > updated version.\n>\n> I am getting slowly up to date with this patch.. But before going in\n> depth with more review, there is something that I got to ask: why is\n> there no option to control if the slots are copied across the upgrade?\n> At least, I would have imagined that an option to disable the copy of\n> the slots would be adapted, say a --no-slot-copy or similar to get\n> back to the old behavior if need be.\n>\n\nWe have discussed this point. Normally, we don't have such options in\nupgrade, so we were hesitent to add a new one for this but there is a\ndiscussion to add an --exclude-logical-slots option. We are planning\nto add that as a separate patch after getting some more consensus on\nit. Right now, the idea is to get the main patch ready.\n\n> + * This is because before that the logical slots are not saved at shutdown, so\n> + * there is no guarantee that the latest confirmed_flush_lsn is saved to disk\n>\n> Is this comment in get_old_cluster_logical_slot_infos() still true\n> after e0b2eed047d?\n>\n\nYes, we didn't backpatched it, so slots from pre-17 won't be flushed\nat shutdown time even if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 13:50:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hackers,\r\n\r\n> Good catch, I could not notice because it worked well in my RHEL. Here is the\r\n> updated version.\r\n\r\nI made some cosmetic changes to the patch; the functionality was not changed.\r\nE.g., a macro function was replaced with an inline function.\r\n\r\nNote that cfbot complained about the old patch, but it seemed to be an\r\ninfrastructure-side error. Let's see again.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 21 Sep 2023 10:44:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 7:20 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Good catch, I could not notice because it worked well in my RHEL. Here is the\n> updated version.\n\nThanks for the patch. I have some comments on v42:\n\n1.\n+{ oid => '8046', descr => 'for use by pg_upgrade',\n+ proname => 'binary_upgrade_validate_wal_records', proisstrict => 'f',\n+ provolatile => 'v', proparallel => 'u', prorettype => 'bool',\n\n+ if (PG_ARGISNULL(0))\n+ elog(ERROR, \"null argument to\nbinary_upgrade_validate_wal_records is not allowed\");\n\nCan proisstrict => 'f' be removed so that there's no need for explicit\nPG_ARGISNULL check? Any specific reason to keep it?\n\nAnd, the before the ISNULL check the arg is read, which isn't good.\n\n2.\n+Datum\n+binary_upgrade_validate_wal_records(PG_FUNCTION_ARGS)\n\nThe function name looks too generic in the sense that it validates WAL\nrecords for correctness/corruption, but it is not. Can it be something\nlike binary_upgrade_{check_for_wal_logical_end,\ncheck_for_logical_end_of_wal} or such?\n\n3.\n+ /* Quick exit if the given lsn is larger than current one */\n+ if (start_lsn >= GetFlushRecPtr(NULL))\n+ PG_RETURN_BOOL(false);\n+\n\nAn LSN that doesn't exists yet is an error IMO, may be an error better here?\n\n4.\n+ * This function is used to verify that there are no WAL records (except some\n+ * types) after confirmed_flush_lsn of logical slots, which means all the\n+ * changes were replicated to the subscriber. There is a possibility that some\n+ * WALs are inserted during upgrade, so such types would be ignored.\n+ *\n\nThis comment before the function better be at the callsite of the\nfunction, because as far as this function is concerned, it checks if\nthere are any WAL records that are not \"certain\" types after the given\nLSN, it doesn't know logical slots or confirmed_flush_lsn or such.\n\n5. Trying to understand the interaction of this feature with custom\nWAL records that a custom WAL resource manager puts in. Is it okay to\nhave custom WAL records after the \"logical WAL end\"?\n+ /*\n+ * There is a possibility that following records may be generated\n+ * during the upgrade.\n+ */\n\n6.\n+ if (PQntuples(res) != 1)\n+ pg_fatal(\"could not count the number of logical replication slots\");\n+\n\nNot existing a single logical replication slot an error? I think it\nmust be if (PQntuples(res) == 0) return;?\n\n7. A nit:\n+ nslots_on_new = atoi(PQgetvalue(res, 0, 0));\n+\n+ if (nslots_on_new)\n\nJust do if(atoi(PQgetvalue(res, 0, 0)) > 0) and get rid of nslots_on_new?\n\n8.\n+ if (nslots_on_new)\n+ pg_fatal(\"expected 0 logical replication slots but found %d\",\n+ nslots_on_new);\n\nHow about \"New cluster database is containing logical replication\nslots\", note that the some of the fatal messages are starting with an\nupper-case letter.\n\n9.\n+ res = executeQueryOrDie(conn, \"SHOW wal_level;\");\n+ res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\n\nInstead of 2 queries to determine required parameters, isn't it better\nwith a single query like the following?\n\nselect setting from pg_settings where name in ('wal_level',\n'max_replication_slots') order by name;\n\n10.\nWhy just wal_level and max_replication_slots, why not\nmax_worker_processes and max_wal_senders too? I'm looking at\nRecoveryRequiresIntParameter and if they are different on the upgraded\ninstance, chances that the logical replication won't work, no?\n\n11.\n+# 2. Generate extra WAL records. 
Because these WAL records do not get consumed\n+# it will cause the upcoming pg_upgrade test to fail.\n+$old_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\"\n+);\n+$old_publisher->stop;\n\nThis might be a recipie for sporadic test failures - how is it\nguaranteed that the newly generated WAL records aren't consumed.\n\nMay be stop subscriber or temporarily disable the subscription and\nthen generate WAL records?\n\n12.\n+extern XLogReaderState *InitXLogReaderState(XLogRecPtr lsn);\n+extern XLogRecord *ReadNextXLogRecord(XLogReaderState *xlogreader);\n+\n\nWhy not these functions be defined in xlogreader.h with elog/ereport\nin #ifndef FRONTEND #endif blocks? IMO, xlogreader.h seems right\nlocation for these functions.\n\n13.\n+LogicalReplicationSlotInfo\n\nWhere is this structure defined?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 21 Sep 2023 16:57:44 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
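Review comment 9 above suggests folding the two SHOW commands into a single query; a minimal sketch of such a query is shown below. The DESC ordering (which a later reply adopts) simply returns wal_level ahead of max_replication_slots; the exact query text used in the patch may differ.

```sql
-- Fetch both settings in one round trip instead of two SHOW commands.
-- ORDER BY name DESC returns wal_level first, then max_replication_slots.
SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'max_replication_slots')
ORDER BY name DESC;
```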
{
"msg_contents": "On Thu, Sep 21, 2023 at 4:57 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Sep 20, 2023 at 7:20 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Good catch, I could not notice because it worked well in my RHEL. Here is the\n> > updated version.\n>\n> Thanks for the patch. I have some comments on v42:\n>\n> 1.\n> +{ oid => '8046', descr => 'for use by pg_upgrade',\n> + proname => 'binary_upgrade_validate_wal_records', proisstrict => 'f',\n> + provolatile => 'v', proparallel => 'u', prorettype => 'bool',\n>\n> + if (PG_ARGISNULL(0))\n> + elog(ERROR, \"null argument to\n> binary_upgrade_validate_wal_records is not allowed\");\n>\n> Can proisstrict => 'f' be removed so that there's no need for explicit\n> PG_ARGISNULL check? Any specific reason to keep it?\n>\n\nProbably trying to keep it similar with\nbinary_upgrade_create_empty_extension(). I think it depends what\nbehaviour we expect for NULL input.\n\n> And, the before the ISNULL check the arg is read, which isn't good.\n>\n\nRight.\n\n> 2.\n> +Datum\n> +binary_upgrade_validate_wal_records(PG_FUNCTION_ARGS)\n>\n> The function name looks too generic in the sense that it validates WAL\n> records for correctness/corruption, but it is not. Can it be something\n> like binary_upgrade_{check_for_wal_logical_end,\n> check_for_logical_end_of_wal} or such?\n>\n\nHow about slightly modified version like\nbinary_upgrade_validate_wal_logical_end?\n\n> 3.\n> + /* Quick exit if the given lsn is larger than current one */\n> + if (start_lsn >= GetFlushRecPtr(NULL))\n> + PG_RETURN_BOOL(false);\n> +\n>\n> An LSN that doesn't exists yet is an error IMO, may be an error better here?\n>\n\nIt will anyway lead to error at later point but we will provide more\ninformation about all the slots that have invalid value of\nconfirmed_flush LSN.\n\n> 4.\n> + * This function is used to verify that there are no WAL records (except some\n> + * types) after confirmed_flush_lsn of logical slots, which means all the\n> + * changes were replicated to the subscriber. There is a possibility that some\n> + * WALs are inserted during upgrade, so such types would be ignored.\n> + *\n>\n> This comment before the function better be at the callsite of the\n> function, because as far as this function is concerned, it checks if\n> there are any WAL records that are not \"certain\" types after the given\n> LSN, it doesn't know logical slots or confirmed_flush_lsn or such.\n>\n\nYeah, we should give information at the callsite but I guess we need\nto give some context atop this function as well so that it is easier\nto explain the functionality.\n\n> 5. Trying to understand the interaction of this feature with custom\n> WAL records that a custom WAL resource manager puts in. Is it okay to\n> have custom WAL records after the \"logical WAL end\"?\n> + /*\n> + * There is a possibility that following records may be generated\n> + * during the upgrade.\n> + */\n>\n\nI don't think so. The only valid records for the checks in this\nfunction are probably the ones that can get generated by the upgrade\nprocess because we ensure that walsender sends all the records before\nit exits at shutdown time.\n\n>\n> 10.\n> Why just wal_level and max_replication_slots, why not\n> max_worker_processes and max_wal_senders too?\n\nIsn't it sufficient to check the parameters that are required to\ncreate a slot aka what we check in the function\nCheckLogicalDecodingRequirements()? 
We are only creating logical slots\nhere so I think that should be sufficient.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Sep 2023 17:45:00 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nThank you for reviewing! Before addressing all of them, I would like to reply to some comments.\r\n\r\n> 6.\r\n> + if (PQntuples(res) != 1)\r\n> + pg_fatal(\"could not count the number of logical replication slots\");\r\n> +\r\n> \r\n> Not existing a single logical replication slot an error? I think it\r\n> must be if (PQntuples(res) == 0) return;?\r\n>\r\n\r\nThe query executes \"SELECT count(*)...\", which IIUC returns exactly 1 row.\r\n\r\n> 7. A nit:\r\n> + nslots_on_new = atoi(PQgetvalue(res, 0, 0));\r\n> +\r\n> + if (nslots_on_new)\r\n> \r\n> Just do if(atoi(PQgetvalue(res, 0, 0)) > 0) and get rid of nslots_on_new?\r\n\r\nNote that the value is also used by the upcoming pg_fatal. I prefer the current style\r\nbecause repeating atoi(PQgetvalue(res, 0, 0)) did not look clean.\r\n\r\n> \r\n> 11.\r\n> +# 2. Generate extra WAL records. Because these WAL records do not get\r\n> consumed\r\n> +# it will cause the upcoming pg_upgrade test to fail.\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\"\r\n> +);\r\n> +$old_publisher->stop;\r\n> \r\n> This might be a recipie for sporadic test failures - how is it\r\n> guaranteed that the newly generated WAL records aren't consumed.\r\n\r\nYou mentioned line 118, but at that point the logical replication setup has not been created yet.\r\nThe subscriber is created at line 163.\r\nTherefore the WAL would not be consumed automatically.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Thu, 21 Sep 2023 13:24:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
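For reference, a minimal sketch of the kind of count query discussed in comments 6 and 7 above: a SELECT count(*) always produces exactly one row, which is why PQntuples(res) != 1 indicates a broken query rather than "no slots". The WHERE clause here is an assumption; the patch's actual query may differ.

```sql
-- Count logical slots on the new cluster; the result always has exactly
-- one row whose single column holds the count read via PQgetvalue().
SELECT count(*)
FROM pg_catalog.pg_replication_slots
WHERE slot_type = 'logical' AND NOT temporary;
```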
{
"msg_contents": "On Thu, Sep 21, 2023 at 01:50:28PM +0530, Amit Kapila wrote:\n> We have discussed this point. Normally, we don't have such options in\n> upgrade, so we were hesitent to add a new one for this but there is a\n> discussion to add an --exclude-logical-slots option. We are planning\n> to add that as a separate patch after getting some more consensus on\n> it. Right now, the idea is to get the main patch ready.\n\nOkay. I am wondering if the subscriber part is OK now without an\noption, but that could also be considered separately, as well. At\nleast I hope so.\n--\nMichael",
"msg_date": "Fri, 22 Sep 2023 08:47:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 5:45 PM Amit Kapila <[email protected]> wrote:\n>\n> > Thanks for the patch. I have some comments on v42:\n>\n> Probably trying to keep it similar with\n> binary_upgrade_create_empty_extension(). I think it depends what\n> behaviour we expect for NULL input.\n\nconfirmed_flush_lsn for a logical slot can be null (for instance,\nbefore confirmed_flush is updated for a newly created logical slot if\nsomeone calls pg_stat_replication -> pg_get_replication_slots) and\nwhen it is so, the binary_upgrade_create_empty_extension errors out.\nIs this behaviour wanted? I think the function returning null on null\ninput is a better behaviour here.\n\n> > 2.\n> > +Datum\n> > +binary_upgrade_validate_wal_records(PG_FUNCTION_ARGS)\n> >\n> > The function name looks too generic in the sense that it validates WAL\n> > records for correctness/corruption, but it is not. Can it be something\n> > like binary_upgrade_{check_for_wal_logical_end,\n> > check_for_logical_end_of_wal} or such?\n> >\n>\n> How about slightly modified version like\n> binary_upgrade_validate_wal_logical_end?\n\nWorks for me.\n\n> > 3.\n> > + /* Quick exit if the given lsn is larger than current one */\n> > + if (start_lsn >= GetFlushRecPtr(NULL))\n> > + PG_RETURN_BOOL(false);\n> > +\n> >\n> > An LSN that doesn't exists yet is an error IMO, may be an error better here?\n> >\n>\n> It will anyway lead to error at later point but we will provide more\n> information about all the slots that have invalid value of\n> confirmed_flush LSN.\n\nI disagree with the function returning false for non-existing LSN.\nIMO, failing fast when an LSN that doesn't exist yet is supplied to\nthe function is the right approach. We never know, the slots on disk\ncontent can get corrupted for some reason and confirmed_flush_lsn is\n'FFFFFFFF/FFFFFFFF' or a non-existing LSN.\n\n> > 4.\n> > + * This function is used to verify that there are no WAL records (except some\n> > + * types) after confirmed_flush_lsn of logical slots, which means all the\n> > + * changes were replicated to the subscriber. There is a possibility that some\n> > + * WALs are inserted during upgrade, so such types would be ignored.\n> > + *\n> >\n> > This comment before the function better be at the callsite of the\n> > function, because as far as this function is concerned, it checks if\n> > there are any WAL records that are not \"certain\" types after the given\n> > LSN, it doesn't know logical slots or confirmed_flush_lsn or such.\n> >\n>\n> Yeah, we should give information at the callsite but I guess we need\n> to give some context atop this function as well so that it is easier\n> to explain the functionality.\n\nAt the callsite a detailed description is good. At the function\ndefinition just a reference to the callsite is good.\n\n> > 5. Trying to understand the interaction of this feature with custom\n> > WAL records that a custom WAL resource manager puts in. Is it okay to\n> > have custom WAL records after the \"logical WAL end\"?\n> > + /*\n> > + * There is a possibility that following records may be generated\n> > + * during the upgrade.\n> > + */\n> >\n>\n> I don't think so. The only valid records for the checks in this\n> function are probably the ones that can get generated by the upgrade\n> process because we ensure that walsender sends all the records before\n> it exits at shutdown time.\n\nCan you help me understand how the list of WAL records that pg_upgrade\ncan generate is put up? 
Identified them after running some tests?\n\n> > 10.\n> > Why just wal_level and max_replication_slots, why not\n> > max_worker_processes and max_wal_senders too?\n>\n> Isn't it sufficient to check the parameters that are required to\n> create a slot aka what we check in the function\n> CheckLogicalDecodingRequirements()? We are only creating logical slots\n> here so I think that should be sufficient.\n\nAh, that makes sense.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 10:56:56 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 6:54 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 6.\n> > + if (PQntuples(res) != 1)\n> > + pg_fatal(\"could not count the number of logical replication slots\");\n> > +\n> >\n> > Not existing a single logical replication slot an error? I think it\n> > must be if (PQntuples(res) == 0) return;?\n> >\n>\n> The query executes \"SELECT count(*)...\", IIUC it exactly returns 1 row.\n\nAh, got it.\n\n> > 7. A nit:\n> > + nslots_on_new = atoi(PQgetvalue(res, 0, 0));\n> > +\n> > + if (nslots_on_new)\n> >\n> > Just do if(atoi(PQgetvalue(res, 0, 0)) > 0) and get rid of nslots_on_new?\n>\n> Note that the vaule would be used for upcoming pg_fatal. I prefer current style\n> because multiple atoi(PQgetvalue(res, 0, 0)) was not so beautiful.\n\n+1.\n\n> You mentioned at line 118, but at that time logical replication system is not created.\n> The subscriber is created at line 163.\n> Therefore WALs would not be consumed automatically.\n\nSo, not calling pg_logical_slot_get_changes() on test_slot1 won't\nconsume the WAL?\n\nA few more comments:\n\n1.\n+ /*\n+ * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n+ * checkpointer process. If WALs required by logical replication slots\n+ * are removed, the slots are unusable. This setting prevents the\n+ * invalidation of slots during the upgrade. We set this option when\n\nIIUC, during upgrade we don't want the checkpointer to remove WAL that\nmay be needed by logical slots, for that the patch overrides the user\nset value for max_slot_wal_keep_size. What if the WAL is removed\nbecause of the wal_keep_size setting?\n\n2.\n+++ b/src/bin/pg_upgrade/t/003_logical_replication_slots.pl\n\nHow about a more descriptive and pointed name for the TAP test file,\nsomething like 003_upgrade_logical_replication_slots.pl?\n\n3. Does this patch support upgrading of logical replication slots on a\nstreaming standby? If yes, isn't it a good idea to add one test for\nupgrading standby with logical replication slots?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 11:59:14 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 10:57 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 5:45 PM Amit Kapila <[email protected]> wrote:\n> > > 3.\n> > > + /* Quick exit if the given lsn is larger than current one */\n> > > + if (start_lsn >= GetFlushRecPtr(NULL))\n> > > + PG_RETURN_BOOL(false);\n> > > +\n> > >\n> > > An LSN that doesn't exists yet is an error IMO, may be an error better here?\n> > >\n> >\n> > It will anyway lead to error at later point but we will provide more\n> > information about all the slots that have invalid value of\n> > confirmed_flush LSN.\n>\n> I disagree with the function returning false for non-existing LSN.\n> IMO, failing fast when an LSN that doesn't exist yet is supplied to\n> the function is the right approach. We never know, the slots on disk\n> content can get corrupted for some reason and confirmed_flush_lsn is\n> 'FFFFFFFF/FFFFFFFF' or a non-existing LSN.\n>\n\nI don't think it is big deal to either fail immediately or slightly\nlater with more information about slot. It could be better if we do\nlater because various slots can have the same problem, so we can\nmention all such slots together.\n\n>\n> > > 5. Trying to understand the interaction of this feature with custom\n> > > WAL records that a custom WAL resource manager puts in. Is it okay to\n> > > have custom WAL records after the \"logical WAL end\"?\n> > > + /*\n> > > + * There is a possibility that following records may be generated\n> > > + * during the upgrade.\n> > > + */\n> > >\n> >\n> > I don't think so. The only valid records for the checks in this\n> > function are probably the ones that can get generated by the upgrade\n> > process because we ensure that walsender sends all the records before\n> > it exits at shutdown time.\n>\n> Can you help me understand how the list of WAL records that pg_upgrade\n> can generate is put up? Identified them after running some tests?\n>\n\nYeah, both by tests and manually verifying the WAL records. Basically,\nwe need to care about records that could be generated by background\nprocesses like checkpointer/bgwriter or can be generated during system\ntable scans. You may want to read my latest email for a summary on how\nwe reached at this design choice [1].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JVKZGRHLOEotWi%2Be%2B09jucNedqpkkc-Do4dh5FTAU%2B5w%40mail.gmail.com\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Sep 2023 12:11:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 11:59 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 6:54 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n>\n> 1.\n> + /*\n> + * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n> + * checkpointer process. If WALs required by logical replication slots\n> + * are removed, the slots are unusable. This setting prevents the\n> + * invalidation of slots during the upgrade. We set this option when\n>\n> IIUC, during upgrade we don't want the checkpointer to remove WAL that\n> may be needed by logical slots, for that the patch overrides the user\n> set value for max_slot_wal_keep_size. What if the WAL is removed\n> because of the wal_keep_size setting?\n>\n\nWe are fine with the WAL removal unless it can invalidate the slots\nwhich is prevented by max_slot_wal_keep_size.\n\n>\n> 3. Does this patch support upgrading of logical replication slots on a\n> streaming standby?\n>\n\nNo, and a note has been added by the patch for the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Sep 2023 12:14:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 10:57 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 5:45 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > Thanks for the patch. I have some comments on v42:\n> >\n> > Probably trying to keep it similar with\n> > binary_upgrade_create_empty_extension(). I think it depends what\n> > behaviour we expect for NULL input.\n>\n> confirmed_flush_lsn for a logical slot can be null (for instance,\n> before confirmed_flush is updated for a newly created logical slot if\n> someone calls pg_stat_replication -> pg_get_replication_slots) and\n> when it is so, the binary_upgrade_create_empty_extension errors out.\n> Is this behaviour wanted? I think the function returning null on null\n> input is a better behaviour here.\n>\n\nI think if we do return null on null behavior then the caller needs to\nadd a special case for null value as this function returns bool. We\ncan probably return false in that case. Does that help to address your\nconcern?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 22 Sep 2023 14:09:49 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nAgain, thank you for reviewing! Here is a new version patch.\r\n\r\n> 1.\r\n> +{ oid => '8046', descr => 'for use by pg_upgrade',\r\n> + proname => 'binary_upgrade_validate_wal_records', proisstrict => 'f',\r\n> + provolatile => 'v', proparallel => 'u', prorettype => 'bool',\r\n> \r\n> + if (PG_ARGISNULL(0))\r\n> + elog(ERROR, \"null argument to\r\n> binary_upgrade_validate_wal_records is not allowed\");\r\n> \r\n> Can proisstrict => 'f' be removed so that there's no need for explicit\r\n> PG_ARGISNULL check? Any specific reason to keep it?\r\n\r\nTheoretically it could be, but I was not sure. I think you wanted us to follow\r\nspecs of pg_walinspect functions, but it is just a upgrade function. Normally\r\nusers cannot call it. Also, as Amit said [1], the caller must consider the\r\nspecial case. Currently the function returns false at that time, we can change\r\nmore appropriate style later.\r\n\r\n> And, the before the ISNULL check the arg is read, which isn't good.\r\n\r\nRight, fixed.\r\n\r\n> 2.\r\n> +Datum\r\n> +binary_upgrade_validate_wal_records(PG_FUNCTION_ARGS)\r\n> \r\n> The function name looks too generic in the sense that it validates WAL\r\n> records for correctness/corruption, but it is not. Can it be something\r\n> like binary_upgrade_{check_for_wal_logical_end,\r\n> check_for_logical_end_of_wal} or such?\r\n\r\nPer discussion [2], changed to binary_upgrade_validate_wal_logical_end.\r\n\r\n> 3.\r\n> + /* Quick exit if the given lsn is larger than current one */\r\n> + if (start_lsn >= GetFlushRecPtr(NULL))\r\n> + PG_RETURN_BOOL(false);\r\n> +\r\n> \r\n> An LSN that doesn't exists yet is an error IMO, may be an error better here?\r\n\r\nWe think that the invalid slots should be listed at the end, so basically we do\r\nnot want to error out. This would be also changed if there are better opinions.\r\n\r\n> 4.\r\n> + * This function is used to verify that there are no WAL records (except some\r\n> + * types) after confirmed_flush_lsn of logical slots, which means all the\r\n> + * changes were replicated to the subscriber. There is a possibility that some\r\n> + * WALs are inserted during upgrade, so such types would be ignored.\r\n> + *\r\n> \r\n> This comment before the function better be at the callsite of the\r\n> function, because as far as this function is concerned, it checks if\r\n> there are any WAL records that are not \"certain\" types after the given\r\n> LSN, it doesn't know logical slots or confirmed_flush_lsn or such.\r\n\r\nHmm, I think it is better to do the reverse, because otherwise we need to mention\r\nthe same explanation at other caller of the function if any. So, I have\r\nadjusted the comments atop and at caller. Thought?\r\n\r\n> 8.\r\n> + if (nslots_on_new)\r\n> + pg_fatal(\"expected 0 logical replication slots but found %d\",\r\n> + nslots_on_new);\r\n> \r\n> How about \"New cluster database is containing logical replication\r\n> slots\", note that the some of the fatal messages are starting with an\r\n> upper-case letter.\r\n\r\nI did not use your suggestion, but changed to upper-case.\r\nActually, the uppper-case rule is broken even in the file. 
Here I regarded\r\nthis sentence as hint message.\r\n\r\n> 9.\r\n> + res = executeQueryOrDie(conn, \"SHOW wal_level;\");\r\n> + res = executeQueryOrDie(conn, \"SHOW max_replication_slots;\");\r\n> \r\n> Instead of 2 queries to determine required parameters, isn't it better\r\n> with a single query like the following?\r\n> \r\n> select setting from pg_settings where name in ('wal_level',\r\n> 'max_replication_slots') order by name;\r\n\r\nModified, but use ORDER BY ... DESC. This come from a previous comment [3].\r\n\r\n> \r\n> 12.\r\n> +extern XLogReaderState *InitXLogReaderState(XLogRecPtr lsn);\r\n> +extern XLogRecord *ReadNextXLogRecord(XLogReaderState *xlogreader);\r\n> +\r\n> \r\n> Why not these functions be defined in xlogreader.h with elog/ereport\r\n> in #ifndef FRONTEND #endif blocks? IMO, xlogreader.h seems right\r\n> location for these functions.\r\n\r\nI checked comments atop both files, and xlogreader.h seems better. Fixed.\r\n\r\n> 13.\r\n> +LogicalReplicationSlotInfo\r\n> \r\n> Where is this structure defined?\r\n\r\nOpps, removed.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1LxPDeSkTttEAG2MPEWO%3D83vQe_Bja9F4QcCjVn%3DWt9rA%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1L9oJmdxprFR3oob5KLpHUnkJAt5Le4woxO3wHz-SZ%2BTA%40mail.gmail.com\r\n[3]: https://www.postgresql.org/message-id/CAA4eK1LHH_%3DwbxsEn20%3DW%2Bqz1193OqFj-vvJ-u0uHLMmwLHbRw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Sat, 23 Sep 2023 04:48:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
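As a rough illustration of how the renamed validation function is meant to be driven, the sketch below passes each logical slot's confirmed_flush_lsn to it. This is hand-written for illustration only: the function is usable only in binary-upgrade mode, its real caller is C code inside pg_upgrade, and the column alias is made up here.

```sql
-- Illustration only: ask, per logical slot, whether anything other than the
-- allowed record types appears in WAL after its confirmed_flush_lsn.
SELECT slot_name,
       binary_upgrade_validate_wal_logical_end(confirmed_flush_lsn) AS wal_is_logical_end
FROM pg_catalog.pg_replication_slots
WHERE slot_type = 'logical';
```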
{
"msg_contents": "Dear Bharath,\r\n\r\n> > You mentioned line 118, but at that point the logical replication setup has not\r\n> > been created yet.\r\n> > The subscriber is created at line 163.\r\n> > Therefore the WAL would not be consumed automatically.\r\n> \r\n> So, not calling pg_logical_slot_get_changes() on test_slot1 won't\r\n> consume the WAL?\r\n\r\nYes. This slot was created manually, and nothing activates it automatically.\r\npg_logical_slot_get_changes() could consume the WAL, but it is never called.\r\n\r\n> \r\n> 2.\r\n> +++ b/src/bin/pg_upgrade/t/003_logical_replication_slots.pl\r\n> \r\n> How about a more descriptive and pointed name for the TAP test file,\r\n> something like 003_upgrade_logical_replication_slots.pl?\r\n\r\nGood suggestion. Renamed.\r\n\r\n> 3. Does this patch support upgrading of logical replication slots on a\r\n> streaming standby? If yes, isn't it a good idea to add one test for\r\n> upgrading standby with logical replication slots?\r\n\r\nIIUC pg_upgrade would not be used for a physical standby. The standby would be upgraded by:\r\n\r\n* Recreating the database cluster, or\r\n* Running the rsync command.\r\n\r\nFor more details, please see the documentation.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Sat, 23 Sep 2023 04:49:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n>\n> Yeah, both by tests and manually verifying the WAL records. Basically,\n> we need to care about records that could be generated by background\n> processes like checkpointer/bgwriter or can be generated during system\n> table scans. You may want to read my latest email for a summary on how\n> we reached at this design choice [1].\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1JVKZGRHLOEotWi%2Be%2B09jucNedqpkkc-Do4dh5FTAU%2B5w%40mail.gmail.com\n\n+ /* Logical slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ {\n\nWhy can't the patch allow migration of logical replication slots from\nPG versions < 17 to say 17 or later? If done, it will be a main\nadvantage of the patch since it will enable seamless major version\nupgrades of postgres database instances with logical replication\nslots.\n\nI'm looking at the changes to the postgres backend that this patch\ndoes - AFICS, it does 2 things 1) implements\nbinary_upgrade_validate_wal_logical_end function, 2) adds an assertion\nthat the logical slots won't get invalidated. For (1), pg_upgrade can\nitself can read the WAL from the old cluster to determine the logical\nWAL end (i.e. implement the functionality of\nbinary_upgrade_validate_wal_logical_end ) because the xlogreader is\navailable to FRONTEND tools. For (2), it's just an assertion and\nlogical WAL end determining logic will anyway determine whether or not\nthe slots are valid; if needed, the assertion can be backported.\n\nIs there anything else that stops this patch from supporting migration\nof logical replication slots from PG versions < 17?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 11:15:31 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 11:15 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Sep 22, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Yeah, both by tests and manually verifying the WAL records. Basically,\n> > we need to care about records that could be generated by background\n> > processes like checkpointer/bgwriter or can be generated during system\n> > table scans. You may want to read my latest email for a summary on how\n> > we reached at this design choice [1].\n> >\n> > [1] - https://www.postgresql.org/message-id/CAA4eK1JVKZGRHLOEotWi%2Be%2B09jucNedqpkkc-Do4dh5FTAU%2B5w%40mail.gmail.com\n>\n> + /* Logical slots can be migrated since PG17. */\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> + {\n>\n> Why can't the patch allow migration of logical replication slots from\n> PG versions < 17 to say 17 or later? If done, it will be a main\n> advantage of the patch since it will enable seamless major version\n> upgrades of postgres database instances with logical replication\n> slots.\n>\n> I'm looking at the changes to the postgres backend that this patch\n> does - AFICS, it does 2 things 1) implements\n> binary_upgrade_validate_wal_logical_end function, 2) adds an assertion\n> that the logical slots won't get invalidated. For (1), pg_upgrade can\n> itself can read the WAL from the old cluster to determine the logical\n> WAL end (i.e. implement the functionality of\n> binary_upgrade_validate_wal_logical_end ) because the xlogreader is\n> available to FRONTEND tools. For (2), it's just an assertion and\n> logical WAL end determining logic will anyway determine whether or not\n> the slots are valid; if needed, the assertion can be backported.\n>\n> Is there anything else that stops this patch from supporting migration\n> of logical replication slots from PG versions < 17?\n\nIMHO one of the main change we are doing in PG 17 is that on shutdown\ncheckpoint we are ensuring that if the confirmed flush lsn is updated\nsince the last checkpoint and that is not yet synched to the disk then\nwe are doing so. I think this is the most important change otherwise\nmany slots for which we have already streamed all the WAL might give\nan error assuming that there are pending WAL from the slots which are\nnot yet confirmed.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 12:30:07 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
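A quick way to see the condition Dilip describes on a running server is to compare each logical slot's confirmed_flush_lsn with the current WAL position. This is only an illustration of the "has everything been streamed" question; the patch itself decides this by reading the WAL records, not by a catalog query.

```sql
-- Illustration: slots whose confirmed_flush_lsn has not caught up to the
-- current insert position may still have WAL left to decode.
SELECT slot_name, confirmed_flush_lsn,
       pg_current_wal_insert_lsn() AS current_lsn,
       confirmed_flush_lsn >= pg_current_wal_insert_lsn() AS caught_up
FROM pg_catalog.pg_replication_slots
WHERE slot_type = 'logical';
```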
{
"msg_contents": "On Mon, Sep 25, 2023 at 12:30 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Mon, Sep 25, 2023 at 11:15 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Fri, Sep 22, 2023 at 12:11 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > Yeah, both by tests and manually verifying the WAL records. Basically,\n> > > we need to care about records that could be generated by background\n> > > processes like checkpointer/bgwriter or can be generated during system\n> > > table scans. You may want to read my latest email for a summary on how\n> > > we reached at this design choice [1].\n> > >\n> > > [1] - https://www.postgresql.org/message-id/CAA4eK1JVKZGRHLOEotWi%2Be%2B09jucNedqpkkc-Do4dh5FTAU%2B5w%40mail.gmail.com\n> >\n> > + /* Logical slots can be migrated since PG17. */\n> > + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n> > + {\n> >\n> > Why can't the patch allow migration of logical replication slots from\n> > PG versions < 17 to say 17 or later? If done, it will be a main\n> > advantage of the patch since it will enable seamless major version\n> > upgrades of postgres database instances with logical replication\n> > slots.\n> >\n> > I'm looking at the changes to the postgres backend that this patch\n> > does - AFICS, it does 2 things 1) implements\n> > binary_upgrade_validate_wal_logical_end function, 2) adds an assertion\n> > that the logical slots won't get invalidated. For (1), pg_upgrade can\n> > itself can read the WAL from the old cluster to determine the logical\n> > WAL end (i.e. implement the functionality of\n> > binary_upgrade_validate_wal_logical_end ) because the xlogreader is\n> > available to FRONTEND tools. For (2), it's just an assertion and\n> > logical WAL end determining logic will anyway determine whether or not\n> > the slots are valid; if needed, the assertion can be backported.\n> >\n> > Is there anything else that stops this patch from supporting migration\n> > of logical replication slots from PG versions < 17?\n>\n> IMHO one of the main change we are doing in PG 17 is that on shutdown\n> checkpoint we are ensuring that if the confirmed flush lsn is updated\n> since the last checkpoint and that is not yet synched to the disk then\n> we are doing so. I think this is the most important change otherwise\n> many slots for which we have already streamed all the WAL might give\n> an error assuming that there are pending WAL from the slots which are\n> not yet confirmed.\n>\n\nYou might need to refer to [1] for the change I am talking about\n\n[1] https://www.postgresql.org/message-id/CAA4eK1%2BLtWDKXvxS7gnJ562VX%2Bs3C6%2B0uQWamqu%3DUuD8hMfORg%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 12:32:16 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sat, Sep 23, 2023 at 10:18 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Again, thank you for reviewing! Here is a new version patch.\n\nHere are some more comments/thoughts on the v44 patch:\n\n1.\n+# pg_upgrade will fail because the slot still has unconsumed WAL records\n+command_fails(\n+ [\n\nAdd a test case to hit fprintf(script, \"The slot \\\"%s\\\" is invalid\\n\",\nfile as well?\n\n2.\n+ 'run of pg_upgrade where the new cluster has insufficient\nmax_replication_slots');\n+ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\n+ \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\n\n+ 'run of pg_upgrade where the new cluster has the wrong wal_level');\n+ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\n+ \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\n\n+ 'run of pg_upgrade of old cluster with idle replication slots');\n+ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\n+ \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\n\nHow do these tests recognize the failures are the intended ones? I\nmean, for instance when pg_upgrade fails for unused replication\nslots/unconsumed WAL records, then just looking at the presence of\npg_upgrade_output.d might not be sufficient, no? Using\ncommand_fails_like instead of command_fails and looking at the\ncontents of invalid_logical_relication_slots.txt might help make these\ntests more focused.\n\n3.\n+ pg_log(PG_REPORT, \"fatal\");\n+ pg_fatal(\"Your installation contains invalid logical\nreplication slots.\\n\"\n+ \"These slots can't be copied, so this cluster cannot\nbe upgraded.\\n\"\n+ \"Consider removing such slots or consuming the\npending WAL if any,\\n\"\n+ \"and then restart the upgrade.\\n\"\n+ \"A list of invalid logical replication slots is in\nthe file:\\n\"\n+ \" %s\", output_path);\n\nIt's not just the invalid logical replication slots, but also the\nslots with unconsumed WALs which aren't invalid and can be upgraded if\nensured the WAL is consumed. So, a better wording would be:\n pg_fatal(\"Your installation contains logical replication slots\nthat cannot be upgraded.\\n\"\n \"List of all such logical replication slots is in the file:\\n\"\n \"These slots can't be copied, so this cluster cannot\nbe upgraded.\\n\"\n \"Consider removing invalid slots and/or consuming the\npending WAL if any,\\n\"\n \"and then restart the upgrade.\\n\"\n \" %s\", output_path);\n\n4.\n+ /*\n+ * There is a possibility that following records may be generated\n+ * during the upgrade.\n+ */\n+ is_valid = is_xlog_record_type(rmid, info, RM_XLOG_ID,\nXLOG_CHECKPOINT_SHUTDOWN) ||\n+ is_xlog_record_type(rmid, info, RM_XLOG_ID,\nXLOG_CHECKPOINT_ONLINE) ||\n+ is_xlog_record_type(rmid, info, RM_XLOG_ID, XLOG_SWITCH) ||\n+ is_xlog_record_type(rmid, info, RM_XLOG_ID, XLOG_FPI_FOR_HINT) ||\n+ is_xlog_record_type(rmid, info, RM_XLOG_ID,\nXLOG_PARAMETER_CHANGE) ||\n+ is_xlog_record_type(rmid, info, RM_STANDBY_ID,\nXLOG_RUNNING_XACTS) ||\n+ is_xlog_record_type(rmid, info, RM_HEAP2_ID, XLOG_HEAP2_PRUNE);\n\nWhat if we missed to capture the WAL records that may be generated\nduring upgrade?\n\nWhat happens if a custom WAL resource manager generates table/index AM\nWAL records during upgrade?\n\nWhat happens if new WAL records are added that may be generated during\nthe upgrade? Isn't keeping this code extensible and in sync with\nfuture changes a problem? 
Or we'd better say that any custom WAL\nrecords are found after the slot's confirmed flush LSN, then the slot\nisn't upgraded?\n\n5. In continuation to the above comment:\n\nWhy can't this logic be something like - if there's any WAL record\nseen after a slot's confirmed flush LSN is of type generated by WAL\nresource manager having the rm_decode function defined, then the slot\ncan't be upgraded.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 13:06:16 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 12:32 PM Dilip Kumar <[email protected]> wrote:\n>\n> > > Is there anything else that stops this patch from supporting migration\n> > > of logical replication slots from PG versions < 17?\n> >\n> > IMHO one of the main change we are doing in PG 17 is that on shutdown\n> > checkpoint we are ensuring that if the confirmed flush lsn is updated\n> > since the last checkpoint and that is not yet synched to the disk then\n> > we are doing so. I think this is the most important change otherwise\n> > many slots for which we have already streamed all the WAL might give\n> > an error assuming that there are pending WAL from the slots which are\n> > not yet confirmed.\n> >\n>\n> You might need to refer to [1] for the change I am talking about\n>\n> [1] https://www.postgresql.org/message-id/CAA4eK1%2BLtWDKXvxS7gnJ562VX%2Bs3C6%2B0uQWamqu%3DUuD8hMfORg%40mail.gmail.com\n\nI see. IIUC, without that commit e0b2eed [1], it may happen that the\nslot's on-disk confirmed_flush LSN value can be higher than the WAL\nLSN that's flushed to disk, no? If so, can't it be detected if the WAL\nat confirmed_flush LSN is valid or not when reading WAL with\nxlogreader machinery?\n\nWhat if the commit e0b2eed [1] is treated to be fixing a bug with the\nreasoning [2] and backpatch? When done so, it's easy to support\nupgradation/migration of logical replication slots from PG versions <\n17, no?\n\n[1]\ncommit e0b2eed047df9045664da6f724cb42c10f8b12f0\nAuthor: Amit Kapila <[email protected]>\nDate: Thu Sep 14 08:56:13 2023 +0530\n\n Flush logical slots to disk during a shutdown checkpoint if required.\n\n[2]\n It can also help avoid processing the same transactions again in some\n boundary cases after the clean shutdown and restart. Say, we process\n some transactions for which we didn't send anything downstream (the\n changes got filtered) but the confirm_flush LSN is updated due to\n keepalives. As we don't flush the latest value of confirm_flush LSN, it\n may lead to processing the same changes again without this patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 13:23:19 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 1:23 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Sep 25, 2023 at 12:32 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > > > Is there anything else that stops this patch from supporting migration\n> > > > of logical replication slots from PG versions < 17?\n> > >\n> > > IMHO one of the main change we are doing in PG 17 is that on shutdown\n> > > checkpoint we are ensuring that if the confirmed flush lsn is updated\n> > > since the last checkpoint and that is not yet synched to the disk then\n> > > we are doing so. I think this is the most important change otherwise\n> > > many slots for which we have already streamed all the WAL might give\n> > > an error assuming that there are pending WAL from the slots which are\n> > > not yet confirmed.\n> > >\n> >\n> > You might need to refer to [1] for the change I am talking about\n> >\n> > [1] https://www.postgresql.org/message-id/CAA4eK1%2BLtWDKXvxS7gnJ562VX%2Bs3C6%2B0uQWamqu%3DUuD8hMfORg%40mail.gmail.com\n>\n> I see. IIUC, without that commit e0b2eed [1], it may happen that the\n> slot's on-disk confirmed_flush LSN value can be higher than the WAL\n> LSN that's flushed to disk, no? If so, can't it be detected if the WAL\n> at confirmed_flush LSN is valid or not when reading WAL with\n> xlogreader machinery?\n\nActually, without this commit the slot's \"confirmed_flush LSN\" value\nin memory can be higher than the disk because if you notice this\nfunction LogicalConfirmReceivedLocation(), if we change only the\nconfirmed flush the slot is not marked dirty that means on shutdown\nthe slot will not be persisted to the disk. But logically this will\nnot cause any issue so we can not treat it as a bug it may cause us to\nprocess some extra records after the restart but that is not really a\nbug.\n\n> What if the commit e0b2eed [1] is treated to be fixing a bug with the\n> reasoning [2] and backpatch? When done so, it's easy to support\n> upgradation/migration of logical replication slots from PG versions <\n> 17, no?\n\nMaybe this could be backpatched in order to support this upgrade from\nthe older version but not as a bug fix.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:03:41 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 1:23 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Sep 25, 2023 at 12:32 PM Dilip Kumar <[email protected]> wrote:\n> >\n> > > > Is there anything else that stops this patch from supporting migration\n> > > > of logical replication slots from PG versions < 17?\n> > >\n> > > IMHO one of the main change we are doing in PG 17 is that on shutdown\n> > > checkpoint we are ensuring that if the confirmed flush lsn is updated\n> > > since the last checkpoint and that is not yet synched to the disk then\n> > > we are doing so. I think this is the most important change otherwise\n> > > many slots for which we have already streamed all the WAL might give\n> > > an error assuming that there are pending WAL from the slots which are\n> > > not yet confirmed.\n> > >\n> >\n> > You might need to refer to [1] for the change I am talking about\n> >\n> > [1] https://www.postgresql.org/message-id/CAA4eK1%2BLtWDKXvxS7gnJ562VX%2Bs3C6%2B0uQWamqu%3DUuD8hMfORg%40mail.gmail.com\n>\n> I see. IIUC, without that commit e0b2eed [1], it may happen that the\n> slot's on-disk confirmed_flush LSN value can be higher than the WAL\n> LSN that's flushed to disk, no?\n>\n\nNo, without that commit, there is a very high possibility that even if\nwe have sent the WAL to the subscriber and got the acknowledgment of\nthe same, we would miss updating it before shutdown. This would lead\nto upgrade failures because upgrades have no way to later identify\nwhether the remaining WAL records are sent to the subscriber.\n\n> If so, can't it be detected if the WAL\n> at confirmed_flush LSN is valid or not when reading WAL with\n> xlogreader machinery?\n>\n> What if the commit e0b2eed [1] is treated to be fixing a bug with the\n> reasoning [2] and backpatch? When done so, it's easy to support\n> upgradation/migration of logical replication slots from PG versions <\n> 17, no?\n>\n\nYeah, we could try to make a case to backpatch it but when I raised\nthat point there was not much consensus on backpatching it. We are\naware and understand that if we could backpatch it then the prior\nversion slots be upgraded but the case to backpatch needs broader\nconsensus. For now, the idea is to get the core of the functionality\nto be committed and then we can see if we get the consensus on\nbackpatching the commit you mentioned and probably changing the\nversion checks in this work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Sep 2023 14:06:33 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nThank you for giving comments! Before addressing your comments,\r\nI wanted to reply some of them.\r\n\r\n> 4.\r\n> + /*\r\n> + * There is a possibility that following records may be generated\r\n> + * during the upgrade.\r\n> + */\r\n> + is_valid = is_xlog_record_type(rmid, info, RM_XLOG_ID,\r\n> XLOG_CHECKPOINT_SHUTDOWN) ||\r\n> + is_xlog_record_type(rmid, info, RM_XLOG_ID,\r\n> XLOG_CHECKPOINT_ONLINE) ||\r\n> + is_xlog_record_type(rmid, info, RM_XLOG_ID, XLOG_SWITCH) ||\r\n> + is_xlog_record_type(rmid, info, RM_XLOG_ID,\r\n> XLOG_FPI_FOR_HINT) ||\r\n> + is_xlog_record_type(rmid, info, RM_XLOG_ID,\r\n> XLOG_PARAMETER_CHANGE) ||\r\n> + is_xlog_record_type(rmid, info, RM_STANDBY_ID,\r\n> XLOG_RUNNING_XACTS) ||\r\n> + is_xlog_record_type(rmid, info, RM_HEAP2_ID,\r\n> XLOG_HEAP2_PRUNE);\r\n> \r\n> What if we missed to capture the WAL records that may be generated\r\n> during upgrade?\r\n\r\nIf such records are generated before calling binary_upgrade_validate_wal_logical_end(),\r\nthe upgrading would fail. Otherwise it would be succeeded. Anyway, we don't care\r\nsuch records because those aren't required to be replicated. The main thing we\r\nwant to detect is that we don't miss any record generated before server shutdown.\r\n\r\n> \r\n> What happens if a custom WAL resource manager generates table/index AM\r\n> WAL records during upgrade?\r\n\r\nIf such records are found, definitely we cannot distinguish whether it is acceptable.\r\nWe do not have a way to know the property of custom WALs. We didn't care as there\r\nare other problems in the approach, if such a facility is invoked.\r\nPlease see the similar discussion [1].\r\n\r\n> \r\n> What happens if new WAL records are added that may be generated during\r\n> the upgrade? Isn't keeping this code extensible and in sync with\r\n> future changes a problem? \r\n\r\nActually, others also pointed out the similar point. Originally we just checked\r\nconfirmed_flush_lsn and \"latest checkpoint lsn\" reported by pg_controldata, but\r\nfound an issue what the upgrading cannot be passed if users do pg_upgrade --check\r\njust before the actual upgrade. Then we discussed some idea but they have some\r\ndisadvantages, so we settled on the current idea. Here is a summary which\r\ndescribes current situation it would be quite helpful [2]\r\n(maybe you have already known).\r\n\r\n> Or we'd better say that any custom WAL\r\n> records are found after the slot's confirmed flush LSN, then the slot\r\n> isn't upgraded?\r\n\r\nAfter concluding how we ensure, we can add the sentence accordingly.\r\n\r\n\r\n> \r\n> 5. In continuation to the above comment:\r\n> \r\n> Why can't this logic be something like - if there's any WAL record\r\n> seen after a slot's confirmed flush LSN is of type generated by WAL\r\n> resource manager having the rm_decode function defined, then the slot\r\n> can't be upgraded.\r\n\r\nThank you for giving new approach! We have never seen the approach before,\r\nbut at least XLOG and HEAP2 rmgr have a decode function. So that\r\nXLOG_CHECKPOINT_SHUTDOWN, XLOG_CHECKPOINT_ONLINE, and XLOG_HEAP2_PRUNE cannot\r\nbe ignored the approach, seems not appropriate.\r\nIf you have another approach, I'm very happy if you post.\r\n\r\n[1]: https://www.postgresql.org/message-id/ZNZ4AxUMIrnMgRbo%40momjian.us\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1JVKZGRHLOEotWi%2Be%2B09jucNedqpkkc-Do4dh5FTAU%2B5w%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 25 Sep 2023 11:01:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Monday, September 25, 2023 7:01 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> To: 'Bharath Rupireddy' <[email protected]>\r\n> Cc: Amit Kapila <[email protected]>; Dilip Kumar\r\n> >\r\n> > 5. In continuation to the above comment:\r\n> >\r\n> > Why can't this logic be something like - if there's any WAL record\r\n> > seen after a slot's confirmed flush LSN is of type generated by WAL\r\n> > resource manager having the rm_decode function defined, then the slot\r\n> > can't be upgraded.\r\n> \r\n> Thank you for giving new approach! We have never seen the approach before,\r\n> but at least XLOG and HEAP2 rmgr have a decode function. So that\r\n> XLOG_CHECKPOINT_SHUTDOWN, XLOG_CHECKPOINT_ONLINE, and\r\n> XLOG_HEAP2_PRUNE cannot be ignored the approach, seems not appropriate.\r\n> If you have another approach, I'm very happy if you post.\r\n\r\nAnother idea around decoding is to check if there is any decoding output for\r\nthe WAL records.\r\n\r\nLike we can create a temp slot and use test_decoding to decode the WAL from the\r\nconfirmed_flush_lsn among existing logical replication slots. And if there is\r\nany output from the output plugin, then we consider WAL has not been consumed\r\nyet.\r\n\r\nBut this means we need to ignore some of the WALs like XLOG_XACT_INVALIDATIONS\r\nwhich won't be decoded into the output. Also, this approach could be costly as\r\nit needs to do the extra decoding and output, and we need to assume that \"all the\r\nWAL records including custom records will be decoded and output if they need to\r\nbe consumed\" .\r\n\r\nSo it may not be better, but just share it for reference.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 26 Sep 2023 04:48:40 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
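As a rough sketch of the decoding-based idea above: peeking a slot's pending changes without consuming them shows whether anything would still be decoded from its confirmed_flush_lsn. The actual proposal creates a separate temporary slot with test_decoding; the one-liner below only illustrates the "is there any decodable output" question, assumes the slot's output plugin is installed, and uses a made-up slot name.

```sql
-- Illustration only: count the changes that would still be decoded for a
-- given slot, without advancing its confirmed_flush_lsn.
SELECT count(*) AS pending_changes
FROM pg_logical_slot_peek_changes('my_slot', NULL, NULL);
```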
{
"msg_contents": "Dear Bharath,\r\n\r\nAgain, thank you for reviewing! PSA a new version.\r\n\r\n> \r\n> Here are some more comments/thoughts on the v44 patch:\r\n> \r\n> 1.\r\n> +# pg_upgrade will fail because the slot still has unconsumed WAL records\r\n> +command_fails(\r\n> + [\r\n> \r\n> Add a test case to hit fprintf(script, \"The slot \\\"%s\\\" is invalid\\n\",\r\n> file as well?\r\n\r\nAdded. The test was not added because 002_pg_upgrade.pl did not do similar checks,\r\nbut it is worth verifying. One difficulty was that output directory had millisecond\r\ntimestamp, so the absolute path could not be predicted. So File::Find::find was\r\nused to detect the file.\r\n\r\n> 2.\r\n> + 'run of pg_upgrade where the new cluster has insufficient\r\n> max_replication_slots');\r\n> +ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\r\n> + \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\r\n> \r\n> + 'run of pg_upgrade where the new cluster has the wrong wal_level');\r\n> +ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\r\n> + \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\r\n> \r\n> + 'run of pg_upgrade of old cluster with idle replication slots');\r\n> +ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\r\n> + \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\r\n> \r\n> How do these tests recognize the failures are the intended ones? I\r\n> mean, for instance when pg_upgrade fails for unused replication\r\n> slots/unconsumed WAL records, then just looking at the presence of\r\n> pg_upgrade_output.d might not be sufficient, no? Using\r\n> command_fails_like instead of command_fails and looking at the\r\n> contents of invalid_logical_relication_slots.txt might help make these\r\n> tests more focused.\r\n\r\nYeah, currently the output was not checked. I checked and found that pg_upgrade\r\nwould output all messages (including error message) to stdout, so\r\ncommand_fails_like() could not be used. Therefore, command_checks_all() was used\r\ninstead.\r\n\r\n> 3.\r\n> + pg_log(PG_REPORT, \"fatal\");\r\n> + pg_fatal(\"Your installation contains invalid logical\r\n> replication slots.\\n\"\r\n> + \"These slots can't be copied, so this cluster cannot\r\n> be upgraded.\\n\"\r\n> + \"Consider removing such slots or consuming the\r\n> pending WAL if any,\\n\"\r\n> + \"and then restart the upgrade.\\n\"\r\n> + \"A list of invalid logical replication slots is in\r\n> the file:\\n\"\r\n> + \" %s\", output_path);\r\n> \r\n> It's not just the invalid logical replication slots, but also the\r\n> slots with unconsumed WALs which aren't invalid and can be upgraded if\r\n> ensured the WAL is consumed. So, a better wording would be:\r\n> pg_fatal(\"Your installation contains logical replication slots\r\n> that cannot be upgraded.\\n\"\r\n> \"List of all such logical replication slots is in the file:\\n\"\r\n> \"These slots can't be copied, so this cluster cannot\r\n> be upgraded.\\n\"\r\n> \"Consider removing invalid slots and/or consuming the\r\n> pending WAL if any,\\n\"\r\n> \"and then restart the upgrade.\\n\"\r\n> \" %s\", output_path);\r\n\r\nFixed.\r\n\r\nAlso, I ran pgperltidy. Some formattings were changed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 26 Sep 2023 05:21:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Sep 26, 2023 at 10:51 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Again, thank you for reviewing! PSA a new version.\n\nThanks for the new patch. Here's a comment on v46:\n\n1.\n+Datum\n+binary_upgrade_validate_wal_logical_end(PG_FUNCTION_ARGS\n+{ oid => '8046', descr => 'for use by pg_upgrade',\n+ proname => 'binary_upgrade_validate_wal_logical_end', proisstrict => 'f',\n+ provolatile => 'v', proparallel => 'u', prorettype => 'bool',\n+ proargtypes => 'pg_lsn',\n+ prosrc => 'binary_upgrade_validate_wal_logical_end' },\n\nI think this patch can avoid catalog changes by turning\nbinary_upgrade_validate_wal_logical_end a FRONTEND-only function\nsitting in xlogreader.c after making InitXLogReaderState(),\nReadNextXLogRecord() FRONTEND-friendly (replace elog/ereport with\npg_fatal or such). With this change and back-porting of commit\ne0b2eed0 to save logical slots at shutdown, the patch can help support\nupgrading logical replication slots on PG versions < 17.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 26 Sep 2023 16:42:57 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nThank you for reviewing!\r\n\r\n> Thanks for the new patch. Here's a comment on v46:\r\n> \r\n> 1.\r\n> +Datum\r\n> +binary_upgrade_validate_wal_logical_end(PG_FUNCTION_ARGS\r\n> +{ oid => '8046', descr => 'for use by pg_upgrade',\r\n> + proname => 'binary_upgrade_validate_wal_logical_end', proisstrict => 'f',\r\n> + provolatile => 'v', proparallel => 'u', prorettype => 'bool',\r\n> + proargtypes => 'pg_lsn',\r\n> + prosrc => 'binary_upgrade_validate_wal_logical_end' },\r\n> \r\n> I think this patch can avoid catalog changes by turning\r\n> binary_upgrade_validate_wal_logical_end a FRONTEND-only function\r\n> sitting in xlogreader.c after making InitXLogReaderState(),\r\n> ReadNextXLogRecord() FRONTEND-friendly (replace elog/ereport with\r\n> pg_fatal or such). With this change and back-porting of commit\r\n> e0b2eed0 to save logical slots at shutdown, the patch can help support\r\n> upgrading logical replication slots on PG versions < 17.\r\n\r\nHmm, I think your suggestion may be questionable.\r\n\r\nIf we implement the upgrading function as FRONTEND-only (I have not checked its\r\nfeasibility), it means pg_upgrade uses the latest version WAL reader API to read\r\nWALs in old version cluster, which I didn't think is suggested.\r\n\r\nEach WAL page header has a magic number, XLOG_PAGE_MAGIC, which indicates the\r\ncontent of WAL. Sometimes the value has been changed due to the changes of WAL\r\ncontents, and some functions requires that the magic number must be same as\r\nexpected. E.g., startup process and pg_walinspect functions require that.\r\nTypically XLogReaderValidatePageHeader() ensures the equality.\r\n\r\nNow some functions are ported from pg_walinspect, so upgrading function requires\r\nsame restriction. I think we should not ease the restriction to verify the\r\ncompleteness of files. Followings are the call stack of ported functions\r\ntill XLogReaderValidatePageHeader().\r\n\r\n```\r\nInitXLogReaderState()\r\nXLogFindNextRecord()\r\nReadPageInternal()\r\nXLogReaderValidatePageHeader()\r\n```\r\n\r\n```\r\nReadNextXLogRecord()\r\nXLogReadRecord()\r\nXLogReadAhead()\r\nXLogDecodeNextRecord()\r\nReadPageInternal()\r\nXLogReaderValidatePageHeader()\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Wed, 27 Sep 2023 04:34:16 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 2:06 PM Amit Kapila <[email protected]> wrote:\n>\n> > > [1] https://www.postgresql.org/message-id/CAA4eK1%2BLtWDKXvxS7gnJ562VX%2Bs3C6%2B0uQWamqu%3DUuD8hMfORg%40mail.gmail.com\n> >\n> > I see. IIUC, without that commit e0b2eed [1], it may happen that the\n> > slot's on-disk confirmed_flush LSN value can be higher than the WAL\n> > LSN that's flushed to disk, no?\n> >\n>\n> No, without that commit, there is a very high possibility that even if\n> we have sent the WAL to the subscriber and got the acknowledgment of\n> the same, we would miss updating it before shutdown. This would lead\n> to upgrade failures because upgrades have no way to later identify\n> whether the remaining WAL records are sent to the subscriber.\n\nThanks for clarifying. I'm trying understand what happens without\ncommit e0b2eed0 with an illustration:\n\nstep 1: publisher - confirmed_flush LSN in replication slot on disk\nstructure is 80\nstep 2: publisher - sends WAL at LSN 100\nstep 3: subscriber - acknowledges the apply LSN or confirmed_flush LSN as 100\nstep 4: publisher - shuts down without writing the new confirmed_flush\nLSN as 100 to disk, note that commit e0b2eed0 is not in place\nstep 5: publisher - restarts\nstep 6: subscriber - upon publisher restart, the subscriber requests\nWAL from publisher from LSN 100 as it tracks the last applied LSN in\nreplication origin\n\nNow, if the pg_upgrade with the patch in this thread is run on\npublisher after step 4, it complains with \"The slot \\\"%s\\\" has not\nconsumed the WAL yet\".\n\nIs my above understanding right?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Sep 2023 10:44:06 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 10:44 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Sep 25, 2023 at 2:06 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > > [1] https://www.postgresql.org/message-id/CAA4eK1%2BLtWDKXvxS7gnJ562VX%2Bs3C6%2B0uQWamqu%3DUuD8hMfORg%40mail.gmail.com\n> > >\n> > > I see. IIUC, without that commit e0b2eed [1], it may happen that the\n> > > slot's on-disk confirmed_flush LSN value can be higher than the WAL\n> > > LSN that's flushed to disk, no?\n> > >\n> >\n> > No, without that commit, there is a very high possibility that even if\n> > we have sent the WAL to the subscriber and got the acknowledgment of\n> > the same, we would miss updating it before shutdown. This would lead\n> > to upgrade failures because upgrades have no way to later identify\n> > whether the remaining WAL records are sent to the subscriber.\n>\n> Thanks for clarifying. I'm trying understand what happens without\n> commit e0b2eed0 with an illustration:\n>\n> step 1: publisher - confirmed_flush LSN in replication slot on disk\n> structure is 80\n> step 2: publisher - sends WAL at LSN 100\n> step 3: subscriber - acknowledges the apply LSN or confirmed_flush LSN as 100\n> step 4: publisher - shuts down without writing the new confirmed_flush\n> LSN as 100 to disk, note that commit e0b2eed0 is not in place\n> step 5: publisher - restarts\n> step 6: subscriber - upon publisher restart, the subscriber requests\n> WAL from publisher from LSN 100 as it tracks the last applied LSN in\n> replication origin\n>\n> Now, if the pg_upgrade with the patch in this thread is run on\n> publisher after step 4, it complains with \"The slot \\\"%s\\\" has not\n> consumed the WAL yet\".\n>\n> Is my above understanding right?\n>\n\nYes.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Sep 2023 13:06:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 1:06 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Sep 28, 2023 at 10:44 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > > No, without that commit, there is a very high possibility that even if\n> > > we have sent the WAL to the subscriber and got the acknowledgment of\n> > > the same, we would miss updating it before shutdown. This would lead\n> > > to upgrade failures because upgrades have no way to later identify\n> > > whether the remaining WAL records are sent to the subscriber.\n> >\n> > Thanks for clarifying. I'm trying understand what happens without\n> > commit e0b2eed0 with an illustration:\n> >\n> > step 1: publisher - confirmed_flush LSN in replication slot on disk\n> > structure is 80\n> > step 2: publisher - sends WAL at LSN 100\n> > step 3: subscriber - acknowledges the apply LSN or confirmed_flush LSN as 100\n> > step 4: publisher - shuts down without writing the new confirmed_flush\n> > LSN as 100 to disk, note that commit e0b2eed0 is not in place\n> > step 5: publisher - restarts\n> > step 6: subscriber - upon publisher restart, the subscriber requests\n> > WAL from publisher from LSN 100 as it tracks the last applied LSN in\n> > replication origin\n> >\n> > Now, if the pg_upgrade with the patch in this thread is run on\n> > publisher after step 4, it complains with \"The slot \\\"%s\\\" has not\n> > consumed the WAL yet\".\n> >\n> > Is my above understanding right?\n> >\n>\n> Yes.\n\nThanks. Trying things with replication lag - when there's a lag, the\npg_upgrade can't proceed further and it complains \"The slot \"mysub\"\nhas not consumed the WAL yet\".\n\nI think the best way to upgrade a postgres instance with logical\nreplication slots is: 1) ensure no replication lag for the logical\nslots; 2) perform pg_upgrade --check first; 3) perform pg_upgrade if\nthere are no complaints.\n\nWith the above understanding, it looks to me that the commit e0b2eed0\nisn't necessary for back branches. Because, without it the pg_upgrade\ncomplains \"The slot \"mysub\" has not consumed the WAL yet\", and then\nthe user has to restart the instance to ensure the WAL is consumed\n(IOW, to get the correct confirmed_flush LSN to the disk).\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Sep 2023 13:23:58 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 9:40 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Sep 21, 2023 at 01:50:28PM +0530, Amit Kapila wrote:\n> > We have discussed this point. Normally, we don't have such options in\n> > upgrade, so we were hesitent to add a new one for this but there is a\n> > discussion to add an --exclude-logical-slots option. We are planning\n> > to add that as a separate patch after getting some more consensus on\n> > it. Right now, the idea is to get the main patch ready.\n>\n> Okay. I am wondering if the subscriber part is OK now without an\n> option, but that could also be considered separately, as well. At\n> least I hope so.\n\n+1 for an option to skip upgrade logical replication slots for the\nfollowing reasons:\n- one may not want the logical replication slots on the upgraded\ninstance immediately - unless the upgraded instance is tested and\ndetermined to be performant.\n- one may not want the logical replication slots on the upgraded\ninstance immediately - no logical replication setup is wanted on the\nnew instance perhaps because of an architectural/organizational\ndecision.\n- one may take backup of the postgres instance with logical\nreplication slots using any of the file system/snapshot based backup\nmechanisms (not pg_basebackup), essentially getting the on-disk\nreplication slots data as well; the pg_upgrade may fail on the\nbacked-up instance.\n\nI agree to have it as a 0002 patch once the design and things are\nfinalized for the main patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Sep 2023 14:22:21 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 1:24 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Sep 28, 2023 at 1:06 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Sep 28, 2023 at 10:44 AM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > > No, without that commit, there is a very high possibility that even if\n> > > > we have sent the WAL to the subscriber and got the acknowledgment of\n> > > > the same, we would miss updating it before shutdown. This would lead\n> > > > to upgrade failures because upgrades have no way to later identify\n> > > > whether the remaining WAL records are sent to the subscriber.\n> > >\n> > > Thanks for clarifying. I'm trying understand what happens without\n> > > commit e0b2eed0 with an illustration:\n> > >\n> > > step 1: publisher - confirmed_flush LSN in replication slot on disk\n> > > structure is 80\n> > > step 2: publisher - sends WAL at LSN 100\n> > > step 3: subscriber - acknowledges the apply LSN or confirmed_flush LSN as 100\n> > > step 4: publisher - shuts down without writing the new confirmed_flush\n> > > LSN as 100 to disk, note that commit e0b2eed0 is not in place\n> > > step 5: publisher - restarts\n> > > step 6: subscriber - upon publisher restart, the subscriber requests\n> > > WAL from publisher from LSN 100 as it tracks the last applied LSN in\n> > > replication origin\n> > >\n> > > Now, if the pg_upgrade with the patch in this thread is run on\n> > > publisher after step 4, it complains with \"The slot \\\"%s\\\" has not\n> > > consumed the WAL yet\".\n> > >\n> > > Is my above understanding right?\n> > >\n> >\n> > Yes.\n>\n> Thanks. Trying things with replication lag - when there's a lag, the\n> pg_upgrade can't proceed further and it complains \"The slot \"mysub\"\n> has not consumed the WAL yet\".\n>\n> I think the best way to upgrade a postgres instance with logical\n> replication slots is: 1) ensure no replication lag for the logical\n> slots; 2) perform pg_upgrade --check first; 3) perform pg_upgrade if\n> there are no complaints.\n>\n> With the above understanding, it looks to me that the commit e0b2eed0\n> isn't necessary for back branches. Because, without it the pg_upgrade\n> complains \"The slot \"mysub\" has not consumed the WAL yet\", and then\n> the user has to restart the instance to ensure the WAL is consumed\n> (IOW, to get the correct confirmed_flush LSN to the disk).\n>\n\nThe point is it will be difficult for users to ensure that all the WAL\nis consumed because it may have already been sent even after restart\nand shutdown but the check will still fail. I think the argument to\nsupport upgrade from branches where we don't have commit e0b2eed0 has\nsome merits and we can change the checks if there is broader agreement\non it. Let's try to agree on whether the core patch is good as is\nespecially what we want to achieve via validate_wal_records. Once we\nagree on the main patch and commit it, the other work including\nconsidering having an option to upgrade slots can be done as top-up\npatches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Sep 2023 14:27:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 2:22 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Sep 22, 2023 at 9:40 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Sep 21, 2023 at 01:50:28PM +0530, Amit Kapila wrote:\n> > > We have discussed this point. Normally, we don't have such options in\n> > > upgrade, so we were hesitent to add a new one for this but there is a\n> > > discussion to add an --exclude-logical-slots option. We are planning\n> > > to add that as a separate patch after getting some more consensus on\n> > > it. Right now, the idea is to get the main patch ready.\n> >\n> > Okay. I am wondering if the subscriber part is OK now without an\n> > option, but that could also be considered separately, as well. At\n> > least I hope so.\n>\n> +1 for an option to skip upgrade logical replication slots for the\n> following reasons:\n> - one may not want the logical replication slots on the upgraded\n> instance immediately - unless the upgraded instance is tested and\n> determined to be performant.\n> - one may not want the logical replication slots on the upgraded\n> instance immediately - no logical replication setup is wanted on the\n> new instance perhaps because of an architectural/organizational\n> decision.\n> - one may take backup of the postgres instance with logical\n> replication slots using any of the file system/snapshot based backup\n> mechanisms (not pg_basebackup), essentially getting the on-disk\n> replication slots data as well; the pg_upgrade may fail on the\n> backed-up instance.\n>\n> I agree to have it as a 0002 patch once the design and things are\n> finalized for the main patch.\n>\n\nThanks for understanding that it can be done as a 0002 patch because\nwe don't have an agreement on this. Jonathan feels exactly the\nopposite for having an option that by default doesn't migrate slots as\nusers always need to use the option and they may want to have slots\nmigrated by default. So, we may consider to have an --exclude-*\noption.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Sep 2023 14:32:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 4:31 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > 4.\n> > + /*\n> > + * There is a possibility that following records may be generated\n> > + * during the upgrade.\n> > + */\n> > + is_valid = is_xlog_record_type(rmid, info, RM_XLOG_ID,\n> > XLOG_CHECKPOINT_SHUTDOWN) ||\n> > + is_xlog_record_type(rmid, info, RM_XLOG_ID,\n> > XLOG_CHECKPOINT_ONLINE) ||\n> > + is_xlog_record_type(rmid, info, RM_XLOG_ID, XLOG_SWITCH) ||\n> > + is_xlog_record_type(rmid, info, RM_XLOG_ID,\n> > XLOG_FPI_FOR_HINT) ||\n> > + is_xlog_record_type(rmid, info, RM_XLOG_ID,\n> > XLOG_PARAMETER_CHANGE) ||\n> > + is_xlog_record_type(rmid, info, RM_STANDBY_ID,\n> > XLOG_RUNNING_XACTS) ||\n> > + is_xlog_record_type(rmid, info, RM_HEAP2_ID,\n> > XLOG_HEAP2_PRUNE);\n> >\n> > What if we missed to capture the WAL records that may be generated\n> > during upgrade?\n>\n> If such records are generated before calling binary_upgrade_validate_wal_logical_end(),\n> the upgrading would fail. Otherwise it would be succeeded. Anyway, we don't care\n> such records because those aren't required to be replicated. The main thing we\n> want to detect is that we don't miss any record generated before server shutdown.\n\nI read this https://www.postgresql.org/message-id/[email protected]\nand understand that the current patch implements the approach\nsuggested there - \"scan the end of the WAL for records that should\nhave been streamed out\". I think the WAL records that should have been\nstreamed out are all WAL record types in XXXX_decode functions except\nthe ones that have a no-op or an op unrelated to logical decoding. For\ninstance,\n- for xlog_decode, if the records of type {XLOG_CHECKPOINT_ONLINE,\nXLOG_PARAMETER_CHANGE, XLOG_NOOP, XLOG_NEXTOID, XLOG_SWITCH,\nXLOG_BACKUP_END, XLOG_RESTORE_POINT, XLOG_FPW_CHANGE,\nXLOG_FPI_FOR_HINT, XLOG_FPI, XLOG_OVERWRITE_CONTRECORD} are found\nafter confirmed_flush LSN, it is fine.\n- for xact_decode, if the records of type {XLOG_XACT_ASSIGNMENT} are\nfound after confirmed_flush LSN, it is fine.\n- for standby_decode, if the records of type {XLOG_STANDBY_LOCK,\nXLOG_INVALIDATIONS} are found after confirmed_flush LSN, it is fine.\n- for standby_decode, if the records of type {XLOG_STANDBY_LOCK,\nXLOG_INVALIDATIONS} are found after confirmed_flush LSN, it is fine.\n- for heap2_decode, if the records of type {XLOG_HEAP2_REWRITE,\nXLOG_HEAP2_FREEZE_PAGE, XLOG_HEAP2_PRUNE, XLOG_HEAP2_VACUUM,\nXLOG_HEAP2_VISIBLE, XLOG_HEAP2_LOCK_UPDATED} are found after\nconfirmed_flush LSN, it is fine.\n- for heap_decode, if the records of type {XLOG_HEAP_LOCK} are found\nafter confirmed_flush LSN, it is fine.\n\nI think all of the above WAL records are okay to be present after\ncofirmed_flush LSN. If any WAL records other than the above are found\nafter confirmed_flush LSN, those are the one that should have been\nstreamed out and the pg_upgrade must complain with \"The slot \"foo\" has\nnot consumed the WAL yet\" for all such slots, right? But, the function\nbinary_upgrade_validate_wal_logical_end checks for only a handful of\nthe above record types. I know that the list is arrived at based on\ntesting, but it may happen that any of the above WAL records may be\ngenerated and present before/during/after pg_upgrade for which\npg_upgrade failure isn't wanted.\n\nPerhaps, a function in logical/decode.c returning the WAL record as\nvalid if the record type is any of the above. 
A note in\nreplication/decode.h and/or access/rmgrlist.h asking rmgr adders to\ncategorize the WAL record type in the new function based on its\ndecoding operation might help with future new WAL record type\nadditions.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 28 Sep 2023 15:02:21 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
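A minimal sketch of the categorization helper floated in the message above, for illustration only: the function name is invented, and it assumes the caller has already masked the info byte with the rmgr's own opmask (e.g. XLOG_XACT_OPMASK, XLOG_HEAP_OPMASK); the record lists simply mirror the per-rmgr lists given in the message, not any posted patch.

static bool
record_is_ignorable_after_confirmed_flush(RmgrId rmid, uint8 info)
{
	switch (rmid)
	{
		case RM_XACT_ID:
			/* subtransaction assignment carries no change to stream out */
			return info == XLOG_XACT_ASSIGNMENT;
		case RM_STANDBY_ID:
			return info == XLOG_STANDBY_LOCK ||
				   info == XLOG_INVALIDATIONS;
		case RM_HEAP_ID:
			return info == XLOG_HEAP_LOCK;
		default:
			/* the xlog and heap2 rmgrs would get similar lists */
			return false;
	}
}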
{
"msg_contents": "On Thursday, September 28, 2023 5:32 PM Bharath Rupireddy <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> On Mon, Sep 25, 2023 at 4:31 PM Hayato Kuroda (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > > 4.\r\n> > > + /*\r\n> > > + * There is a possibility that following records may be generated\r\n> > > + * during the upgrade.\r\n> > > + */\r\n> > > + is_valid = is_xlog_record_type(rmid, info, RM_XLOG_ID,\r\n> > > XLOG_CHECKPOINT_SHUTDOWN) ||\r\n> > > + is_xlog_record_type(rmid, info, RM_XLOG_ID,\r\n> > > XLOG_CHECKPOINT_ONLINE) ||\r\n...\r\n> > >\r\n> > > What if we missed to capture the WAL records that may be generated\r\n> > > during upgrade?\r\n> >\r\n> > If such records are generated before calling\r\n> > binary_upgrade_validate_wal_logical_end(),\r\n> > the upgrading would fail. Otherwise it would be succeeded. Anyway, we\r\n> > don't care such records because those aren't required to be\r\n> > replicated. The main thing we want to detect is that we don't miss any record\r\n> generated before server shutdown.\r\n> \r\n> I read this\r\n> https://www.postgresql.org/message-id/20230725170319.h423jbthfohwgnf7@a\r\n> work3.anarazel.de\r\n> and understand that the current patch implements the approach suggested\r\n> there - \"scan the end of the WAL for records that should have been streamed\r\n> out\". I think the WAL records that should have been streamed out are all WAL\r\n> record types in XXXX_decode functions except the ones that have a no-op or an\r\n> op unrelated to logical decoding. For instance,\r\n> - for xlog_decode, if the records of type {XLOG_CHECKPOINT_ONLINE,\r\n> XLOG_PARAMETER_CHANGE, XLOG_NOOP, XLOG_NEXTOID, XLOG_SWITCH,\r\n> XLOG_BACKUP_END, XLOG_RESTORE_POINT, XLOG_FPW_CHANGE,\r\n> XLOG_FPI_FOR_HINT, XLOG_FPI, XLOG_OVERWRITE_CONTRECORD} are found\r\n> after confirmed_flush LSN, it is fine.\r\n> - for xact_decode, if the records of type {XLOG_XACT_ASSIGNMENT} are found\r\n> after confirmed_flush LSN, it is fine.\r\n> - for standby_decode, if the records of type {XLOG_STANDBY_LOCK,\r\n> XLOG_INVALIDATIONS} are found after confirmed_flush LSN, it is fine.\r\n> - for standby_decode, if the records of type {XLOG_STANDBY_LOCK,\r\n> XLOG_INVALIDATIONS} are found after confirmed_flush LSN, it is fine.\r\n> - for heap2_decode, if the records of type {XLOG_HEAP2_REWRITE,\r\n> XLOG_HEAP2_FREEZE_PAGE, XLOG_HEAP2_PRUNE, XLOG_HEAP2_VACUUM,\r\n> XLOG_HEAP2_VISIBLE, XLOG_HEAP2_LOCK_UPDATED} are found after\r\n> confirmed_flush LSN, it is fine.\r\n> - for heap_decode, if the records of type {XLOG_HEAP_LOCK} are found after\r\n> confirmed_flush LSN, it is fine.\r\n> \r\n> I think all of the above WAL records are okay to be present after cofirmed_flush\r\n> LSN. If any WAL records other than the above are found after confirmed_flush\r\n> LSN, those are the one that should have been streamed out and the pg_upgrade\r\n> must complain with \"The slot \"foo\" has not consumed the WAL yet\" for all such\r\n> slots, right? But, the function binary_upgrade_validate_wal_logical_end checks\r\n> for only a handful of the above record types. I know that the list is arrived at\r\n> based on testing, but it may happen that any of the above WAL records may be\r\n> generated and present before/during/after pg_upgrade for which pg_upgrade\r\n> failure isn't wanted.\r\n> \r\n> Perhaps, a function in logical/decode.c returning the WAL record as valid if the\r\n> record type is any of the above. 
A note in replication/decode.h and/or\r\n> access/rmgrlist.h asking rmgr adders to categorize the WAL record type in the\r\n> new function based on its decoding operation might help with future new WAL\r\n> record type additions.\r\n> \r\n> Thoughts?\r\n\r\nI think this approach can work, but I am not sure if it's better than other\r\napproaches. Mainly because it has almost the same maintaince burden as the\r\ncurrent approach, i.e. we need to verify and update the check function each\r\ntime we add a new WAL record type.\r\n\r\nApart from the WAL scan approach, we also considered alternative approach that\r\ndo not impose an additional maintenance burden and could potentially be less\r\ncomplex. For example, we can add a new field in pg_controldata to record the\r\nlast checkpoint that happens in non-upgrade mode, so that we can compare the\r\nslot's confirmed_flush_lsn with this value, If they are the same, the WAL\r\nshould have been consumed otherwise we disallow upgrading this slot. I would\r\nappreciate if you can share your thought about this approach.\r\n\r\nAnd if we decided to use WAL scan approach, instead of checking each record, we\r\ncould directly check if the WAL record can be decoded into meaningful results\r\nby use test_decoding to decode them. This approach also doesn't add new\r\nmaintenance burden as we anyway need to update the test_decoding if any decode\r\nlogic for new record changes. This was also mentioned [1].\r\n\r\nWhat do you think ?\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB5716FC0F814D78E82E4CC3B894C3A%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Thu, 28 Sep 2023 12:38:20 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 6:08 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Thursday, September 28, 2023 5:32 PM Bharath Rupireddy <[email protected]> wrote:\n>\n> > Perhaps, a function in logical/decode.c returning the WAL record as valid if the\n> > record type is any of the above. A note in replication/decode.h and/or\n> > access/rmgrlist.h asking rmgr adders to categorize the WAL record type in the\n> > new function based on its decoding operation might help with future new WAL\n> > record type additions.\n> >\n> > Thoughts?\n>\n> I think this approach can work, but I am not sure if it's better than other\n> approaches. Mainly because it has almost the same maintaince burden as the\n> current approach, i.e. we need to verify and update the check function each\n> time we add a new WAL record type.\n\nI think that's not a big problem if we have comments in\nreplication/decode.h, access/rmgrlist.h, docs to categorize the new\nWAL records as decodable. Currently, the WAL record types adders will\nhave to do certain things based on notes in comments or docs anyways.\n\nAnother idea to enforce categorizing decodability of WAL records is to\nhave a new RMGR API rm_is_record_decodable or such, the RMGR\nimplementers will then add respective functions returning true/false\nif a given WAL record is decodable or not:\n void (*rm_decode) (struct LogicalDecodingContext *ctx,\n struct XLogRecordBuffer *buf);\n bool (*rm_is_record_decodable) (uint8 type);\n} RmgrData;\n\nPG_RMGR(RM_XLOG_ID, \"XLOG\", xlog_redo, xlog_desc, xlog_identify, NULL,\nNULL, NULL, xlog_is_record_decodable), then the\nxlog_is_record_decodable can look something like [1].\n\nThis approach can also enforce/help custom RMGR implementers to define\nthe decodability of the WAL records.\n\n> Apart from the WAL scan approach, we also considered alternative approach that\n> do not impose an additional maintenance burden and could potentially be less\n> complex. For example, we can add a new field in pg_controldata to record the\n> last checkpoint that happens in non-upgrade mode, so that we can compare the\n> slot's confirmed_flush_lsn with this value, If they are the same, the WAL\n> should have been consumed otherwise we disallow upgrading this slot. I would\n> appreciate if you can share your thought about this approach.\n\nI read this https://www.postgresql.org/message-id/CAA4eK1JVKZGRHLOEotWi%2Be%2B09jucNedqpkkc-Do4dh5FTAU%2B5w%40mail.gmail.com\nand I agree with the concern on adding a new filed in pg_controldata\njust for this purpose and spreading the IsBinaryUpgrade code in\ncheckpointer. Another concern for me with a new filed in\npg_controldata approach is that it makes it hard to make this patch\nsupport back branches. Therefore, -1 for this approach from me.\n\n> And if we decided to use WAL scan approach, instead of checking each record, we\n> could directly check if the WAL record can be decoded into meaningful results\n> by use test_decoding to decode them. This approach also doesn't add new\n> maintenance burden as we anyway need to update the test_decoding if any decode\n> logic for new record changes. This was also mentioned [1].\n>\n> What do you think ?\n>\n> [1] https://www.postgresql.org/message-id/OS0PR01MB5716FC0F814D78E82E4CC3B894C3A%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-1 for decoding the WAL with test_decoding, I don't think it's a great\nidea to create temp slots and launch walsenders during upgrade.\n\nIMO, WAL scanning approach looks better. 
However, if were to optimize\nit by not scanning WAL records for every replication slot\nconfirmed_flush_lsn (CFL), start with lowest CFL (min of all slots\nCFL), and scan till the end of WAL. The\nbinary_upgrade_validate_wal_logical_end function can return an array\nof LSNs at which decodable WAL records are found. Then, use CFLs of\nall other slots and this array to determine if the slots have\nunconsumed WAL. Following is an illustration of this idea:\n\n1. Slots s1, s2, s3, s4, s5 with CFLs 100, 90, 110, 70, 80 respectively.\n2. Min of all CFLs is 70 for slot s4.\n3. Start scanning WAL from min CFL 70 for slot s4, say there are\nunconsumed WAL at LSN {85, 89}.\n4. Now, without scanning WAL for rest of the slots, determine if they\nhave unconsumed WAL.\n5.1. CFL of slot s1 is 100 and no unconsumed WAL at or after LSN 100 -\nlook at the array of unconsumed WAL LSNs {85, 89}.\n5.2. CFL of slot s2 is 90 and no unconsumed WAL at or after LSN 90 -\nlook at the array of unconsumed WAL LSNs {85, 89}.\n5.3. CFL of slot s3 is 110 and no unconsumed WAL at or after LSN 110 -\nlook at the array of unconsumed WAL LSNs {85, 89}.\n5.4. CFL of slot s4 is 70 and there's unconsumed WAL at or after LSN\n70 - look at the array of unconsumed WAL LSNs {85, 89}.\n5.5. CFL of slot s5 is 80 and there's unconsumed WAL at or after LSN\n80 - look at the array of unconsumed WAL LSNs {85, 89}.\n\nWith this approach, the WAL is scanned only once as opposed to the\ncurrent approach the patch implements.\n\nThoughts?\n\n[1]\nbool\nxlog_is_record_decodable(uint8 type)\n{\n switch (info)\n {\n case XLOG_CHECKPOINT_SHUTDOWN:\n case XLOG_END_OF_RECOVERY:\n return true;\n case XLOG_CHECKPOINT_ONLINE:\n case XLOG_PARAMETER_CHANGE:\n case XLOG_NOOP:\n case XLOG_NEXTOID:\n case XLOG_SWITCH:\n case XLOG_BACKUP_END:\n case XLOG_RESTORE_POINT:\n case XLOG_FPW_CHANGE:\n case XLOG_FPI_FOR_HINT:\n case XLOG_FPI:\n case XLOG_OVERWRITE_CONTRECORD:\n return false;\n default:\n elog(ERROR, \"unexpected RM_XLOG_ID record type: %u\", info);\n }\n}\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 29 Sep 2023 13:00:04 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
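To make the single-scan idea above concrete, a rough sketch follows; it is not code from the patch, and the slot/array names are invented. One WAL scan starts at the minimum confirmed_flush LSN across all logical slots and collects the start LSNs of records that would have been sent downstream; every slot is then judged against that list.

/*
 * decodable_lsns[] / ndecodable are assumed to come from a single WAL scan
 * beginning at the smallest confirmed_flush LSN of all logical slots.
 */
for (int i = 0; i < nslots; i++)
{
	bool	has_unconsumed_wal = false;

	for (int j = 0; j < ndecodable; j++)
	{
		/* any decodable record at or after this slot's confirmed_flush? */
		if (decodable_lsns[j] >= slots[i].confirmed_flush)
		{
			has_unconsumed_wal = true;
			break;
		}
	}

	if (has_unconsumed_wal)
		pg_log(PG_WARNING, "slot \"%s\" has not consumed the WAL yet",
			   slots[i].slot_name);
}

With the numbers in the illustration above (decodable records at LSNs 85 and 89), the slots with confirmed_flush 90, 100 and 110 pass, while the slots at 70 and 80 are reported.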
{
"msg_contents": "On Fri, Sep 29, 2023 at 1:00 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Sep 28, 2023 at 6:08 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n>\n> IMO, WAL scanning approach looks better. However, if were to optimize\n> it by not scanning WAL records for every replication slot\n> confirmed_flush_lsn (CFL), start with lowest CFL (min of all slots\n> CFL), and scan till the end of WAL.\n>\n\nEarlier, I also thought something like that but I guess it won't\nmatter much as most of the slots will be up-to-date at shutdown time.\nThat would mean we would read just one or two records. Personally, I\nfeel it is better to build consensus on the WAL scanning approach,\nbasically, is it okay to decide as the patch is currently doing or\nwhether we should expose an API from the decode module as you are\nproposing? OTOH, if we want to go with other approach like adding\nfield in pg_controldata then we don't need to deal with WAL record\ntypes at all.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Sep 2023 16:29:19 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nThanks for giving your idea!\r\n\r\n> > I think this approach can work, but I am not sure if it's better than other\r\n> > approaches. Mainly because it has almost the same maintaince burden as the\r\n> > current approach, i.e. we need to verify and update the check function each\r\n> > time we add a new WAL record type.\r\n> \r\n> I think that's not a big problem if we have comments in\r\n> replication/decode.h, access/rmgrlist.h, docs to categorize the new\r\n> WAL records as decodable. Currently, the WAL record types adders will\r\n> have to do certain things based on notes in comments or docs anyways.\r\n> \r\n> Another idea to enforce categorizing decodability of WAL records is to\r\n> have a new RMGR API rm_is_record_decodable or such, the RMGR\r\n> implementers will then add respective functions returning true/false\r\n> if a given WAL record is decodable or not:\r\n> void (*rm_decode) (struct LogicalDecodingContext *ctx,\r\n> struct XLogRecordBuffer *buf);\r\n> bool (*rm_is_record_decodable) (uint8 type);\r\n> } RmgrData;\r\n> \r\n> PG_RMGR(RM_XLOG_ID, \"XLOG\", xlog_redo, xlog_desc, xlog_identify, NULL,\r\n> NULL, NULL, xlog_is_record_decodable), then the\r\n> xlog_is_record_decodable can look something like [1].\r\n> \r\n> This approach can also enforce/help custom RMGR implementers to define\r\n> the decodability of the WAL records.\r\n\r\nYeah, the approach enforces developers to check the decodability.\r\nBut the benefit seems smaller than required efforts for it because the function\r\nwould be used only by pg_upgrade. Could you tell me if you have another use case\r\nin mind? We may able to adopt if we have...\r\nAlso, this approach cannot be backported.\r\n\r\nAnyway, let's see how senior members say.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 29 Sep 2023 11:57:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Sep 29, 2023 at 5:27 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Yeah, the approach enforces developers to check the decodability.\n> But the benefit seems smaller than required efforts for it because the function\n> would be used only by pg_upgrade. Could you tell me if you have another use case\n> in mind? We may able to adopt if we have...\n\nI'm attaching 0002 patch (on top of v45) which implements the new\ndecodable callback approach that I have in mind. IMO, this new\napproach is extensible, better than the current approach (hard-coding\nof certain WAL records that may be generated during pg_upgrade) taken\nby the patch, and helps deal with the issue that custom WAL resource\nmanagers can have with the current approach taken by the patch.\n\n> Also, this approach cannot be backported.\n\nNeither the current patch as-is. I'm not looking at backporting this\nfeature right now, but making it as robust and extensible as possible\nfor PG17.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 3 Oct 2023 09:58:44 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Oct 3, 2023 at 9:58 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Sep 29, 2023 at 5:27 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Yeah, the approach enforces developers to check the decodability.\n> > But the benefit seems smaller than required efforts for it because the function\n> > would be used only by pg_upgrade. Could you tell me if you have another use case\n> > in mind? We may able to adopt if we have...\n>\n> I'm attaching 0002 patch (on top of v45) which implements the new\n> decodable callback approach that I have in mind. IMO, this new\n> approach is extensible, better than the current approach (hard-coding\n> of certain WAL records that may be generated during pg_upgrade) taken\n> by the patch, and helps deal with the issue that custom WAL resource\n> managers can have with the current approach taken by the patch.\n\nI did not see the patch, but I like this approach better. I mean this\napproach does not check what record types are generated during updagre\ninstead this directly targets that after the confirmed_flush_lsn what\ntype of records shouldn't be generated. So if rmgr says that after\ncommit_flush_lsn no decodable record was generated then we are safe to\nupgrade that slot. So this seems an expandable approach.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Oct 2023 10:12:36 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\n> I'm attaching 0002 patch (on top of v45) which implements the new\r\n> decodable callback approach that I have in mind. IMO, this new\r\n> approach is extensible, better than the current approach (hard-coding\r\n> of certain WAL records that may be generated during pg_upgrade) taken\r\n> by the patch, and helps deal with the issue that custom WAL resource\r\n> managers can have with the current approach taken by the patch.\r\n\r\nThanks for sharing your PoC! I tested yours and worked well. I have also made\r\nthe decoding approach locally, but your approach is conceptually faster. I think\r\nit still checks the type one by one so not sure the acceptable, but at least\r\ncheckings are centerized. We must hear opinions from others. How do other think?\r\n \r\nComments for your patch. I attached the txt file, please include if it is OK.\r\n\r\n1.\r\nAccording to your post, we must have comments to notify developers that\r\nis_decodable API must be implemented. Please share it too if you have idea.\r\n\r\n \r\n2.\r\nThe existence of is_decodable should be checked in RegisterCustomRmgr().\r\n\r\n3.\r\nAnther rmgr API (rm_identify) requries uint8 without doing a bit operation:\r\nthey do \"info & ~XLR_INFO_MASK\" in the callbacks. Should we follow that?\r\n\r\n4.\r\nIt is helpful for developers to add a function to test_custom_rmgrs module.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 3 Oct 2023 09:40:26 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Oct 3, 2023 at 9:58 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Sep 29, 2023 at 5:27 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Yeah, the approach enforces developers to check the decodability.\n> > But the benefit seems smaller than required efforts for it because the function\n> > would be used only by pg_upgrade. Could you tell me if you have another use case\n> > in mind? We may able to adopt if we have...\n>\n> I'm attaching 0002 patch (on top of v45) which implements the new\n> decodable callback approach that I have in mind. IMO, this new\n> approach is extensible, better than the current approach (hard-coding\n> of certain WAL records that may be generated during pg_upgrade) taken\n> by the patch, and helps deal with the issue that custom WAL resource\n> managers can have with the current approach taken by the patch.\n>\n\n+xlog_is_record_decodable(uint8 info)\n+{\n+ switch (info)\n+ {\n+ case XLOG_CHECKPOINT_SHUTDOWN:\n+ case XLOG_END_OF_RECOVERY:\n+ return true;\n+ case XLOG_CHECKPOINT_ONLINE:\n+ case XLOG_PARAMETER_CHANGE:\n...\n+ return false;\n}\n\nI think this won't behave correctly. Without your patch, we consider\nboth XLOG_CHECKPOINT_SHUTDOWN and XLOG_CHECKPOINT_ONLINE as valid\nrecords but after patch only one of these will be considered valid\nwhich won't lead to desired behavior.\n\nBTW, the API proposed in your patch returns the WAL record type as\nvalid if there is something we do for it during decoding but the check\nin upgrade function expects the reverse value. For example, for WAL\nrecord type XLOG_HEAP_INSERT, the API returns true and that is\nindication to the caller that this is an expected record after\nconfirmed_flush LSN location which doesn't seem correct. Am I missing\nsomething?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Oct 2023 15:39:35 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nWhile checking more, I found some problems your PoC.\r\n\r\n1. rm_is_record_decodable() returns true when WAL records are decodable.\r\n Based on that, should is_valid be false when the function is true?\r\n E.g., XLOG_HEAP_INSERT is accepted in the PoC.\r\n2. XLOG_CHECKPOINT_SHUTDOWN and XLOG_RUNNING_XACTS should return false because\r\n these records may be generated during the upgrade but they are acceptable.\r\n3. A bit operations are done for extracting a WAL type, but the mask is\r\n different based on the rmgr. E.g., XLOG uses XLR_INFO_MASK, but XACT uses\r\n XLOG_XACT_OPMASK.\r\n4. There is a possibility that \"XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE\" is inserted,\r\n but it is not handled.\r\n\r\nRegarding the 2., maybe we should say \"if the reorderbuffer is modified while decoding,\r\nrm_is_record_decodable must return false\" or something. If so, the return value\r\nof XLOG_END_OF_RECOVERY and XLOG_HEAP2_NEW_CID should be also changed.\r\n\r\nI attached the fix patch for above. How do you think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 4 Oct 2023 01:00:34 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Oct 3, 2023 at 9:58 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Sep 29, 2023 at 5:27 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Yeah, the approach enforces developers to check the decodability.\n> > But the benefit seems smaller than required efforts for it because the function\n> > would be used only by pg_upgrade. Could you tell me if you have another use case\n> > in mind? We may able to adopt if we have...\n>\n> I'm attaching 0002 patch (on top of v45) which implements the new\n> decodable callback approach that I have in mind. IMO, this new\n> approach is extensible, better than the current approach (hard-coding\n> of certain WAL records that may be generated during pg_upgrade) taken\n> by the patch, and helps deal with the issue that custom WAL resource\n> managers can have with the current approach taken by the patch.\n>\n\nToday, I discussed this problem with Andres at PGConf NYC and he\nsuggested as following. To verify, if there is any pending unexpected\nWAL after shutdown, we can have an API like\npg_logical_replication_slot_advance() which will simply process\nrecords without actually sending anything downstream. In this new API,\nwe will start with each slot's restart_lsn location and try to process\ntill the end of WAL, if we encounter any WAL that needs to be\nprocessed (like we need to send the decoded WAL downstream) we can\nreturn a false indicating that there is an unexpected WAL. The reason\nto start with restart_lsn is that it is the location that we use to\nstart scanning the WAL anyway.\n\nThen, we should also try to create slots before invoking pg_resetwal.\nThe idea is that we can write a new binary mode function that will do\nexactly what pg_resetwal does to compute the next segment and use that\nlocation as a new location (restart_lsn) to create the slots in a new\nnode. Then, pass it pg_resetwal by using the existing option '-l\nwalfile'. As we don't have any API that takes restart_lsn as input, we\ncan write a new API probably for binary mode to create slots that do\ntake restart_lsn as input. This will ensure that there is no new WAL\ninserted by background processes between resetwal and the creation of\nslots.\n\nThe other potential problem Andres pointed out is that during shutdown\nif due to some reason, the walreceiver goes down, we won't be able to\nsend the required WAL and users won't be able to ensure that because\neven after restart the same situation can happen. The ideal way is to\nhave something that puts the system in READ ONLY state during shutdown\nand then we can probably allow walreceivers to reconnect and receive\nthe required WALs. As we don't have such functionality available and\nit won't be easy to achieve the same, we can leave this for now.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 Oct 2023 01:48:02 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
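The second suggestion above (create the slots before invoking pg_resetwal, at the WAL position pg_resetwal is going to pick) could fit together roughly as sketched below. This is only an assumed shape: compute_next_segno_like_pg_resetwal() stands in for whatever binary-mode function ends up mimicking pg_resetwal's FindEndOfXLOG computation, and a slot-creation API taking restart_lsn as input does not exist yet.

XLogSegNo	next_segno;
XLogRecPtr	new_restart_lsn;

/* the segment pg_resetwal would choose as the first segment after reset */
next_segno = compute_next_segno_like_pg_resetwal();	/* hypothetical */

/* WAL would restart just past the long page header of that segment */
XLogSegNoOffsetToRecPtr(next_segno, SizeOfXLogLongPHD, wal_segment_size,
						new_restart_lsn);

/*
 * Create each logical slot on the new cluster with new_restart_lsn as its
 * restart_lsn, then pass the WAL file name corresponding to next_segno to
 * pg_resetwal -l, so that no WAL can be inserted between the reset and the
 * creation of the slots.
 */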
{
"msg_contents": "On Thu, Oct 5, 2023 at 1:48 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Oct 3, 2023 at 9:58 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Fri, Sep 29, 2023 at 5:27 PM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > Yeah, the approach enforces developers to check the decodability.\n> > > But the benefit seems smaller than required efforts for it because the function\n> > > would be used only by pg_upgrade. Could you tell me if you have another use case\n> > > in mind? We may able to adopt if we have...\n> >\n> > I'm attaching 0002 patch (on top of v45) which implements the new\n> > decodable callback approach that I have in mind. IMO, this new\n> > approach is extensible, better than the current approach (hard-coding\n> > of certain WAL records that may be generated during pg_upgrade) taken\n> > by the patch, and helps deal with the issue that custom WAL resource\n> > managers can have with the current approach taken by the patch.\n> >\n>\n> Today, I discussed this problem with Andres at PGConf NYC and he\n> suggested as following. To verify, if there is any pending unexpected\n> WAL after shutdown, we can have an API like\n> pg_logical_replication_slot_advance() which will simply process\n> records without actually sending anything downstream.\n\nSo I assume in each lower-level decode function (e.g. heap_decode() )\nwe will add the check that if we are checking the WAL for an upgrade\nthen from that level we will return true or false based on whether the\nWAL is decodable or not. Is my understanding correct? At first\nthought this approach look better and generic.\n\n In this new API,\n> we will start with each slot's restart_lsn location and try to process\n> till the end of WAL, if we encounter any WAL that needs to be\n> processed (like we need to send the decoded WAL downstream) we can\n> return a false indicating that there is an unexpected WAL. The reason\n> to start with restart_lsn is that it is the location that we use to\n> start scanning the WAL anyway.\n\nYeah, that makes sense.\n\n> Then, we should also try to create slots before invoking pg_resetwal.\n> The idea is that we can write a new binary mode function that will do\n> exactly what pg_resetwal does to compute the next segment and use that\n> location as a new location (restart_lsn) to create the slots in a new\n> node. Then, pass it pg_resetwal by using the existing option '-l\n> walfile'. As we don't have any API that takes restart_lsn as input, we\n> can write a new API probably for binary mode to create slots that do\n> take restart_lsn as input. This will ensure that there is no new WAL\n> inserted by background processes between resetwal and the creation of\n> slots.\n\nYeah, that looks cleaner IMHO.\n\n> The other potential problem Andres pointed out is that during shutdown\n> if due to some reason, the walreceiver goes down, we won't be able to\n> send the required WAL and users won't be able to ensure that because\n> even after restart the same situation can happen. The ideal way is to\n> have something that puts the system in READ ONLY state during shutdown\n> and then we can probably allow walreceivers to reconnect and receive\n> the required WALs. As we don't have such functionality available and\n> it won't be easy to achieve the same, we can leave this for now.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Oct 2023 14:28:53 +0530",
"msg_from": "Dilip Kumar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit, Andres,\r\n\r\nThank you for giving the decision! Basically I will follow your idea and make\r\na patch accordingly.\r\n\r\n> Today, I discussed this problem with Andres at PGConf NYC and he\r\n> suggested as following. To verify, if there is any pending unexpected\r\n> WAL after shutdown, we can have an API like\r\n> pg_logical_replication_slot_advance() which will simply process\r\n> records without actually sending anything downstream. In this new API,\r\n> we will start with each slot's restart_lsn location and try to process\r\n> till the end of WAL, if we encounter any WAL that needs to be\r\n> processed (like we need to send the decoded WAL downstream) we can\r\n> return a false indicating that there is an unexpected WAL. The reason\r\n> to start with restart_lsn is that it is the location that we use to\r\n> start scanning the WAL anyway.\r\n\r\nI felt the approach seems similar to Hou-san's suggestion[1], but we can avoid to\r\nuse test_decoding. I'm planning to do that the upgrading function decodes WALs\r\nand check whether there are reorderbuffer changes.\r\n\r\n> Then, we should also try to create slots before invoking pg_resetwal.\r\n> The idea is that we can write a new binary mode function that will do\r\n> exactly what pg_resetwal does to compute the next segment and use that\r\n> location as a new location (restart_lsn) to create the slots in a new\r\n> node. Then, pass it pg_resetwal by using the existing option '-l\r\n> walfile'. As we don't have any API that takes restart_lsn as input, we\r\n> can write a new API probably for binary mode to create slots that do\r\n> take restart_lsn as input. This will ensure that there is no new WAL\r\n> inserted by background processes between resetwal and the creation of\r\n> slots.\r\n\r\nIt seems better because we can create every objects before pg_resetwal.\r\n\r\nI will handle above two points and let's see how it work.\r\n\r\n[1]: https://www.postgresql.org/message-id/OS0PR01MB5716506A1A1B20EFBFA7B52994C1A%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n",
"msg_date": "Thu, 5 Oct 2023 10:06:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 2:29 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Oct 5, 2023 at 1:48 AM Amit Kapila <[email protected]>\nwrote:\n> >\n> > On Tue, Oct 3, 2023 at 9:58 AM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Sep 29, 2023 at 5:27 PM Hayato Kuroda (Fujitsu)\n> > > <[email protected]> wrote:\n> > > >\n> > > > Yeah, the approach enforces developers to check the decodability.\n> > > > But the benefit seems smaller than required efforts for it because\nthe function\n> > > > would be used only by pg_upgrade. Could you tell me if you have\nanother use case\n> > > > in mind? We may able to adopt if we have...\n> > >\n> > > I'm attaching 0002 patch (on top of v45) which implements the new\n> > > decodable callback approach that I have in mind. IMO, this new\n> > > approach is extensible, better than the current approach (hard-coding\n> > > of certain WAL records that may be generated during pg_upgrade) taken\n> > > by the patch, and helps deal with the issue that custom WAL resource\n> > > managers can have with the current approach taken by the patch.\n> > >\n> >\n> > Today, I discussed this problem with Andres at PGConf NYC and he\n> > suggested as following. To verify, if there is any pending unexpected\n> > WAL after shutdown, we can have an API like\n> > pg_logical_replication_slot_advance() which will simply process\n> > records without actually sending anything downstream.\n>\n> So I assume in each lower-level decode function (e.g. heap_decode() )\n> we will add the check that if we are checking the WAL for an upgrade\n> then from that level we will return true or false based on whether the\n> WAL is decodable or not. Is my understanding correct?\n>\n\nYes, this is one way to achive but I think this will require changing\nreturn value of many APIs. Can we somehow just get this via\nLogicalDecodingContext or some other way at the caller by allowing to set\nsome variable at required places?\n\nOn Thu, Oct 5, 2023 at 2:29 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Thu, Oct 5, 2023 at 1:48 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Oct 3, 2023 at 9:58 AM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Sep 29, 2023 at 5:27 PM Hayato Kuroda (Fujitsu)\n> > > <[email protected]> wrote:\n> > > >\n> > > > Yeah, the approach enforces developers to check the decodability.\n> > > > But the benefit seems smaller than required efforts for it because the function\n> > > > would be used only by pg_upgrade. Could you tell me if you have another use case\n> > > > in mind? We may able to adopt if we have...\n> > >\n> > > I'm attaching 0002 patch (on top of v45) which implements the new\n> > > decodable callback approach that I have in mind. IMO, this new\n> > > approach is extensible, better than the current approach (hard-coding\n> > > of certain WAL records that may be generated during pg_upgrade) taken\n> > > by the patch, and helps deal with the issue that custom WAL resource\n> > > managers can have with the current approach taken by the patch.\n> > >\n> >\n> > Today, I discussed this problem with Andres at PGConf NYC and he\n> > suggested as following. To verify, if there is any pending unexpected\n> > WAL after shutdown, we can have an API like\n> > pg_logical_replication_slot_advance() which will simply process\n> > records without actually sending anything downstream.\n>\n> So I assume in each lower-level decode function (e.g. 
heap_decode() )\n> we will add the check that if we are checking the WAL for an upgrade\n> then from that level we will return true or false based on whether the\n> WAL is decodable or not. Is my understanding correct?\n>\n\nYes, this is one way to achive but I think this will require changing return value of many APIs. Can we somehow just get this via LogicalDecodingContext or some other way at the caller by allowing to set some variable at required places?",
"msg_date": "Thu, 5 Oct 2023 06:54:20 -0400",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 4:24 PM Amit Kapila <[email protected]> wrote:\n>\n> > > Today, I discussed this problem with Andres at PGConf NYC and he\n> > > suggested as following. To verify, if there is any pending unexpected\n> > > WAL after shutdown, we can have an API like\n> > > pg_logical_replication_slot_advance() which will simply process\n> > > records without actually sending anything downstream.\n\n+1 for this approach. It looks neat.\n\nI think we also need to add TAP tests to generate decodable WAL\nrecords (RUNNING_XACT, CHECKPOINT_ONLINE, XLOG_FPI_FOR_HINT,\nXLOG_SWITCH, XLOG_PARAMETER_CHANGE, XLOG_HEAP2_PRUNE) during\npg_upgrade as described here\nhttps://www.postgresql.org/message-id/TYAPR01MB58660273EACEFC5BF256B133F50DA%40TYAPR01MB5866.jpnprd01.prod.outlook.com.\nBasically, these were the exceptional WAL records that may be\ngenerated by pg_upgrade, so having tests for them is good.\n\n> > So I assume in each lower-level decode function (e.g. heap_decode() )\n> > we will add the check that if we are checking the WAL for an upgrade\n> > then from that level we will return true or false based on whether the\n> > WAL is decodable or not. Is my understanding correct?\n> >\n>\n> Yes, this is one way to achive but I think this will require changing return value of many APIs. Can we somehow just get this via LogicalDecodingContext or some other way at the caller by allowing to set some variable at required places?\n\n+1 for adding the required flags to the decoding context similar to\nfast_forward.\n\nAnother way without adding any new variables is to pass the WAL record\nto LogicalDecodingProcessRecord, and upon return check the reorder\nbuffer if there's any decoded change generated for the xid associated\nwith the WAL record. If any decoded change related to the WAL record\nxid is found, then that's the end for the new function. Here's what I\nthink [1], haven't tested it.\n\n[1]\nchange_found = false;\nend_of_wal = false;\nctx = CreateDecodingContext();\n\nXLogBeginRead(ctx->reader, MyReplicationSlot->data.restart_lsn);\n\nwhile(!end_of_wal || !change_found)\n{\n XLogRecord *record;\n TransactionId xid;\n ReorderBufferTXN *txn;\n\n record = XLogReadRecord(ctx->reader, &errm);\n\n if (record)\n LogicalDecodingProcessRecord(ctx, ctx->reader);\n\n xid = XLogRecGetXid(record);\n\n txn = ReorderBufferTXNByXid(ctx->reorder, xid, false, NULL,\nInvalidXLogRecPtr,\n false);\n\n if (txn != NULL)\n {\n change_found = true;\n break;\n }\n\n CHECK_FOR_INTERRUPTS();\n}\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Oct 2023 17:26:17 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 1:48 AM Amit Kapila <[email protected]> wrote:\n>\n> Then, we should also try to create slots before invoking pg_resetwal.\n> The idea is that we can write a new binary mode function that will do\n> exactly what pg_resetwal does to compute the next segment and use that\n> location as a new location (restart_lsn) to create the slots in a new\n> node. Then, pass it pg_resetwal by using the existing option '-l\n> walfile'. As we don't have any API that takes restart_lsn as input, we\n> can write a new API probably for binary mode to create slots that do\n> take restart_lsn as input. This will ensure that there is no new WAL\n> inserted by background processes between resetwal and the creation of\n> slots.\n\n+1. I think this approach makes it foolproof. pg_resetwal uses\nFindEndOfXLOG and we need that to be in a binary mode SQL callable\nfunction. FindEndOfXLOG ignores TLI to compute the new WAL file name,\nbut that seems to be okay for the new binary mode function because\npg_upgrade uses TLI 1 anyways and doesn't copy WAL files from old\ncluster.\n\nFWIW, pg_upgrades does use -l in copy_xact_xlog_xid, I'm not sure if\nit has anything to do with the above proposed change.\n\n> The other potential problem Andres pointed out is that during shutdown\n> if due to some reason, the walreceiver goes down, we won't be able to\n> send the required WAL and users won't be able to ensure that because\n> even after restart the same situation can happen. The ideal way is to\n> have something that puts the system in READ ONLY state during shutdown\n> and then we can probably allow walreceivers to reconnect and receive\n> the required WALs. As we don't have such functionality available and\n> it won't be easy to achieve the same, we can leave this for now.\n>\n> Thoughts?\n\nYou mean walreceiver for streaming replication? Or the apply workers\ngoing down for logical replication? If there's yet-to-be-sent-out WAL,\npg_upgrade will fail no? How does the above scenario a problem for\npg_upgrade of a cluster with just logical replication slots?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Oct 2023 18:43:30 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\nBased on comments, I revised my patch. PSA the file.\r\n\r\n> \r\n> > Today, I discussed this problem with Andres at PGConf NYC and he\r\n> > suggested as following. To verify, if there is any pending unexpected\r\n> > WAL after shutdown, we can have an API like\r\n> > pg_logical_replication_slot_advance() which will simply process\r\n> > records without actually sending anything downstream. In this new API,\r\n> > we will start with each slot's restart_lsn location and try to process\r\n> > till the end of WAL, if we encounter any WAL that needs to be\r\n> > processed (like we need to send the decoded WAL downstream) we can\r\n> > return a false indicating that there is an unexpected WAL. The reason\r\n> > to start with restart_lsn is that it is the location that we use to\r\n> > start scanning the WAL anyway.\r\n\r\nI implemented this by using decoding context. The binary upgrade function\r\nprocesses WALs from the confirmed_flush, and returns false if some meaningful\r\nchanges are found.\r\n\r\nInternally, I added a new decoding mode - DECODING_MODE_SILENT - and used it.\r\nIf the decoding context is in the mode, the output plugin is not loaded, but\r\nany WALs are decoded without skipping. Also, a new flag \"did_process\" is also\r\nadded. This flag is set if wrappers for output plugin callbacks are called during\r\nthe silent mode. The upgrading function checks both reorder buffer and the new\r\nflag because both (non-)transactional changes should be detected. If we only\r\ncheck reorder buffer, we miss the non-transactional one.\r\n\r\nfast_forward was changed as a variant of decoding mode.\r\n\r\nCurrently the function is called for all the valid slot. If the approach seems\r\ngood, we can refactor like Bharath said [1].\r\n\r\n> \r\n> > Then, we should also try to create slots before invoking pg_resetwal.\r\n> > The idea is that we can write a new binary mode function that will do\r\n> > exactly what pg_resetwal does to compute the next segment and use that\r\n> > location as a new location (restart_lsn) to create the slots in a new\r\n> > node. Then, pass it pg_resetwal by using the existing option '-l\r\n> > walfile'. As we don't have any API that takes restart_lsn as input, we\r\n> > can write a new API probably for binary mode to create slots that do\r\n> > take restart_lsn as input. This will ensure that there is no new WAL\r\n> > inserted by background processes between resetwal and the creation of\r\n> > slots.\r\n\r\nBased on that, I added another binary function binary_upgrade_create_logical_replication_slot().\r\nThis function is similar to pg_create_logical_replication_slot(), but the\r\nrestart_lsn and confirmed_flush are set to *next* WAL segment. The pointed\r\nfilename is returned and it is passed to pg_resetwal command.\r\n\r\nOne consideration is that pg_log_standby_snapshot() must be executed before\r\nslots consuming changes. New cluster does not have RUNNING_XACTS records so that\r\ndecoding context on new cluster cannot be create a consistent snapshot as-is.\r\nThis may lead to discard changes during the upcoming consuming event. To\r\nprevent it the function is called after the final pg_resetwal.\r\n\r\nHow do you think?\r\n\r\nAcknowledgment: I would like to thank Hou for discussing with me.\r\n\r\n[1]: https://www.postgresql.org/message-id/CALj2ACWAdYxgzOpXrP%3DJMiOaWtAT2VjPiKw7ryGbipkSkocJ%3Dg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 6 Oct 2023 13:00:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher nodeHayato Kuroda\n (Fujitsu) <[email protected]>"
},
{
"msg_contents": "On Fri, Oct 6, 2023 at 6:30 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Based on comments, I revised my patch. PSA the file.\n>\n> >\n> > > Today, I discussed this problem with Andres at PGConf NYC and he\n> > > suggested as following. To verify, if there is any pending unexpected\n> > > WAL after shutdown, we can have an API like\n> > > pg_logical_replication_slot_advance() which will simply process\n> > > records without actually sending anything downstream. In this new API,\n> > > we will start with each slot's restart_lsn location and try to process\n> > > till the end of WAL, if we encounter any WAL that needs to be\n> > > processed (like we need to send the decoded WAL downstream) we can\n> > > return a false indicating that there is an unexpected WAL. The reason\n> > > to start with restart_lsn is that it is the location that we use to\n> > > start scanning the WAL anyway.\n>\n> I implemented this by using decoding context. The binary upgrade function\n> processes WALs from the confirmed_flush, and returns false if some meaningful\n> changes are found.\n>\n> Internally, I added a new decoding mode - DECODING_MODE_SILENT - and used it.\n> If the decoding context is in the mode, the output plugin is not loaded, but\n> any WALs are decoded without skipping.\n>\n\nI think it may be okay not to load the output plugin as we are not\ngoing to process any record in this case but is that the only reason\nor you have something else in mind as well?\n\n> Also, a new flag \"did_process\" is also\n> added. This flag is set if wrappers for output plugin callbacks are called during\n> the silent mode.\n>\n\nIsn't it sufficient to add a test for silent mode in\nbegin/stream_start/begin_prepare kind of APIs and set\nctx->did_process? In all other APIs, we can assert that did_process\nshouldn't be set and we never reach there when decoding mode is\nsilent.\n\n> The upgrading function checks both reorder buffer and the new\n> flag because both (non-)transactional changes should be detected. If we only\n> check reorder buffer, we miss the non-transactional one.\n>\n\n+ /* Check whether the meaningful change was found */\n+ found = (ctx->reorder->by_txn_last_xid != InvalidTransactionId ||\n+ ctx->did_process);\n\nAre you talking about this check in the patch? If so, can you please\nexplain when does the first check help?\n\n> fast_forward was changed as a variant of decoding mode.\n>\n> Currently the function is called for all the valid slot. If the approach seems\n> good, we can refactor like Bharath said [1].\n>\n> >\n> > > Then, we should also try to create slots before invoking pg_resetwal.\n> > > The idea is that we can write a new binary mode function that will do\n> > > exactly what pg_resetwal does to compute the next segment and use that\n> > > location as a new location (restart_lsn) to create the slots in a new\n> > > node. Then, pass it pg_resetwal by using the existing option '-l\n> > > walfile'. As we don't have any API that takes restart_lsn as input, we\n> > > can write a new API probably for binary mode to create slots that do\n> > > take restart_lsn as input. This will ensure that there is no new WAL\n> > > inserted by background processes between resetwal and the creation of\n> > > slots.\n>\n> Based on that, I added another binary function binary_upgrade_create_logical_replication_slot().\n> This function is similar to pg_create_logical_replication_slot(), but the\n> restart_lsn and confirmed_flush are set to *next* WAL segment. 
The pointed\n> filename is returned and it is passed to pg_resetwal command.\n>\n\nI am not sure if it is a good idea that a\nbinary_upgrade_create_logical_replication_slot() API does the logfile\nname calculation.\n\n> One consideration is that pg_log_standby_snapshot() must be executed before\n> slots consuming changes. New cluster does not have RUNNING_XACTS records so that\n> decoding context on new cluster cannot be create a consistent snapshot as-is.\n> This may lead to discard changes during the upcoming consuming event. To\n> prevent it the function is called after the final pg_resetwal.\n>\n> How do you think?\n>\n\n+ /*\n+ * Also, we mu execute pg_log_standby_snapshot() when logical replication\n+ * slots are migrated. Because RUNNING_XACTS record is required to create\n+ * a consistent snapshot.\n+ */\n+ if (count_old_cluster_logical_slots())\n+ create_consistent_snapshot();\n\nWe shouldn't do this separately. Instead\nbinary_upgrade_create_logical_replication_slot() should ensure that\ncorresponding WAL is reserved similar to what we do in\nReplicationSlotReserveWal() and then similarly invoke\nLogStandbySnapshot() to ensure that we have enough information to\nstart.\n\nFew minor comments:\n==================\n1. The commit message and other comments like atop\nget_old_cluster_logical_slot_infos() needs to be adjusted as per\nrecent changes.\n2.\n@@ -1268,7 +1346,11 @@ stream_start_cb_wrapper(ReorderBuffer *cache,\nReorderBufferTXN *txn,\n LogicalErrorCallbackState state;\n ErrorContextCallback errcallback;\n\n- Assert(!ctx->fast_forward);\n+ /*\n+ * In silent mode all the two-phase callbacks are not set so that the\n+ * wrapper should not be called.\n+ */\n+ Assert(ctx->decoding_mode == DECODING_MODE_NORMAL);\n\nThis and other similar comments doesn't seems to be consistent as the\nfunction name and comments are not matching.\n\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 7 Oct 2023 03:46:18 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher nodeHayato Kuroda\n (Fujitsu) <[email protected]>"
},
{
"msg_contents": "On Thu, Oct 5, 2023 at 6:43 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Oct 5, 2023 at 1:48 AM Amit Kapila <[email protected]> wrote:\n> >\n>\n> > The other potential problem Andres pointed out is that during shutdown\n> > if due to some reason, the walreceiver goes down, we won't be able to\n> > send the required WAL and users won't be able to ensure that because\n> > even after restart the same situation can happen. The ideal way is to\n> > have something that puts the system in READ ONLY state during shutdown\n> > and then we can probably allow walreceivers to reconnect and receive\n> > the required WALs. As we don't have such functionality available and\n> > it won't be easy to achieve the same, we can leave this for now.\n> >\n> > Thoughts?\n>\n> You mean walreceiver for streaming replication? Or the apply workers\n> going down for logical replication?\n>\n\nApply workers.\n\n>\n> If there's yet-to-be-sent-out WAL,\n> pg_upgrade will fail no? How does the above scenario a problem for\n> pg_upgrade of a cluster with just logical replication slots?\n>\n\nEven, if there is a WAL yet to be sent, the walsender will simply exit\nas it will receive PqMsg_Terminate ('X') from standby. See\nProcessRepliesIfAny(). After that shutdown checkpoint will finish. So,\nin this case upgrade can fail due to slots. But, I think the server\nshould be able to succeed in consecutive runs. Does this make sense?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 7 Oct 2023 05:39:34 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, 6 Oct 2023 at 18:30, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> Based on comments, I revised my patch. PSA the file.\n>\n> >\n> > > Today, I discussed this problem with Andres at PGConf NYC and he\n> > > suggested as following. To verify, if there is any pending unexpected\n> > > WAL after shutdown, we can have an API like\n> > > pg_logical_replication_slot_advance() which will simply process\n> > > records without actually sending anything downstream. In this new API,\n> > > we will start with each slot's restart_lsn location and try to process\n> > > till the end of WAL, if we encounter any WAL that needs to be\n> > > processed (like we need to send the decoded WAL downstream) we can\n> > > return a false indicating that there is an unexpected WAL. The reason\n> > > to start with restart_lsn is that it is the location that we use to\n> > > start scanning the WAL anyway.\n>\n> I implemented this by using decoding context. The binary upgrade function\n> processes WALs from the confirmed_flush, and returns false if some meaningful\n> changes are found.\n>\n> Internally, I added a new decoding mode - DECODING_MODE_SILENT - and used it.\n> If the decoding context is in the mode, the output plugin is not loaded, but\n> any WALs are decoded without skipping. Also, a new flag \"did_process\" is also\n> added. This flag is set if wrappers for output plugin callbacks are called during\n> the silent mode. The upgrading function checks both reorder buffer and the new\n> flag because both (non-)transactional changes should be detected. If we only\n> check reorder buffer, we miss the non-transactional one.\n>\n> fast_forward was changed as a variant of decoding mode.\n>\n> Currently the function is called for all the valid slot. If the approach seems\n> good, we can refactor like Bharath said [1].\n>\n> >\n> > > Then, we should also try to create slots before invoking pg_resetwal.\n> > > The idea is that we can write a new binary mode function that will do\n> > > exactly what pg_resetwal does to compute the next segment and use that\n> > > location as a new location (restart_lsn) to create the slots in a new\n> > > node. Then, pass it pg_resetwal by using the existing option '-l\n> > > walfile'. As we don't have any API that takes restart_lsn as input, we\n> > > can write a new API probably for binary mode to create slots that do\n> > > take restart_lsn as input. This will ensure that there is no new WAL\n> > > inserted by background processes between resetwal and the creation of\n> > > slots.\n>\n> Based on that, I added another binary function binary_upgrade_create_logical_replication_slot().\n> This function is similar to pg_create_logical_replication_slot(), but the\n> restart_lsn and confirmed_flush are set to *next* WAL segment. The pointed\n> filename is returned and it is passed to pg_resetwal command.\n>\n> One consideration is that pg_log_standby_snapshot() must be executed before\n> slots consuming changes. New cluster does not have RUNNING_XACTS records so that\n> decoding context on new cluster cannot be create a consistent snapshot as-is.\n> This may lead to discard changes during the upcoming consuming event. 
To\n> prevent it the function is called after the final pg_resetwal.\n\nFew comments:\n1) Should we add binary upgrade check \"CHECK_IS_BINARY_UPGRADE\" for\nthis funcion too:\n+binary_upgrade_create_logical_replication_slot(PG_FUNCTION_ARGS)\n+{\n+ Name name = PG_GETARG_NAME(0);\n+ Name plugin = PG_GETARG_NAME(1);\n+\n+ /* Temporary slots is never handled in this function */\n+ bool two_phase = PG_GETARG_BOOL(2);\n\n2) Generally we are specifying the slot name in this case, is slot\nname null check required:\n+Datum\n+binary_upgrade_validate_wal_logical_end(PG_FUNCTION_ARGS)\n+{\n+ Name slot_name;\n+ XLogRecPtr end_of_wal;\n+ LogicalDecodingContext *ctx = NULL;\n+ bool has_record;\n+\n+ CHECK_IS_BINARY_UPGRADE;\n+\n+ /* Quick exit if the input is NULL */\n+ if (PG_ARGISNULL(0))\n+ PG_RETURN_BOOL(false);\n\n3) Since this is similar to pg_create_logical_replication_slot, can we\nadd a comment saying any change in pg_create_logical_replication_slot\nwould also need the same check to be added in\nbinary_upgrade_create_logical_replication_slot:\n+/*\n+ * SQL function for creating a new logical replication slot.\n+ *\n+ * This function is almost same as pg_create_logical_replication_slot(), but\n+ * this can specify the restart_lsn.\n+ */\n+Datum\n+binary_upgrade_create_logical_replication_slot(PG_FUNCTION_ARGS)\n+{\n+ Name name = PG_GETARG_NAME(0);\n+ Name plugin = PG_GETARG_NAME(1);\n+\n+ /* Temporary slots is never handled in this function */\n\n4) Any conclusion on this try catch comment, do you want to add which\nsetting you want to revert in catch, if try/catch is not required we\ncan remove this comment:\n+ ReplicationSlotAcquire(NameStr(*slot_name), true);\n+\n+ /* XXX: Is PG_TRY/CATCH needed around here? */\n+\n+ /*\n+ * We use silent mode here to decode all changes without\noutputting them,\n+ * allowing us to detect all the records that could be sent downstream.\n+ */\n\n5) I felt these 2 comments can be combined as both are trying to say\nthe same thing:\n+ * This is a special purpose function to ensure that there are no WAL records\n+ * pending to be decoded after the given LSN.\n+ *\n+ * It is used to ensure that there is no pending WAL to be consumed for\n+ * the logical slots.\n\n6) I feel this memset is not required as we are initializing at the\nbeginning of function, if you want to keep the memset, the\ninitialization can be removed:\n+ values[2] = CStringGetTextDatum(xlogfilename);\n+\n+ memset(nulls, 0, sizeof(nulls));\n+\n+ tuple = heap_form_tuple(tupdesc, values, nulls);\n\n7) looks like a typo, \"mu\" should be \"must\":\n+ /*\n+ * Also, we mu execute pg_log_standby_snapshot() when logical\nreplication\n+ * slots are migrated. Because RUNNING_XACTS record is\nrequired to create\n+ * a consistent snapshot.\n+ */\n+ if (count_old_cluster_logical_slots())\n+ create_consistent_snapshot();\n\n8) consitent should be consistent:\n+/*\n+ * Log the details of the current snapshot to the WAL, allowing the snapshot\n+ * state to be reconstructed for logical decoding on the upgraded slots.\n+ */\n+static void\n+create_consistent_snapshot(void)\n+{\n+ DbInfo *old_db = &old_cluster.dbarr.dbs[0];\n+ PGconn *conn;\n+\n+ prep_status(\"Creating a consitent snapshot on new cluster\");\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 9 Oct 2023 14:29:23 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher nodeHayato Kuroda\n (Fujitsu) <[email protected]>"
},
{
"msg_contents": "On Sat, Oct 7, 2023 at 3:46 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 6, 2023 at 6:30 PM Hayato Kuroda (Fujitsu)\n> >\n> > Based on that, I added another binary function binary_upgrade_create_logical_replication_slot().\n> > This function is similar to pg_create_logical_replication_slot(), but the\n> > restart_lsn and confirmed_flush are set to *next* WAL segment. The pointed\n> > filename is returned and it is passed to pg_resetwal command.\n> >\n>\n> I am not sure if it is a good idea that a\n> binary_upgrade_create_logical_replication_slot() API does the logfile\n> name calculation.\n>\n\nThe other problem is that pg_resetwal removes all pre-existing WAL\nfiles which in this case could lead to the removal of the WAL file\ncorresponding to restart_lsn. This is because at least the shutdown\ncheckpoint record will be written after the creation of slots which\ncould be in the new file used for restart_lsn. Then when we invoke\npg_resetwal, it can remove that file.\n\nOne idea to deal with this could be to do the reset WAL stuff\n(FindEndOfXLOG(), KillExistingXLOG(), KillExistingArchiveStatus(),\nWriteEmptyXLOG()) in a separate function (say in pg_upgrade) and then\ncreate slots. If we do this, then we additionally need an option in\npg_resetwal which skips resetting the WAL as that would have been done\nbefore creating the slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Oct 2023 13:10:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> > Internally, I added a new decoding mode - DECODING_MODE_SILENT - and\r\n> used it.\r\n> > If the decoding context is in the mode, the output plugin is not loaded, but\r\n> > any WALs are decoded without skipping.\r\n> >\r\n> \r\n> I think it may be okay not to load the output plugin as we are not\r\n> going to process any record in this case but is that the only reason\r\n> or you have something else in mind as well?\r\n\r\nMy main concern was for skipping to set output plugin options. Even if the\r\npgoutput plugin, some options like protocol_version, publications, etc are\r\nrequired while loading a plugin. We cannot predict requirements for external\r\nplugins. Based on that I thought output plugins should not be loaded during the\r\ndecode.\r\n\r\n> > Also, a new flag \"did_process\" is also\r\n> > added. This flag is set if wrappers for output plugin callbacks are called during\r\n> > the silent mode.\r\n>\r\n> Isn't it sufficient to add a test for silent mode in\r\n> begin/stream_start/begin_prepare kind of APIs and set\r\n> ctx->did_process? In all other APIs, we can assert that did_process\r\n> shouldn't be set and we never reach there when decoding mode is\r\n> silent.\r\n>\r\n> \r\n> + /* Check whether the meaningful change was found */\r\n> + found = (ctx->reorder->by_txn_last_xid != InvalidTransactionId ||\r\n> + ctx->did_process);\r\n> \r\n> Are you talking about this check in the patch? If so, can you please\r\n> explain when does the first check help?\r\n\r\nI changed around here so I describe once again.\r\n\r\nA flag (output_skipped) is set when the transaction is decoded till the end in\r\nsilent mode. It is done in DecodeTXNNeedSkip() because the function is the common\r\npath for both committed/aborted transactions. Also, DecodeTXNNeedSkip() returns\r\ntrue when the decoding context is in the silent mode. Therefore, any cb_wrapper\r\nfunctions would not be called anymore. DecodingContextHasdecodedItems() just\r\nreturns output_skipped.\r\n\r\nThis approach needs to read WALs till end of transactions before returning the\r\nupgrading function, but codes look simpler than the previous version.\r\n\r\n> >\r\n> > Based on that, I added another binary function\r\n> binary_upgrade_create_logical_replication_slot().\r\n> > This function is similar to pg_create_logical_replication_slot(), but the\r\n> > restart_lsn and confirmed_flush are set to *next* WAL segment. The pointed\r\n> > filename is returned and it is passed to pg_resetwal command.\r\n> >\r\n> \r\n> I am not sure if it is a good idea that a\r\n> binary_upgrade_create_logical_replication_slot() API does the logfile\r\n> name calculation.\r\n> \r\n> > One consideration is that pg_log_standby_snapshot() must be executed before\r\n> > slots consuming changes. New cluster does not have RUNNING_XACTS records\r\n> so that\r\n> > decoding context on new cluster cannot be create a consistent snapshot as-is.\r\n> > This may lead to discard changes during the upcoming consuming event. To\r\n> > prevent it the function is called after the final pg_resetwal.\r\n> >\r\n> > How do you think?\r\n> >\r\n> \r\n> + /*\r\n> + * Also, we mu execute pg_log_standby_snapshot() when logical replication\r\n> + * slots are migrated. Because RUNNING_XACTS record is required to create\r\n> + * a consistent snapshot.\r\n> + */\r\n> + if (count_old_cluster_logical_slots())\r\n> + create_consistent_snapshot();\r\n> \r\n> We shouldn't do this separately. 
Instead\r\n> binary_upgrade_create_logical_replication_slot() should ensure that\r\n> corresponding WAL is reserved similar to what we do in\r\n> ReplicationSlotReserveWal() and then similarly invoke\r\n> LogStandbySnapshot() to ensure that we have enough information to\r\n> start.\r\n\r\nI did not handle these parts because they needed more analysis. Let's discuss\r\nin later versions.\r\n\r\n> \r\n> Few minor comments:\r\n> ==================\r\n> 1. The commit message and other comments like atop\r\n> get_old_cluster_logical_slot_infos() needs to be adjusted as per\r\n> recent changes.\r\n\r\nI revisited comments and updated.\r\n\r\n> 2.\r\n> @@ -1268,7 +1346,11 @@ stream_start_cb_wrapper(ReorderBuffer *cache,\r\n> ReorderBufferTXN *txn,\r\n> LogicalErrorCallbackState state;\r\n> ErrorContextCallback errcallback;\r\n> \r\n> - Assert(!ctx->fast_forward);\r\n> + /*\r\n> + * In silent mode all the two-phase callbacks are not set so that the\r\n> + * wrapper should not be called.\r\n> + */\r\n> + Assert(ctx->decoding_mode == DECODING_MODE_NORMAL);\r\n> \r\n> This and other similar comments doesn't seems to be consistent as the\r\n> function name and comments are not matching.\r\n\r\nFixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 10 Oct 2023 11:21:23 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for reviewing! You can available new version in [1].\r\n\r\n> \r\n> Few comments:\r\n> 1) Should we add binary upgrade check \"CHECK_IS_BINARY_UPGRADE\" for\r\n> this funcion too:\r\n> +binary_upgrade_create_logical_replication_slot(PG_FUNCTION_ARGS)\r\n> +{\r\n> + Name name = PG_GETARG_NAME(0);\r\n> + Name plugin = PG_GETARG_NAME(1);\r\n> +\r\n> + /* Temporary slots is never handled in this function */\r\n> + bool two_phase = PG_GETARG_BOOL(2);\r\n\r\nYeah, needed. For testing purpose I did not add, but it should have.\r\nAdded.\r\n\r\n> 2) Generally we are specifying the slot name in this case, is slot\r\n> name null check required:\r\n> +Datum\r\n> +binary_upgrade_validate_wal_logical_end(PG_FUNCTION_ARGS)\r\n> +{\r\n> + Name slot_name;\r\n> + XLogRecPtr end_of_wal;\r\n> + LogicalDecodingContext *ctx = NULL;\r\n> + bool has_record;\r\n> +\r\n> + CHECK_IS_BINARY_UPGRADE;\r\n> +\r\n> + /* Quick exit if the input is NULL */\r\n> + if (PG_ARGISNULL(0))\r\n> + PG_RETURN_BOOL(false);\r\n\r\n\r\nNULL check was added. I felt that we should raise an ERROR. \r\n\r\n> 3) Since this is similar to pg_create_logical_replication_slot, can we\r\n> add a comment saying any change in pg_create_logical_replication_slot\r\n> would also need the same check to be added in\r\n> binary_upgrade_create_logical_replication_slot:\r\n> +/*\r\n> + * SQL function for creating a new logical replication slot.\r\n> + *\r\n> + * This function is almost same as pg_create_logical_replication_slot(), but\r\n> + * this can specify the restart_lsn.\r\n> + */\r\n> +Datum\r\n> +binary_upgrade_create_logical_replication_slot(PG_FUNCTION_ARGS)\r\n> +{\r\n> + Name name = PG_GETARG_NAME(0);\r\n> + Name plugin = PG_GETARG_NAME(1);\r\n> +\r\n> + /* Temporary slots is never handled in this function */\r\n\r\nAdded.\r\n\r\n> 4) Any conclusion on this try catch comment, do you want to add which\r\n> setting you want to revert in catch, if try/catch is not required we\r\n> can remove this comment:\r\n> + ReplicationSlotAcquire(NameStr(*slot_name), true);\r\n> +\r\n> + /* XXX: Is PG_TRY/CATCH needed around here? */\r\n> +\r\n> + /*\r\n> + * We use silent mode here to decode all changes without\r\n> outputting them,\r\n> + * allowing us to detect all the records that could be sent downstream.\r\n> + */\r\n\r\nAfter considering more, it's OK to raise an ERROR because caller can detect it.\r\nAlso, there are any setting to be reverted. The comment is removed.\r\n\r\n> 5) I felt these 2 comments can be combined as both are trying to say\r\n> the same thing:\r\n> + * This is a special purpose function to ensure that there are no WAL records\r\n> + * pending to be decoded after the given LSN.\r\n> + *\r\n> + * It is used to ensure that there is no pending WAL to be consumed for\r\n> + * the logical slots.\r\n\r\nLater part was removed.\r\n\r\n> 6) I feel this memset is not required as we are initializing at the\r\n> beginning of function, if you want to keep the memset, the\r\n> initialization can be removed:\r\n> + values[2] = CStringGetTextDatum(xlogfilename);\r\n> +\r\n> + memset(nulls, 0, sizeof(nulls));\r\n> +\r\n> + tuple = heap_form_tuple(tupdesc, values, nulls);\r\n\r\nThe initialization was removed to follow pg_create_logical_replication_slot.\r\n\r\n> 7) looks like a typo, \"mu\" should be \"must\":\r\n> + /*\r\n> + * Also, we mu execute pg_log_standby_snapshot() when logical\r\n> replication\r\n> + * slots are migrated. 
Because RUNNING_XACTS record is\r\n> required to create\r\n> + * a consistent snapshot.\r\n> + */\r\n> + if (count_old_cluster_logical_slots())\r\n> + create_consistent_snapshot();\r\n\r\nFixed.\r\n\r\n> 8) consitent should be consistent:\r\n> +/*\r\n> + * Log the details of the current snapshot to the WAL, allowing the snapshot\r\n> + * state to be reconstructed for logical decoding on the upgraded slots.\r\n> + */\r\n> +static void\r\n> +create_consistent_snapshot(void)\r\n> +{\r\n> + DbInfo *old_db = &old_cluster.dbarr.dbs[0];\r\n> + PGconn *conn;\r\n> +\r\n> + prep_status(\"Creating a consitent snapshot on new cluster\");\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866068CB6591C8AE1F9690BF5CDA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 10 Oct 2023 11:22:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nThanks for giving comments and apologize for late reply.\r\nNew version is available in [1].\r\n\r\n> +1 for this approach. It looks neat.\r\n> \r\n> I think we also need to add TAP tests to generate decodable WAL\r\n> records (RUNNING_XACT, CHECKPOINT_ONLINE, XLOG_FPI_FOR_HINT,\r\n> XLOG_SWITCH, XLOG_PARAMETER_CHANGE, XLOG_HEAP2_PRUNE) during\r\n> pg_upgrade as described here\r\n> https://www.postgresql.org/message-id/TYAPR01MB58660273EACEFC5BF256\r\n> B133F50DA%40TYAPR01MB5866.jpnprd01.prod.outlook.com.\r\n> Basically, these were the exceptional WAL records that may be\r\n> generated by pg_upgrade, so having tests for them is good.\r\n\r\nHmm, I'm not sure it is really good. If we add such a test, we may have to add\r\nfurther tests in future if new WAL log types during upgrade is introduced.\r\nCurrently we do not have if-statement for each WAL types, so it does not improve\r\ncoverage, I thought. Another concern is that I'm not sure how do we simply and\r\nsurely generate XLOG_HEAP2_PRUNE.\r\n\r\nBased on above, I did not add the test case for now.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866068CB6591C8AE1F9690BF5CDA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 10 Oct 2023 11:23:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 4:51 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> >\n> > Isn't it sufficient to add a test for silent mode in\n> > begin/stream_start/begin_prepare kind of APIs and set\n> > ctx->did_process? In all other APIs, we can assert that did_process\n> > shouldn't be set and we never reach there when decoding mode is\n> > silent.\n> >\n> >\n> > + /* Check whether the meaningful change was found */\n> > + found = (ctx->reorder->by_txn_last_xid != InvalidTransactionId ||\n> > + ctx->did_process);\n> >\n> > Are you talking about this check in the patch? If so, can you please\n> > explain when does the first check help?\n>\n> I changed around here so I describe once again.\n>\n> A flag (output_skipped) is set when the transaction is decoded till the end in\n> silent mode. It is done in DecodeTXNNeedSkip() because the function is the common\n> path for both committed/aborted transactions. Also, DecodeTXNNeedSkip() returns\n> true when the decoding context is in the silent mode. Therefore, any cb_wrapper\n> functions would not be called anymore. DecodingContextHasdecodedItems() just\n> returns output_skipped.\n>\n> This approach needs to read WALs till end of transactions before returning the\n> upgrading function, but codes look simpler than the previous version.\n>\n\n DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,\n Oid txn_dbid, RepOriginId origin_id)\n {\n- return (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||\n- (txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||\n- ctx->fast_forward || FilterByOrigin(ctx, origin_id));\n+ bool need_skip;\n+\n+ need_skip = (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||\n+ (txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||\n+ ctx->decoding_mode != DECODING_MODE_NORMAL ||\n+ FilterByOrigin(ctx, origin_id));\n+\n+ /* Set a flag if we are in the slient mode */\n+ if (ctx->decoding_mode == DECODING_MODE_SILENT)\n+ ctx->output_skipped = true;\n+\n+ return need_skip;\n\nI think you need to set the new flag only when we are not skipping the\ntransaction or in other words when we decide to process the\ntransaction. Otherwise, how will you distinguish the case where the\nxact is already decoded and sent to client?\n\n--\nWith Regards,\nAmit Kapila\n\n\n",
"msg_date": "Tue, 10 Oct 2023 18:17:39 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 6:17 PM Amit Kapila <[email protected]> wrote:\n>\n> DecodeTXNNeedSkip(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,\n> Oid txn_dbid, RepOriginId origin_id)\n> {\n> - return (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||\n> - (txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||\n> - ctx->fast_forward || FilterByOrigin(ctx, origin_id));\n> + bool need_skip;\n> +\n> + need_skip = (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||\n> + (txn_dbid != InvalidOid && txn_dbid != ctx->slot->data.database) ||\n> + ctx->decoding_mode != DECODING_MODE_NORMAL ||\n> + FilterByOrigin(ctx, origin_id));\n> +\n> + /* Set a flag if we are in the slient mode */\n> + if (ctx->decoding_mode == DECODING_MODE_SILENT)\n> + ctx->output_skipped = true;\n> +\n> + return need_skip;\n>\n> I think you need to set the new flag only when we are not skipping the\n> transaction or in other words when we decide to process the\n> transaction. Otherwise, how will you distinguish the case where the\n> xact is already decoded and sent to client?\n>\n\nIn the attached patch atop your v47*, I have changed it to show you\nwhat I have in mind.\n\nA few more comments:\n=================\n1.\n+\n+ /*\n+ * Did the logical decoding context skip outputting any changes?\n+ *\n+ * This flag is used only when the context is in the silent mode.\n+ */\n+ bool output_skipped;\n } LogicalDecodingContext;\n\nThis doesn't seem to convey the meaning to the caller. How about\nprocessing_required? BTW, I have made this change as well in the\npatch.\n\n2.\n@@ -295,7 +295,7 @@ xact_decode(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf)\n*/\nif (TransactionIdIsValid(xid))\n{\n- if (!ctx->fast_forward)\n+ if (ctx->decoding_mode != DECODING_MODE_FAST_FORWARD)\nReorderBufferAddInvalidations(reorder, xid,\n buf->origptr,\n invals->nmsgs,\n@@ -303,7 +303,7 @@ xact_decode(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf)\nReorderBufferXidSetCatalogChanges(ctx->reorder, xid,\n buf->origptr);\n}\n- else if ((!ctx->fast_forward))\n+ else if (ctx->decoding_mode != DECODING_MODE_FAST_FORWARD)\nReorderBufferImmediateInvalidation(ctx->reorder,\n invals->nmsgs,\n invals->msgs);\n\nWe don't to execute the invalidations even in silent mode. Looking at\nthis and other changes in the patch related to silent mode, I wonder\nwhether we really need to introduce 'silent_mode'. Can't we simply set\nprocessing_required when 'fast_forward' mode is true and then let the\ncaller decide whether it needs to further process the WAL?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 11 Oct 2023 13:13:08 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> > I think you need to set the new flag only when we are not skipping the\r\n> > transaction or in other words when we decide to process the\r\n> > transaction. Otherwise, how will you distinguish the case where the\r\n> > xact is already decoded and sent to client?\r\n\r\nActually, I wondered what should be, but I followed it. Indeed, we should avoid\r\nthe case which the xact has already been sent. But I was not sure other conditions\r\nlike transactions for another database - IIUC previous version regarded it as not\r\nacceptable.\r\n\r\nNow, I reconsider these cases can be ignored because they would not be sent to\r\nsubscriber. The consistency between pub/sub would not be broken even if these\r\nWALs are remained.\r\n\r\n> In the attached patch atop your v47*, I have changed it to show you\r\n> what I have in mind.\r\n\r\nThanks, was included.\r\n\r\n> A few more comments:\r\n> =================\r\n> 1.\r\n> +\r\n> + /*\r\n> + * Did the logical decoding context skip outputting any changes?\r\n> + *\r\n> + * This flag is used only when the context is in the silent mode.\r\n> + */\r\n> + bool output_skipped;\r\n> } LogicalDecodingContext;\r\n> \r\n> This doesn't seem to convey the meaning to the caller. How about\r\n> processing_required? BTW, I have made this change as well in the\r\n> patch.\r\n\r\nLGTM, changed like that.\r\n\r\n> 2.\r\n> @@ -295,7 +295,7 @@ xact_decode(LogicalDecodingContext *ctx,\r\n> XLogRecordBuffer *buf)\r\n> */\r\n> if (TransactionIdIsValid(xid))\r\n> {\r\n> - if (!ctx->fast_forward)\r\n> + if (ctx->decoding_mode != DECODING_MODE_FAST_FORWARD)\r\n> ReorderBufferAddInvalidations(reorder, xid,\r\n> buf->origptr,\r\n> invals->nmsgs,\r\n> @@ -303,7 +303,7 @@ xact_decode(LogicalDecodingContext *ctx,\r\n> XLogRecordBuffer *buf)\r\n> ReorderBufferXidSetCatalogChanges(ctx->reorder, xid,\r\n> buf->origptr);\r\n> }\r\n> - else if ((!ctx->fast_forward))\r\n> + else if (ctx->decoding_mode != DECODING_MODE_FAST_FORWARD)\r\n> ReorderBufferImmediateInvalidation(ctx->reorder,\r\n> invals->nmsgs,\r\n> invals->msgs);\r\n> \r\n> We don't to execute the invalidations even in silent mode. Looking at\r\n> this and other changes in the patch related to silent mode, I wonder\r\n> whether we really need to introduce 'silent_mode'. Can't we simply set\r\n> processing_required when 'fast_forward' mode is true and then let the\r\n> caller decide whether it needs to further process the WAL?\r\n\r\nAfter considering again, I agreed to remove silent mode. Initially, it was\r\nintroduced because did_process flag is set at XXX_cb_wrapper and reorderbuffer\r\nlayer. Now, the processing_required is set in DecodeCommit()->DecodeTXNNeedSkip(),\r\nwhich means that each records does not need to be decoded. Based on that,\r\nI removed the silent mode and use fast-forwarding mode instead.\r\n\r\nAlso, some parts (mostly code comments) were modified.\r\n\r\nAcknowledgement: Thanks Peter and Hou for discussing with me.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 11 Oct 2023 10:57:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Oct 11, 2023 at 4:27 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for reviewing! PSA new version.\n>\n\nSome more comments:\n1. Let's restruture binary_upgrade_validate_wal_logical_end() a bit.\nFirst, let's change its name to binary_upgrade_slot_has_pending_wal()\nor something like that. Then move the context creation and free\nrelated code into DecodingContextHasDecodedItems(). We can rename\nDecodingContextHasDecodedItems() as\npg_logical_replication_slot_has_pending_wal() and place it in\nslotfuncs.c. This will make the code structure similar to other slot\nfunctions like pg_replication_slot_advance().\n\n2. + * Returns true if there are no changes after the confirmed_flush_lsn.\n\nHow about something like: \"Returns true if there are no decodable WAL\nrecords after the confirmed_flush_lsn.\"?\n\n3. Shouldn't we need to call CheckSlotPermissions() in\nbinary_upgrade_validate_wal_logical_end?\n\n4.\n+ /*\n+ * Also, set processing_required flag if the message is not\n+ * transactional. It is needed to notify the message's existence to\n+ * the caller side. Usually, the flag is set when either the COMMIT or\n+ * ABORT records are decoded, but this must be turned on here because\n+ * the non-transactional logical message is decoded without waiting\n+ * for these records.\n+ */\n\nThe first sentence of the comments doesn't seem to be required as that\njust says what the code does. So, let's slightly change it to: \"We\nneed to set processing_required flag to notify the message's existence\nto the caller side. Usually, the flag is set when either the COMMIT or\nABORT records are decoded, but this must be turned on here because the\nnon-transactional logical message is decoded without waiting for these\nrecords.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Oct 2023 14:58:48 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for your suggestion! PSA new version.\r\n\r\n> The other problem is that pg_resetwal removes all pre-existing WAL\r\n> files which in this case could lead to the removal of the WAL file\r\n> corresponding to restart_lsn. This is because at least the shutdown\r\n> checkpoint record will be written after the creation of slots which\r\n> could be in the new file used for restart_lsn. Then when we invoke\r\n> pg_resetwal, it can remove that file.\r\n> \r\n> One idea to deal with this could be to do the reset WAL stuff\r\n> (FindEndOfXLOG(), KillExistingXLOG(), KillExistingArchiveStatus(),\r\n> WriteEmptyXLOG()) in a separate function (say in pg_upgrade) and then\r\n> create slots. If we do this, then we additionally need an option in\r\n> pg_resetwal which skips resetting the WAL as that would have been done\r\n> before creating the slots.\r\n\r\nBased on above idea, I made new version patch which some functionalities were\r\nexported from pg_resetwal. In this approach, pg_upgrade itself removed WALs and\r\nthen create logical slots, then pg_resetwal would be called with new option\r\n--no-switch, which avoid to switch a WAL segment file. The option is only used\r\nfor the upgrading purpose so it is not written in doc and usage(). This option\r\nis not required if pg_resetwal -o does not discard WAL records. Please see the\r\nfork thread [1].\r\n\r\nWe do not have to reserve future restart_lsn while creating a slot, so the binary\r\nfunction binary_upgrade_create_logical_replication_slot() was removed.\r\n\r\nAnother advantage of this approach is to avoid calling pg_log_standby_snapshot()\r\nafter the pg_resetwal. This was needed because of two reasons, but they were\r\nresolved automatically.\r\n 1) pg_resetwal removes all WAL files.\r\n 2) Logical slots requires a RUNNING_XACTS record for building a snapshot.\r\n \r\n[1]: https://www.postgresql.org/message-id/CAA4eK1KRyPMiY4fW98qFofsYrPd87Oc83zDNxSeHfTYh_asdBg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 12 Oct 2023 11:41:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for reviewing! New patch is available at [1].\r\n\r\n> \r\n> Some more comments:\r\n> 1. Let's restruture binary_upgrade_validate_wal_logical_end() a bit.\r\n> First, let's change its name to binary_upgrade_slot_has_pending_wal()\r\n> or something like that. Then move the context creation and free\r\n> related code into DecodingContextHasDecodedItems(). We can rename\r\n> DecodingContextHasDecodedItems() as\r\n> pg_logical_replication_slot_has_pending_wal() and place it in\r\n> slotfuncs.c. This will make the code structure similar to other slot\r\n> functions like pg_replication_slot_advance().\r\n\r\nSeems clearer than mine. Fixed.\r\n\r\n> 2. + * Returns true if there are no changes after the confirmed_flush_lsn.\r\n> \r\n> How about something like: \"Returns true if there are no decodable WAL\r\n> records after the confirmed_flush_lsn.\"?\r\n\r\nFixed.\r\n\r\n> 3. Shouldn't we need to call CheckSlotPermissions() in\r\n> binary_upgrade_validate_wal_logical_end?\r\n\r\nAdded, but actually it is not needed. This is because only superusers can connect\r\nto the server while upgrading. Please see below codes in InitPostgres().\r\n\r\n```\r\n\tif (IsBinaryUpgrade && !am_superuser)\r\n\t{\r\n\t\tereport(FATAL,\r\n\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\r\n\t\t\t\t errmsg(\"must be superuser to connect in binary upgrade mode\")));\r\n\t}\r\n```\r\n\r\n> 4.\r\n> + /*\r\n> + * Also, set processing_required flag if the message is not\r\n> + * transactional. It is needed to notify the message's existence to\r\n> + * the caller side. Usually, the flag is set when either the COMMIT or\r\n> + * ABORT records are decoded, but this must be turned on here because\r\n> + * the non-transactional logical message is decoded without waiting\r\n> + * for these records.\r\n> + */\r\n> \r\n> The first sentence of the comments doesn't seem to be required as that\r\n> just says what the code does. So, let's slightly change it to: \"We\r\n> need to set processing_required flag to notify the message's existence\r\n> to the caller side. Usually, the flag is set when either the COMMIT or\r\n> ABORT records are decoded, but this must be turned on here because the\r\n> non-transactional logical message is decoded without waiting for these\r\n> records.\"\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866B0614F80CE9F5EF051BDF5D3A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 12 Oct 2023 11:42:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\nHere is a new patch.\r\n\r\nPreviously I wrote:\r\n> Based on above idea, I made new version patch which some functionalities were\r\n> exported from pg_resetwal. In this approach, pg_upgrade itself removed WALs and\r\n> then create logical slots, then pg_resetwal would be called with new option\r\n> --no-switch, which avoid to switch a WAL segment file. The option is only used\r\n> for the upgrading purpose so it is not written in doc and usage(). This option\r\n> is not required if pg_resetwal -o does not discard WAL records. Please see the\r\n> fork thread [1].\r\n\r\nBut for now, these changes were reverted because changing pg_resetwal -o stuff\r\nmay be a bit risky. This has been located more than ten years so that we should\r\nbe more careful for modifying.\r\nAlso, I cannot come up with problems if slots are created after the pg_resetwal.\r\nBackground processes would not generate decodable changes (listed in [1]), and\r\nBGworkers by extensions could be ignored [2].\r\nBased on the discussion on forked thread [3] and if it is accepted, we will apply\r\nagain.\r\n\r\nAlso. some comments and function name was improved.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58660273EACEFC5BF256B133F50DA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1L4JB%2BKH_4EQryDEhyaLBPW6V20LqjdzOxCWyL7rbxqsA%40mail.gmail.com\r\n[3]: https://www.postgresql.org/message-id/flat/CAA4eK1KRyPMiY4fW98qFofsYrPd87Oc83zDNxSeHfTYh_asdBg%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Sat, 14 Oct 2023 05:15:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sat, Oct 14, 2023 at 10:45 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is a new patch.\n>\n> Previously I wrote:\n> > Based on above idea, I made new version patch which some functionalities were\n> > exported from pg_resetwal. In this approach, pg_upgrade itself removed WALs and\n> > then create logical slots, then pg_resetwal would be called with new option\n> > --no-switch, which avoid to switch a WAL segment file. The option is only used\n> > for the upgrading purpose so it is not written in doc and usage(). This option\n> > is not required if pg_resetwal -o does not discard WAL records. Please see the\n> > fork thread [1].\n>\n> But for now, these changes were reverted because changing pg_resetwal -o stuff\n> may be a bit risky. This has been located more than ten years so that we should\n> be more careful for modifying.\n> Also, I cannot come up with problems if slots are created after the pg_resetwal.\n> Background processes would not generate decodable changes (listed in [1]), and\n> BGworkers by extensions could be ignored [2].\n> Based on the discussion on forked thread [3] and if it is accepted, we will apply\n> again.\n>\n\nYeah, I think introducing additional complexity unless it is really\nrequired sounds a bit scary to me as well. BTW, please find attached\nsome cosmetic changes.\n\nOne minor additional comment:\n+# Initialize subscriber cluster\n+my $subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n+$subscriber->init(allows_streaming => 'logical');\n\nWhy do we need to set wal_level as logical for subscribers?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 16 Oct 2023 14:43:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, 16 Oct 2023 at 14:44, Amit Kapila <[email protected]> wrote:\n>\n> On Sat, Oct 14, 2023 at 10:45 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Here is a new patch.\n> >\n> > Previously I wrote:\n> > > Based on above idea, I made new version patch which some functionalities were\n> > > exported from pg_resetwal. In this approach, pg_upgrade itself removed WALs and\n> > > then create logical slots, then pg_resetwal would be called with new option\n> > > --no-switch, which avoid to switch a WAL segment file. The option is only used\n> > > for the upgrading purpose so it is not written in doc and usage(). This option\n> > > is not required if pg_resetwal -o does not discard WAL records. Please see the\n> > > fork thread [1].\n> >\n> > But for now, these changes were reverted because changing pg_resetwal -o stuff\n> > may be a bit risky. This has been located more than ten years so that we should\n> > be more careful for modifying.\n> > Also, I cannot come up with problems if slots are created after the pg_resetwal.\n> > Background processes would not generate decodable changes (listed in [1]), and\n> > BGworkers by extensions could be ignored [2].\n> > Based on the discussion on forked thread [3] and if it is accepted, we will apply\n> > again.\n> >\n\n1) Should this:\n+# Copyright (c) 2023, PostgreSQL Global Development Group\n+\n+# Tests for upgrading replication slots\n+\nbe:\n\"Tests for upgrading logical replication slots\"\n\n2) This statement is not entirely true:\n+ <listitem>\n+ <para>\n+ The old cluster has replicated all the changes to subscribers.\n+ </para>\n\nIf we have some changes like shutdown_checkpoint the upgrade passes,\nif we have some changes like create view whose changes will not be\nreplicated the upgrade fails.\n\n3) All these includes are not required except for \"logical.h\"\n--- a/src/backend/utils/adt/pg_upgrade_support.c\n+++ b/src/backend/utils/adt/pg_upgrade_support.c\n@@ -11,14 +11,20 @@\n\n #include \"postgres.h\"\n\n+#include \"access/xlogutils.h\"\n+#include \"access/xlog_internal.h\"\n #include \"catalog/binary_upgrade.h\"\n #include \"catalog/heap.h\"\n #include \"catalog/namespace.h\"\n #include \"catalog/pg_type.h\"\n #include \"commands/extension.h\"\n+#include \"funcapi.h\"\n #include \"miscadmin.h\"\n+#include \"replication/logical.h\"\n+#include \"replication/slot.h\"\n #include \"utils/array.h\"\n #include \"utils/builtins.h\"\n+#include \"utils/pg_lsn.h\"\n\n4) We could print two_phase as true/false instead of 0/1:\n+static void\n+print_slot_infos(LogicalSlotInfoArr *slot_arr)\n+{\n+ /* Quick return if there are no logical slots. */\n+ if (slot_arr->nslots == 0)\n+ return;\n+\n+ pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\n+\n+ for (int slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\n+ {\n+ LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\n+\n+ pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\",\ntwo_phase: %d\",\n+ slot_info->slotname,\n+ slot_info->plugin,\n+ slot_info->two_phase);\n+ }\n+}\n\n5) test passes without the below, maybe this is not required:\n+# 2. Consume WAL records to avoid another type of upgrade failure. 
It will be\n+# tested in subsequent cases.\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT count(*) FROM\npg_logical_slot_get_changes('test_slot1', NULL, NULL);\"\n+);\n\n6) This message \"run of pg_upgrade of old cluster with idle\nreplication slots\" seems wrong:\n+# pg_upgrade will fail because the slot still has unconsumed WAL records\n+command_checks_all(\n+ [\n+ 'pg_upgrade', '--no-sync',\n+ '-d', $old_publisher->data_dir,\n+ '-D', $new_publisher->data_dir,\n+ '-b', $bindir,\n+ '-B', $bindir,\n+ '-s', $new_publisher->host,\n+ '-p', $old_publisher->port,\n+ '-P', $new_publisher->port,\n+ $mode,\n+ ],\n+ 1,\n+ [\n+ qr/Your installation contains invalid logical\nreplication slots./\n+ ],\n+ [qr//],\n+ 'run of pg_upgrade of old cluster with idle replication slots');\n+ok( -d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\n+ \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\n\n7) You could run pgindent and pgperlytidy, it shows there are few\nissues present with the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 16 Oct 2023 20:28:28 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for reviewing! PSA new version.\r\n\r\n> \r\n> Yeah, I think introducing additional complexity unless it is really\r\n> required sounds a bit scary to me as well. BTW, please find attached\r\n> some cosmetic changes.\r\n\r\nBasically LGTM, but below part was conflicted with Bharath's comment [1].\r\n\r\n```\r\n@@ -1607,7 +1605,7 @@ check_old_cluster_for_valid_slots(bool live_check)\r\n \t\tfclose(script);\r\n \r\n \t\tpg_log(PG_REPORT, \"fatal\");\r\n-\t\tpg_fatal(\"Your installation contains logical replication slots that cannot be upgraded.\\n\"\r\n+\t\tpg_fatal(\"Your installation contains invalid logical replication slots.\\n\"\r\n```\r\n\r\nHow about \" Your installation contains logical replication slots that can't be upgraded.\"?\r\n\r\n> One minor additional comment:\r\n> +# Initialize subscriber cluster\r\n> +my $subscriber = PostgreSQL::Test::Cluster->new('subscriber');\r\n> +$subscriber->init(allows_streaming => 'logical');\r\n> \r\n> Why do we need to set wal_level as logical for subscribers?\r\n\r\nIt is not mandatory. The line was copied from tests in src/test/subscription.\r\nRemoved the setting from my patch. I felt that it could be removed from other\r\npatches. I will fork new thread and post the patch.\r\n\r\n\r\nAlso, I did some improvements based on the v50, basically for tests.\r\n\r\n1. Test file was refactored. pg_uprade was executed many times in the test so the\r\n test time was increasing. Below refactorings were done.\r\n\r\n===\r\na. Checks for both transactional and non-transactional changes were done at the\r\n same time.\r\nb. Removed the dry-run test. It did not improve the coverage.\r\nc. Removed the wal_level test. Other tests like subscriptions and test_decoding\r\n do not contain test for GUCs, so I thought it could be acceptable. Removing\r\n all the GUC test (for max_replication_slots) might be risky, so it was remained.\r\n===\r\n\r\n2. Supported the cross-version checks. If an environment variable \"oldinstall\"\r\n is set, use the binary as old cluster. If the specified one is PG16-, the\r\n test verifies that logical replication slots would not be migrated.\r\n 002_pg_upgrade.pl requires that $ENV(olddump) must be also defined, but it\r\n is not needed for our test. I tried to support from PG9.2, which is the oldest\r\n version for Xupgrade test [2]. You can see 0002 patch for it.\r\n IIUC pg_create_logical_replication_slot() can be available since PG9.4, so tests\r\n will be skipped if older executables are specified, like:\r\n\r\n```\r\n$ oldinstall=/home/hayato/older/pg92/ make check PROVE_TESTS='t/003_upgrade_logical_replication_slots.pl'\r\n...\r\n# +++ tap check in src/bin/pg_upgrade +++\r\nt/003_upgrade_logical_replication_slots.pl .. skipped: Logical replication slots can be available since PG9.4\r\nFiles=1, Tests=0, 0 wallclock secs ( 0.03 usr 0.00 sys + 0.08 cusr 0.02 csys = 0.13 CPU)\r\nResult: NOTESTS\r\n```\r\n\r\n[1]: https://www.postgresql.org/message-id/CALj2ACXp%2BLXioY_%3D9mboEbLD--4c4nnpJCZ%2Bj4fckBdSOQhENA%40mail.gmail.com\r\n[2]: https://github.com/PGBuildFarm/client-code/releases#:~:text=support%20for%20testing%20cross%20version%20upgrade%20extended%20back%20to%209.2\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 17 Oct 2023 12:15:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for reviewing! New version can be available in [1].\r\n\r\n> 1) Should this:\r\n> +# Copyright (c) 2023, PostgreSQL Global Development Group\r\n> +\r\n> +# Tests for upgrading replication slots\r\n> +\r\n> be:\r\n> \"Tests for upgrading logical replication slots\"\r\n\r\nFixed.\r\n\r\n> 2) This statement is not entirely true:\r\n> + <listitem>\r\n> + <para>\r\n> + The old cluster has replicated all the changes to subscribers.\r\n> + </para>\r\n> \r\n> If we have some changes like shutdown_checkpoint the upgrade passes,\r\n> if we have some changes like create view whose changes will not be\r\n> replicated the upgrade fails.\r\n\r\nHmm, I felt current description seems sufficient, but how about the below?\r\n\"The old cluster has replicated all the transactions and logical decoding\r\n messages to subscribers.\"\r\n\r\n> 3) All these includes are not required except for \"logical.h\"\r\n> --- a/src/backend/utils/adt/pg_upgrade_support.c\r\n> +++ b/src/backend/utils/adt/pg_upgrade_support.c\r\n> @@ -11,14 +11,20 @@\r\n> \r\n> #include \"postgres.h\"\r\n> \r\n> +#include \"access/xlogutils.h\"\r\n> +#include \"access/xlog_internal.h\"\r\n> #include \"catalog/binary_upgrade.h\"\r\n> #include \"catalog/heap.h\"\r\n> #include \"catalog/namespace.h\"\r\n> #include \"catalog/pg_type.h\"\r\n> #include \"commands/extension.h\"\r\n> +#include \"funcapi.h\"\r\n> #include \"miscadmin.h\"\r\n> +#include \"replication/logical.h\"\r\n> +#include \"replication/slot.h\"\r\n> #include \"utils/array.h\"\r\n> #include \"utils/builtins.h\"\r\n> +#include \"utils/pg_lsn.h\"\r\n\r\nI preferred to include all the needed items in each C files, but removed.\r\n\r\n> 4) We could print two_phase as true/false instead of 0/1:\r\n> +static void\r\n> +print_slot_infos(LogicalSlotInfoArr *slot_arr)\r\n> +{\r\n> + /* Quick return if there are no logical slots. */\r\n> + if (slot_arr->nslots == 0)\r\n> + return;\r\n> +\r\n> + pg_log(PG_VERBOSE, \"Logical replication slots within the database:\");\r\n> +\r\n> + for (int slotnum = 0; slotnum < slot_arr->nslots; slotnum++)\r\n> + {\r\n> + LogicalSlotInfo *slot_info = &slot_arr->slots[slotnum];\r\n> +\r\n> + pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\",\r\n> two_phase: %d\",\r\n> + slot_info->slotname,\r\n> + slot_info->plugin,\r\n> + slot_info->two_phase);\r\n> + }\r\n> +}\r\n\r\nFixed.\r\n\r\n> 5) test passes without the below, maybe this is not required:\r\n> +# 2. Consume WAL records to avoid another type of upgrade failure. It will be\r\n> +# tested in subsequent cases.\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM\r\n> pg_logical_slot_get_changes('test_slot1', NULL, NULL);\"\r\n> +);\r\n\r\nThis part is removed because of the refactoring.\r\n\r\n> 6) This message \"run of pg_upgrade of old cluster with idle\r\n> replication slots\" seems wrong:\r\n> +# pg_upgrade will fail because the slot still has unconsumed WAL records\r\n> +command_checks_all(\r\n> + [\r\n> + 'pg_upgrade', '--no-sync',\r\n> + '-d', $old_publisher->data_dir,\r\n> + '-D', $new_publisher->data_dir,\r\n> + '-b', $bindir,\r\n> + '-B', $bindir,\r\n> + '-s', $new_publisher->host,\r\n> + '-p', $old_publisher->port,\r\n> + '-P', $new_publisher->port,\r\n> + $mode,\r\n> + ],\r\n> + 1,\r\n> + [\r\n> + qr/Your installation contains invalid logical\r\n> replication slots./\r\n> + ],\r\n> + [qr//],\r\n> + 'run of pg_upgrade of old cluster with idle replication slots');\r\n> +ok( -d $new_publisher->data_dir . 
\"/pg_upgrade_output.d\",\r\n> + \"pg_upgrade_output.d/ not removed after pg_upgrade failure\");\r\n\r\nRephased.\r\n\r\n> 7) You could run pgindent and pgperlytidy, it shows there are few\r\n> issues present with the patch.\r\n\r\nI ran both.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866AC8A7694113BCBE0A71EF5D6A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 17 Oct 2023 12:18:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some review comments for v51-0001\n\n======\nsrc/bin/pg_upgrade/check.c\n\n0.\n+check_old_cluster_for_valid_slots(bool live_check)\n+{\n+ char output_path[MAXPGPATH];\n+ FILE *script = NULL;\n+\n+ prep_status(\"Checking for valid logical replication slots\");\n+\n+ snprintf(output_path, sizeof(output_path), \"%s/%s\",\n+ log_opts.basedir,\n+ \"invalid_logical_relication_slots.txt\");\n\n0a\ntypo /invalid_logical_relication_slots/invalid_logical_replication_slots/\n\n~\n\n0b.\nSince the non-upgradable slots are not strictly \"invalid\", is this an\nappropriate filename for the bad ones?\n\nBut I don't have very good alternatives. Maybe:\n- non_upgradable_logical_replication_slots.txt\n- problem_logical_replication_slots.txt\n\n======\nsrc/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n\n1.\n+# ------------------------------\n+# TEST: Confirm pg_upgrade fails when wrong GUC is set on new cluster\n+#\n+# There are two requirements for GUCs - wal_level and max_replication_slots,\n+# but only max_replication_slots will be tested here. This is because to\n+# reduce the execution time of the test.\n\n\nSUGGESTION\n# TEST: Confirm pg_upgrade fails when the new cluster has wrong GUC values.\n#\n# Two GUCs are required - 'wal_level' and 'max_replication_slots' - but to\n# reduce the test execution time, only 'max_replication_slots' is tested here.\n\n~~~\n\n2.\n+# Preparations for the subsequent test:\n+# 1. Create two slots on the old cluster\n+$old_publisher->start;\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT pg_create_logical_replication_slot('test_slot1',\n'test_decoding', false, true);\"\n+);\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT pg_create_logical_replication_slot('test_slot2',\n'test_decoding', false, true);\"\n+);\n\n\nCan't you combine those SQL in the same $old_publisher->safe_psql.\n\n~~~\n\n3.\n+# Clean up\n+rmtree($new_publisher->data_dir . \"/pg_upgrade_output.d\");\n+# Set max_replication_slots to the same value as the number of slots. Both of\n+# slots will be used for subsequent tests.\n+$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\n\nThe code doesn't seem to match the comment - is this correct? The\nold_publisher created 2 slots, so why are you setting new_publisher\n\"max_replication_slots = 1\" again?\n\n~~~\n\n4.\n+# Preparations for the subsequent test:\n+# 1. Generate extra WAL records. Because these WAL records do not get consumed\n+# it will cause the upcoming pg_upgrade test to fail.\n+$old_publisher->start;\n+$old_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\");\n+\n+# 2. Advance the slot test_slot2 up to the current WAL location\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\n+\n+# 3. Emit a non-transactional message. test_slot2 detects the message so that\n+# this slot will be also reported by upcoming pg_upgrade.\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n'This is a non-transactional message');\"\n+);\n\n\nI felt this test would be clearer if you emphasised the state of the\ntest_slot1 also. e.g.\n\n4a.\nBEFORE\n+# 1. Generate extra WAL records. Because these WAL records do not get consumed\n+# it will cause the upcoming pg_upgrade test to fail.\n\nSUGGESTION\n# 1. Generate extra WAL records. At this point neither test_slot1 nor test_slot2\n# has consumed them.\n\n~\n\n4b.\nBEFORE\n+# 2. 
Advance the slot test_slot2 up to the current WAL location\n\nSUGGESTION\n# 2. Advance the slot test_slot2 up to the current WAL location, but test_slot2\n# still has unconsumed WAL records.\n\n~~~\n\n5.\n+# pg_upgrade will fail because the slot still has unconsumed WAL records\n+command_checks_all(\n\n/because the slot still has/because there are slots still having/\n\n~~~\n\n6.\n+ [qr//],\n+ 'run of pg_upgrade of old cluster with slot having unconsumed WAL records'\n+);\n\n/slot/slots/\n\n~~~\n\n7.\n+# And check the content. Both of slots must be reported that they have\n+# unconsumed WALs after confirmed_flush_lsn.\n\nSUGGESTION\n# Check the file content. Both slots should be reporting that they have\n# unconsumed WAL records.\n\n\n~~~\n\n8.\n+# Preparations for the subsequent test:\n+# 1. Setup logical replication\n+my $old_connstr = $old_publisher->connstr . ' dbname=postgres';\n+\n+$old_publisher->start;\n+\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT * FROM pg_drop_replication_slot('test_slot1');\");\n+$old_publisher->safe_psql('postgres',\n+ \"SELECT * FROM pg_drop_replication_slot('test_slot2');\");\n+\n+$old_publisher->safe_psql('postgres',\n+ \"CREATE PUBLICATION regress_pub FOR ALL TABLES;\");\n\n\n8a.\n/Setup logical replication/Setup logical replication (first, cleanup\nslots from the previous tests)/\n\n~\n\n8b.\nCan't you combine all those SQL in the same $old_publisher->safe_psql.\n\n~~~\n\n9.\n+\n+# Actual run, successful upgrade is expected\n+command_ok(\n+ [\n+ 'pg_upgrade', '--no-sync',\n+ '-d', $old_publisher->data_dir,\n+ '-D', $new_publisher->data_dir,\n+ '-b', $bindir,\n+ '-B', $bindir,\n+ '-s', $new_publisher->host,\n+ '-p', $old_publisher->port,\n+ '-P', $new_publisher->port,\n+ $mode,\n+ ],\n+ 'run of pg_upgrade of old cluster');\n\nNow that the \"Dry run\" part is removed, it seems unnecessary to say\n\"Actual run\" for this part.\n\n\nSUGGESTION\n# pg_upgrade should be successful.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 18 Oct 2023 13:01:17 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Oct 18, 2023 at 7:31 AM Peter Smith <[email protected]> wrote:\n>\n> ======\n> src/bin/pg_upgrade/check.c\n>\n> 0.\n> +check_old_cluster_for_valid_slots(bool live_check)\n> +{\n> + char output_path[MAXPGPATH];\n> + FILE *script = NULL;\n> +\n> + prep_status(\"Checking for valid logical replication slots\");\n> +\n> + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> + log_opts.basedir,\n> + \"invalid_logical_relication_slots.txt\");\n>\n> 0a\n> typo /invalid_logical_relication_slots/invalid_logical_replication_slots/\n>\n> ~\n>\n> 0b.\n> Since the non-upgradable slots are not strictly \"invalid\", is this an\n> appropriate filename for the bad ones?\n>\n> But I don't have very good alternatives. Maybe:\n> - non_upgradable_logical_replication_slots.txt\n> - problem_logical_replication_slots.txt\n>\n\nI prefer the current naming. I think 'invalid' here indicates both\ntypes of slots that are invalidated by the checkpointer and those that\nhave pending WAL to be consumed.\n\n> ======\n> src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n>\n> 1.\n> +# ------------------------------\n> +# TEST: Confirm pg_upgrade fails when wrong GUC is set on new cluster\n> +#\n> +# There are two requirements for GUCs - wal_level and max_replication_slots,\n> +# but only max_replication_slots will be tested here. This is because to\n> +# reduce the execution time of the test.\n>\n>\n> SUGGESTION\n> # TEST: Confirm pg_upgrade fails when the new cluster has wrong GUC values.\n> #\n> # Two GUCs are required - 'wal_level' and 'max_replication_slots' - but to\n> # reduce the test execution time, only 'max_replication_slots' is tested here.\n>\n\nI think we don't need the second part of the comment: \"Two GUCs ...\".\nIdeally, we should test each parameter's invalid value but that could\nbe costly, so I think it is okay to test a few of them.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Oct 2023 08:53:20 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some comments for the patch v51-0002\n\n======\nsrc/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n\n1.\n+# Set max_wal_senders to a lower value if the old cluster is prior to PG12.\n+# Such clusters regard max_wal_senders as part of max_connections, but the\n+# current TAP tester sets these GUCs to the same value.\n+if ($old_publisher->pg_version < 12)\n+{\n+ $old_publisher->append_conf('postgresql.conf', \"max_wal_senders = 5\");\n+}\n\n1a.\nI was initially unsure what the above comment meant -- thanks for the\noffline explanation.\n\nSUGGESTION\nThe TAP Cluster.pm assigns default 'max_wal_senders' and\n'max_connections' to the same value (10) but PG12 and prior considered\nmax_walsenders as a subset of max_connections, so setting the same\nvalue will fail.\n\n~\n\n1b.\nI also felt it is better to explicitly set both values in the < PG12\nconfiguration because otherwise, you are still assuming knowledge that\nthe TAP default max_connections is 10.\n\nSUGGESTION\n$old_publisher->append_conf('postgresql.conf', qq{\nmax_wal_senders = 5\nmax_connections = 10\n});\n\n~~~\n\n2.\n+# Switch workloads depend on the major version of the old cluster. Upgrading\n+# logical replication slots has been supported since PG17.\n+if ($old_publisher->pg_version <= 16)\n+{\n+ test_for_16_and_prior($old_publisher, $new_publisher, $mode);\n+}\n+else\n+{\n+ test_for_17_and_later($old_publisher, $new_publisher, $mode);\n+}\n\nIMO it is less confusing to have fewer version numbers floating around\nin comments and names and code. So instead of referring to 16 and 17,\nhow about just referring to 17 everywhere?\n\nFor example\n\nSUGGESTION\n# Test according to the major version of the old cluster.\n# Upgrading logical replication slots has been supported only since PG17.\n\nif ($old_publisher->pg_version >= 17)\n{\n test_upgrade_from_PG17_and_later($old_publisher, $new_publisher, $mode);\n}\nelse\n{\n test_upgrade_from_pre_PG17($old_publisher, $new_publisher, $mode);\n}\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 18 Oct 2023 17:06:28 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\nNote that 0001 and 0002 are combined into one patch.\r\n\r\n> Here are some review comments for v51-0001\r\n> \r\n> ======\r\n> src/bin/pg_upgrade/check.c\r\n> \r\n> 0.\r\n> +check_old_cluster_for_valid_slots(bool live_check)\r\n> +{\r\n> + char output_path[MAXPGPATH];\r\n> + FILE *script = NULL;\r\n> +\r\n> + prep_status(\"Checking for valid logical replication slots\");\r\n> +\r\n> + snprintf(output_path, sizeof(output_path), \"%s/%s\",\r\n> + log_opts.basedir,\r\n> + \"invalid_logical_relication_slots.txt\");\r\n> \r\n> 0a\r\n> typo /invalid_logical_relication_slots/invalid_logical_replication_slots/\r\n\r\nFixed.\r\n\r\n> 0b.\r\n> Since the non-upgradable slots are not strictly \"invalid\", is this an\r\n> appropriate filename for the bad ones?\r\n> \r\n> But I don't have very good alternatives. Maybe:\r\n> - non_upgradable_logical_replication_slots.txt\r\n> - problem_logical_replication_slots.txt\r\n\r\nPer discussion [1], I kept current style.\r\n\r\n> src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\r\n> \r\n> 1.\r\n> +# ------------------------------\r\n> +# TEST: Confirm pg_upgrade fails when wrong GUC is set on new cluster\r\n> +#\r\n> +# There are two requirements for GUCs - wal_level and max_replication_slots,\r\n> +# but only max_replication_slots will be tested here. This is because to\r\n> +# reduce the execution time of the test.\r\n> \r\n> \r\n> SUGGESTION\r\n> # TEST: Confirm pg_upgrade fails when the new cluster has wrong GUC values.\r\n> #\r\n> # Two GUCs are required - 'wal_level' and 'max_replication_slots' - but to\r\n> # reduce the test execution time, only 'max_replication_slots' is tested here.\r\n\r\nFirst part was fixed. Second part was removed per [1].\r\n\r\n> 2.\r\n> +# Preparations for the subsequent test:\r\n> +# 1. Create two slots on the old cluster\r\n> +$old_publisher->start;\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT pg_create_logical_replication_slot('test_slot1',\r\n> 'test_decoding', false, true);\"\r\n> +);\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT pg_create_logical_replication_slot('test_slot2',\r\n> 'test_decoding', false, true);\"\r\n> +);\r\n> \r\n> \r\n> Can't you combine those SQL in the same $old_publisher->safe_psql.\r\n\r\nCombined.\r\n\r\n> 3.\r\n> +# Clean up\r\n> +rmtree($new_publisher->data_dir . \"/pg_upgrade_output.d\");\r\n> +# Set max_replication_slots to the same value as the number of slots. Both of\r\n> +# slots will be used for subsequent tests.\r\n> +$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\r\n> \r\n> The code doesn't seem to match the comment - is this correct? The\r\n> old_publisher created 2 slots, so why are you setting new_publisher\r\n> \"max_replication_slots = 1\" again?\r\n\r\nFixed to \"max_replication_slots = 2\" Note that previous test worked well because\r\nGUC checking on new cluster is done after checking the status of slots.\r\n\r\n> 4.\r\n> +# Preparations for the subsequent test:\r\n> +# 1. Generate extra WAL records. Because these WAL records do not get\r\n> consumed\r\n> +# it will cause the upcoming pg_upgrade test to fail.\r\n> +$old_publisher->start;\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\");\r\n> +\r\n> +# 2. 
Advance the slot test_slot2 up to the current WAL location\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\r\n> +\r\n> +# 3. Emit a non-transactional message. test_slot2 detects the message so that\r\n> +# this slot will be also reported by upcoming pg_upgrade.\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\r\n> 'This is a non-transactional message');\"\r\n> +);\r\n> \r\n> \r\n> I felt this test would be clearer if you emphasised the state of the\r\n> test_slot1 also. e.g.\r\n> \r\n> 4a.\r\n> BEFORE\r\n> +# 1. Generate extra WAL records. Because these WAL records do not get\r\n> consumed\r\n> +# it will cause the upcoming pg_upgrade test to fail.\r\n> \r\n> SUGGESTION\r\n> # 1. Generate extra WAL records. At this point neither test_slot1 nor test_slot2\r\n> # has consumed them.\r\n\r\nFixed.\r\n\r\n> 4b.\r\n> BEFORE\r\n> +# 2. Advance the slot test_slot2 up to the current WAL location\r\n> \r\n> SUGGESTION\r\n> # 2. Advance the slot test_slot2 up to the current WAL location, but test_slot2\r\n> # still has unconsumed WAL records.\r\n\r\nIIUC, test_slot2 is caught up by pg_replication_slot_advance('test_slot2'). I think \r\n\"but test_slot1 still has unconsumed WAL records.\" is appropriate. Fixed.\r\n\r\n> 5.\r\n> +# pg_upgrade will fail because the slot still has unconsumed WAL records\r\n> +command_checks_all(\r\n> \r\n> /because the slot still has/because there are slots still having/\r\n\r\nFixed.\r\n\r\n> 6.\r\n> + [qr//],\r\n> + 'run of pg_upgrade of old cluster with slot having unconsumed WAL records'\r\n> +);\r\n> \r\n> /slot/slots/\r\n\r\nFixed.\r\n\r\n> 7.\r\n> +# And check the content. Both of slots must be reported that they have\r\n> +# unconsumed WALs after confirmed_flush_lsn.\r\n> \r\n> SUGGESTION\r\n> # Check the file content. Both slots should be reporting that they have\r\n> # unconsumed WAL records.\r\n\r\nFixed.\r\n\r\n> \r\n> 8.\r\n> +# Preparations for the subsequent test:\r\n> +# 1. Setup logical replication\r\n> +my $old_connstr = $old_publisher->connstr . 
' dbname=postgres';\r\n> +\r\n> +$old_publisher->start;\r\n> +\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT * FROM pg_drop_replication_slot('test_slot1');\");\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"SELECT * FROM pg_drop_replication_slot('test_slot2');\");\r\n> +\r\n> +$old_publisher->safe_psql('postgres',\r\n> + \"CREATE PUBLICATION regress_pub FOR ALL TABLES;\");\r\n> \r\n> \r\n> 8a.\r\n> /Setup logical replication/Setup logical replication (first, cleanup\r\n> slots from the previous tests)/\r\n\r\nFixed.\r\n\r\n> 8b.\r\n> Can't you combine all those SQL in the same $old_publisher->safe_psql.\r\n\r\nCombined.\r\n\r\n> 9.\r\n> +\r\n> +# Actual run, successful upgrade is expected\r\n> +command_ok(\r\n> + [\r\n> + 'pg_upgrade', '--no-sync',\r\n> + '-d', $old_publisher->data_dir,\r\n> + '-D', $new_publisher->data_dir,\r\n> + '-b', $bindir,\r\n> + '-B', $bindir,\r\n> + '-s', $new_publisher->host,\r\n> + '-p', $old_publisher->port,\r\n> + '-P', $new_publisher->port,\r\n> + $mode,\r\n> + ],\r\n> + 'run of pg_upgrade of old cluster');\r\n> \r\n> Now that the \"Dry run\" part is removed, it seems unnecessary to say\r\n> \"Actual run\" for this part.\r\n> \r\n> \r\n> SUGGESTION\r\n> # pg_upgrade should be successful.\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1%2BAHSWPs2_jn%3DftJKRqz-NXU6o%3DrPQ3f%3DH-gcPsgpPFrw%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 18 Oct 2023 09:25:38 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! New patch is available in [1].\r\n\r\n> ======\r\n> src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\r\n> \r\n> 1.\r\n> +# Set max_wal_senders to a lower value if the old cluster is prior to PG12.\r\n> +# Such clusters regard max_wal_senders as part of max_connections, but the\r\n> +# current TAP tester sets these GUCs to the same value.\r\n> +if ($old_publisher->pg_version < 12)\r\n> +{\r\n> + $old_publisher->append_conf('postgresql.conf', \"max_wal_senders = 5\");\r\n> +}\r\n> \r\n> 1a.\r\n> I was initially unsure what the above comment meant -- thanks for the\r\n> offline explanation.\r\n> \r\n> SUGGESTION\r\n> The TAP Cluster.pm assigns default 'max_wal_senders' and\r\n> 'max_connections' to the same value (10) but PG12 and prior considered\r\n> max_walsenders as a subset of max_connections, so setting the same\r\n> value will fail.\r\n\r\nFixed.\r\n\r\n> 1b.\r\n> I also felt it is better to explicitly set both values in the < PG12\r\n> configuration because otherwise, you are still assuming knowledge that\r\n> the TAP default max_connections is 10.\r\n> \r\n> SUGGESTION\r\n> $old_publisher->append_conf('postgresql.conf', qq{\r\n> max_wal_senders = 5\r\n> max_connections = 10\r\n> });\r\n\r\nFixed.\r\n\r\n> 2.\r\n> +# Switch workloads depend on the major version of the old cluster. Upgrading\r\n> +# logical replication slots has been supported since PG17.\r\n> +if ($old_publisher->pg_version <= 16)\r\n> +{\r\n> + test_for_16_and_prior($old_publisher, $new_publisher, $mode);\r\n> +}\r\n> +else\r\n> +{\r\n> + test_for_17_and_later($old_publisher, $new_publisher, $mode);\r\n> +}\r\n> \r\n> IMO it is less confusing to have fewer version numbers floating around\r\n> in comments and names and code. So instead of referring to 16 and 17,\r\n> how about just referring to 17 everywhere?\r\n> \r\n> For example\r\n> \r\n> SUGGESTION\r\n> # Test according to the major version of the old cluster.\r\n> # Upgrading logical replication slots has been supported only since PG17.\r\n> \r\n> if ($old_publisher->pg_version >= 17)\r\n> {\r\n> test_upgrade_from_PG17_and_later($old_publisher, $new_publisher, $mode);\r\n> }\r\n> else\r\n> {\r\n> test_upgrade_from_pre_PG17($old_publisher, $new_publisher, $mode);\r\n> }\r\n\r\nIn HEAD code, the pg_version seems \"17devel\". The string seemed smaller than 17 for Perl.\r\n(i.e., \"17devel\" >= 17 means false)\r\nFor the purpose of comparing only the major version, pg_version->major was used.\r\n\r\nAlso, I removed the support for ~PG9.4. I cannot find descriptions, but according to [2],\r\nCluster.pm does not support such binaries.\r\n(cluster_name is set when the server process is started, but the GUC has been added in PG9.5)\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB5870EBEBC89F5224F6B3788CF5D5A%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/YsUrUDrRhUbuU/6k%40paquier.xyz\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 18 Oct 2023 09:27:09 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Here are some review comments for v52-0001\n\n======\nsrc/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n\n1.\n+ # 2. max_replication_slots is set to smaller than the number of slots (2)\n+ # present on the old cluster\n\nSUGGESTION\n2. Set 'max_replication_slots' to be less than the number of slots (2)\npresent on the old cluster.\n\n~~~\n\n2.\n+ # Set max_replication_slots to the same value as the number of slots. Both\n+ # of slots will be used for subsequent tests.\n\nSUGGESTION\nSet 'max_replication_slots' to match the number of slots (2) present\non the old cluster.\nBoth slots will be used for subsequent tests.\n\n~~~\n\n3.\n+ # 3. Emit a non-transactional message. test_slot2 detects the message so\n+ # that this slot will be also reported by upcoming pg_upgrade.\n+ $old_publisher->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n'This is a non-transactional message');\"\n+ );\n\nSUGGESTION\n3. Emit a non-transactional message. This will cause test_slot2 to\ndetect the unconsumed WAL record.\n\n~~~\n\n4.\n+ # Preparations for the subsequent test:\n+ # 1. Generate extra WAL records. At this point neither test_slot1 nor\n+ # test_slot2 has consumed them.\n+ $old_publisher->start;\n+ $old_publisher->safe_psql('postgres',\n+ \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\");\n+\n+ # 2. Advance the slot test_slot2 up to the current WAL location, but\n+ # test_slot1 still has unconsumed WAL records.\n+ $old_publisher->safe_psql('postgres',\n+ \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\n+\n+ # 3. Emit a non-transactional message. test_slot2 detects the message so\n+ # that this slot will be also reported by upcoming pg_upgrade.\n+ $old_publisher->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n'This is a non-transactional message');\"\n+ );\n+\n+ $old_publisher->stop;\n\nAll of the above are sequentially executed on the\nold_publisher->safe_psql, so consider if it is worth combining them\nall in a single call (keeping the comments 1,2,3 separate still)\n\nFor example,\n\n$old_publisher->start;\n$old_publisher->safe_psql('postgres', qq[\n CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\n SELECT pg_replication_slot_advance('test_slot2', NULL);\n SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n'This is a non-transactional message');\n]);\n$old_publisher->stop;\n\n~~~\n\n5.\n+ # Clean up\n+ $subscriber->stop();\n+ $new_publisher->stop();\n\nShould this also drop the 'test_slot1' and 'test_slot2'?\n\n~~~\n\n6.\n+# Verify that logical replication slots cannot be migrated. This function\n+# will be executed when the old cluster is PG16 and prior.\n+sub test_upgrade_from_pre_PG17\n+{\n+ my ($old_publisher, $new_publisher, $mode) = @_;\n+\n+ my $oldbindir = $old_publisher->config_data('--bindir');\n+ my $newbindir = $new_publisher->config_data('--bindir');\n\nSUGGESTION (let's not mention lots of different numbers; just refer to 17)\nThis function will be executed when the old cluster version is prior to PG17.\n\n~~\n\n7.\n+ # Actual run, successful upgrade is expected\n+ command_ok(\n+ [\n+ 'pg_upgrade', '--no-sync',\n+ '-d', $old_publisher->data_dir,\n+ '-D', $new_publisher->data_dir,\n+ '-b', $oldbindir,\n+ '-B', $newbindir,\n+ '-s', $new_publisher->host,\n+ '-p', $old_publisher->port,\n+ '-P', $new_publisher->port,\n+ $mode,\n+ ],\n+ 'run of pg_upgrade of old cluster');\n+\n+ ok( !-d $new_publisher->data_dir . 
\"/pg_upgrade_output.d\",\n+ \"pg_upgrade_output.d/ removed after pg_upgrade success\");\n\n7a.\nThe comment is wrong?\n\nSUGGESTION\n# pg_upgrade should NOT be successful\n\n~\n\n7b.\nThere is a blank line here before the ok() function, but in the other\ntests, there was none. Better to be consistent.\n\n~~~\n\n8.\n+ # Clean up\n+ $new_publisher->stop();\n\nShould this also drop the 'test_slot'?\n\n~~~\n\n9.\n+# The TAP Cluster.pm assigns default 'max_wal_senders' and 'max_connections' to\n+# the same value (10) but PG12 and prior considered max_walsenders as a subset\n+# of max_connections, so setting the same value will fail.\n+if ($old_publisher->pg_version->major < 12)\n+{\n+ $old_publisher->append_conf(\n+ 'postgresql.conf', qq[\n+ max_wal_senders = 5\n+ max_connections = 10\n+ ]);\n+}\n\nIf the comment is correct, then PG12 *and* prior, should be testing\n\"<= 12\", not \"< 12\". right?\n\n~~~\n\n10.\n+# Test according to the major version of the old cluster.\n+# Upgrading logical replication slots has been supported only since PG17.\n+if ($old_publisher->pg_version->major >= 17)\n\nThis comment seems wrong IMO. I think we always running the latest\nversion of pg_upgrade so slot migration is always \"supported\" from now\non. IIUC you intended this comment to be saying something about the\nold_publisher slots.\n\nBEFORE\nUpgrading logical replication slots has been supported only since PG17.\n\nSUGGESTION\nUpgrading logical replication slots from versions older than PG17 is\nnot supported.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 Oct 2023 12:29:56 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, 18 Oct 2023 at 14:55, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> Thank you for reviewing! PSA new version.\n> Note that 0001 and 0002 are combined into one patch.\n>\n> > Here are some review comments for v51-0001\n> >\n> > ======\n> > src/bin/pg_upgrade/check.c\n> >\n> > 0.\n> > +check_old_cluster_for_valid_slots(bool live_check)\n> > +{\n> > + char output_path[MAXPGPATH];\n> > + FILE *script = NULL;\n> > +\n> > + prep_status(\"Checking for valid logical replication slots\");\n> > +\n> > + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> > + log_opts.basedir,\n> > + \"invalid_logical_relication_slots.txt\");\n> >\n> > 0a\n> > typo /invalid_logical_relication_slots/invalid_logical_replication_slots/\n>\n> Fixed.\n>\n> > 0b.\n> > Since the non-upgradable slots are not strictly \"invalid\", is this an\n> > appropriate filename for the bad ones?\n> >\n> > But I don't have very good alternatives. Maybe:\n> > - non_upgradable_logical_replication_slots.txt\n> > - problem_logical_replication_slots.txt\n>\n> Per discussion [1], I kept current style.\n>\n> > src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n> >\n> > 1.\n> > +# ------------------------------\n> > +# TEST: Confirm pg_upgrade fails when wrong GUC is set on new cluster\n> > +#\n> > +# There are two requirements for GUCs - wal_level and max_replication_slots,\n> > +# but only max_replication_slots will be tested here. This is because to\n> > +# reduce the execution time of the test.\n> >\n> >\n> > SUGGESTION\n> > # TEST: Confirm pg_upgrade fails when the new cluster has wrong GUC values.\n> > #\n> > # Two GUCs are required - 'wal_level' and 'max_replication_slots' - but to\n> > # reduce the test execution time, only 'max_replication_slots' is tested here.\n>\n> First part was fixed. Second part was removed per [1].\n>\n> > 2.\n> > +# Preparations for the subsequent test:\n> > +# 1. Create two slots on the old cluster\n> > +$old_publisher->start;\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_create_logical_replication_slot('test_slot1',\n> > 'test_decoding', false, true);\"\n> > +);\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_create_logical_replication_slot('test_slot2',\n> > 'test_decoding', false, true);\"\n> > +);\n> >\n> >\n> > Can't you combine those SQL in the same $old_publisher->safe_psql.\n>\n> Combined.\n>\n> > 3.\n> > +# Clean up\n> > +rmtree($new_publisher->data_dir . \"/pg_upgrade_output.d\");\n> > +# Set max_replication_slots to the same value as the number of slots. Both of\n> > +# slots will be used for subsequent tests.\n> > +$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\n> >\n> > The code doesn't seem to match the comment - is this correct? The\n> > old_publisher created 2 slots, so why are you setting new_publisher\n> > \"max_replication_slots = 1\" again?\n>\n> Fixed to \"max_replication_slots = 2\" Note that previous test worked well because\n> GUC checking on new cluster is done after checking the status of slots.\n>\n> > 4.\n> > +# Preparations for the subsequent test:\n> > +# 1. Generate extra WAL records. Because these WAL records do not get\n> > consumed\n> > +# it will cause the upcoming pg_upgrade test to fail.\n> > +$old_publisher->start;\n> > +$old_publisher->safe_psql('postgres',\n> > + \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\");\n> > +\n> > +# 2. 
Advance the slot test_slot2 up to the current WAL location\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\n> > +\n> > +# 3. Emit a non-transactional message. test_slot2 detects the message so that\n> > +# this slot will be also reported by upcoming pg_upgrade.\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n> > 'This is a non-transactional message');\"\n> > +);\n> >\n> >\n> > I felt this test would be clearer if you emphasised the state of the\n> > test_slot1 also. e.g.\n> >\n> > 4a.\n> > BEFORE\n> > +# 1. Generate extra WAL records. Because these WAL records do not get\n> > consumed\n> > +# it will cause the upcoming pg_upgrade test to fail.\n> >\n> > SUGGESTION\n> > # 1. Generate extra WAL records. At this point neither test_slot1 nor test_slot2\n> > # has consumed them.\n>\n> Fixed.\n>\n> > 4b.\n> > BEFORE\n> > +# 2. Advance the slot test_slot2 up to the current WAL location\n> >\n> > SUGGESTION\n> > # 2. Advance the slot test_slot2 up to the current WAL location, but test_slot2\n> > # still has unconsumed WAL records.\n>\n> IIUC, test_slot2 is caught up by pg_replication_slot_advance('test_slot2'). I think\n> \"but test_slot1 still has unconsumed WAL records.\" is appropriate. Fixed.\n>\n> > 5.\n> > +# pg_upgrade will fail because the slot still has unconsumed WAL records\n> > +command_checks_all(\n> >\n> > /because the slot still has/because there are slots still having/\n>\n> Fixed.\n>\n> > 6.\n> > + [qr//],\n> > + 'run of pg_upgrade of old cluster with slot having unconsumed WAL records'\n> > +);\n> >\n> > /slot/slots/\n>\n> Fixed.\n>\n> > 7.\n> > +# And check the content. Both of slots must be reported that they have\n> > +# unconsumed WALs after confirmed_flush_lsn.\n> >\n> > SUGGESTION\n> > # Check the file content. Both slots should be reporting that they have\n> > # unconsumed WAL records.\n>\n> Fixed.\n>\n> >\n> > 8.\n> > +# Preparations for the subsequent test:\n> > +# 1. Setup logical replication\n> > +my $old_connstr = $old_publisher->connstr . 
' dbname=postgres';\n> > +\n> > +$old_publisher->start;\n> > +\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT * FROM pg_drop_replication_slot('test_slot1');\");\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT * FROM pg_drop_replication_slot('test_slot2');\");\n> > +\n> > +$old_publisher->safe_psql('postgres',\n> > + \"CREATE PUBLICATION regress_pub FOR ALL TABLES;\");\n> >\n> >\n> > 8a.\n> > /Setup logical replication/Setup logical replication (first, cleanup\n> > slots from the previous tests)/\n>\n> Fixed.\n>\n> > 8b.\n> > Can't you combine all those SQL in the same $old_publisher->safe_psql.\n>\n> Combined.\n>\n> > 9.\n> > +\n> > +# Actual run, successful upgrade is expected\n> > +command_ok(\n> > + [\n> > + 'pg_upgrade', '--no-sync',\n> > + '-d', $old_publisher->data_dir,\n> > + '-D', $new_publisher->data_dir,\n> > + '-b', $bindir,\n> > + '-B', $bindir,\n> > + '-s', $new_publisher->host,\n> > + '-p', $old_publisher->port,\n> > + '-P', $new_publisher->port,\n> > + $mode,\n> > + ],\n> > + 'run of pg_upgrade of old cluster');\n> >\n> > Now that the \"Dry run\" part is removed, it seems unnecessary to say\n> > \"Actual run\" for this part.\n> >\n> >\n> > SUGGESTION\n> > # pg_upgrade should be successful.\n>\n> Fixed.\n\nFew comments:\n1) We will be able to override the value of max_slot_wal_keep_size by\nusing --new-options like '--new-options \"-c\nmax_slot_wal_keep_size=val\"':\n+ /*\n+ * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n+ * checkpointer process. If WALs required by logical replication slots\n+ * are removed, the slots are unusable. This setting prevents the\n+ * invalidation of slots during the upgrade. We set this option when\n+ * cluster is PG17 or later because logical replication slots\ncan only be\n+ * migrated since then. Besides, max_slot_wal_keep_size is\nadded in PG13.\n+ */\n+ if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n+ appendPQExpBufferStr(&pgoptions, \" -c\nmax_slot_wal_keep_size=-1\");\n\nShould there be a check to throw an error if this option is specified\nor do we need some documentation that this option should not be\nspecified?\n\n2) Because we are able to override max_slot_wal_keep_size there is a\nchance of slot getting invalidated and Assert being hit:\n+ /*\n+ * The logical replication slots shouldn't be invalidated as\n+ * max_slot_wal_keep_size GUC is set to -1 during the upgrade.\n+ *\n+ * The following is just a sanity check.\n+ */\n+ if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n+ {\n+ Assert(max_slot_wal_keep_size_mb == -1);\n+ elog(ERROR, \"replication slots must not be\ninvalidated during the upgrade\");\n+ }\n\n3) File 003_logical_replication_slots.pl is now changed to\n003_upgrade_logical_replication_slots.pl, it should be change here too\naccordingly:\nindex 5834513add..815d1a7ca1 100644\n--- a/src/bin/pg_upgrade/Makefile\n+++ b/src/bin/pg_upgrade/Makefile\n@@ -3,6 +3,9 @@\n PGFILEDESC = \"pg_upgrade - an in-place binary upgrade utility\"\n PGAPPICON = win32\n\n+# required for 003_logical_replication_slots.pl\n+EXTRA_INSTALL=contrib/test_decoding\n+\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Oct 2023 08:32:00 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wednesday, October 18, 2023 5:26 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Thank you for reviewing! PSA new version.\r\n> Note that 0001 and 0002 are combined into one patch.\r\n\r\nThanks for updating the patch, here are few comments for the test.\r\n\r\n1.\r\n\r\n>\r\n# The TAP Cluster.pm assigns default 'max_wal_senders' and 'max_connections' to\r\n# the same value (10) but PG12 and prior considered max_walsenders as a subset\r\n# of max_connections, so setting the same value will fail.\r\nif ($old_publisher->pg_version->major < 12)\r\n{\r\n\t$old_publisher->append_conf(\r\n\t\t'postgresql.conf', qq[\r\n\tmax_wal_senders = 5\r\n\tmax_connections = 10\r\n\t]);\r\n>\r\n\r\nI think we already set max_wal_senders to 5 in init() function(in Cluster.pm),\r\nso is this necessary ? And 002_pg_upgrade.pl doesn't seems set this.\r\n\r\n2.\r\n\r\n\t\tSELECT pg_create_logical_replication_slot('test_slot1', 'test_decoding', false, true);\r\n\t\tSELECT pg_create_logical_replication_slot('test_slot2', 'test_decoding', false, true);\r\n\r\nI think we don't need to set the last two parameters here as we don't check\r\nthese info in the tests.\r\n\r\n3.\r\n\r\n# Set extra params if cross-version checks are required. This is needed to\r\n# avoid using previously initdb'd cluster\r\nif (defined($ENV{oldinstall}))\r\n{\r\n\tmy @initdb_params = ();\r\n\tpush @initdb_params, ('--encoding', 'UTF-8');\r\n\tpush @initdb_params, ('--locale', 'C');\r\n\r\nI am not sure I understand the comment, would it be possible provide a bit more\r\nexplanation about the purpose of this setting ? And I see 002_pg_upgrade always\r\nhave these setting even if oldinstall is not defined, so shall we follow the\r\nsame ?\r\n\r\n4.\r\n\r\n+\tcommand_ok(\r\n+\t\t[\r\n+\t\t\t'pg_upgrade', '--no-sync',\r\n+\t\t\t'-d', $old_publisher->data_dir,\r\n+\t\t\t'-D', $new_publisher->data_dir,\r\n+\t\t\t'-b', $oldbindir,\r\n+\t\t\t'-B', $newbindir,\r\n+\t\t\t'-s', $new_publisher->host,\r\n+\t\t\t'-p', $old_publisher->port,\r\n+\t\t\t'-P', $new_publisher->port,\r\n+\t\t\t$mode,\r\n+\t\t],\r\n\r\nI think all the pg_upgrade commands in the test are the same, so we can save the cmd\r\nin a variable and pass them to command_xx(). I think it can save some effort to\r\ncheck the difference of each command and can also reduce some codes.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Thu, 19 Oct 2023 04:46:07 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "> Few comments:\n> 1) We will be able to override the value of max_slot_wal_keep_size by\n> using --new-options like '--new-options \"-c\n> max_slot_wal_keep_size=val\"':\n> + /*\n> + * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n> + * checkpointer process. If WALs required by logical replication slots\n> + * are removed, the slots are unusable. This setting prevents the\n> + * invalidation of slots during the upgrade. We set this option when\n> + * cluster is PG17 or later because logical replication slots\n> can only be\n> + * migrated since then. Besides, max_slot_wal_keep_size is\n> added in PG13.\n> + */\n> + if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n> + appendPQExpBufferStr(&pgoptions, \" -c\n> max_slot_wal_keep_size=-1\");\n>\n> Should there be a check to throw an error if this option is specified\n> or do we need some documentation that this option should not be\n> specified?\n\nI have tested the above scenario. We are able to override the\nmax_slot_wal_keep_size by using '--new-options \"-c\nmax_slot_wal_keep_size=val\"'. And also with some insert statements\nduring pg_upgrade, old WAL file were deleted and logical replication\nslots were invalidated. Since the slots were invalidated replication\nwas not happening after the upgrade.\n\nThanks,\nShlok Kumar Kyal\n\n\n",
"msg_date": "Thu, 19 Oct 2023 11:51:27 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, 18 Oct 2023 at 14:55, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> Thank you for reviewing! PSA new version.\n> Note that 0001 and 0002 are combined into one patch.\n>\n> > Here are some review comments for v51-0001\n> >\n> > ======\n> > src/bin/pg_upgrade/check.c\n> >\n> > 0.\n> > +check_old_cluster_for_valid_slots(bool live_check)\n> > +{\n> > + char output_path[MAXPGPATH];\n> > + FILE *script = NULL;\n> > +\n> > + prep_status(\"Checking for valid logical replication slots\");\n> > +\n> > + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> > + log_opts.basedir,\n> > + \"invalid_logical_relication_slots.txt\");\n> >\n> > 0a\n> > typo /invalid_logical_relication_slots/invalid_logical_replication_slots/\n>\n> Fixed.\n>\n> > 0b.\n> > Since the non-upgradable slots are not strictly \"invalid\", is this an\n> > appropriate filename for the bad ones?\n> >\n> > But I don't have very good alternatives. Maybe:\n> > - non_upgradable_logical_replication_slots.txt\n> > - problem_logical_replication_slots.txt\n>\n> Per discussion [1], I kept current style.\n>\n> > src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n> >\n> > 1.\n> > +# ------------------------------\n> > +# TEST: Confirm pg_upgrade fails when wrong GUC is set on new cluster\n> > +#\n> > +# There are two requirements for GUCs - wal_level and max_replication_slots,\n> > +# but only max_replication_slots will be tested here. This is because to\n> > +# reduce the execution time of the test.\n> >\n> >\n> > SUGGESTION\n> > # TEST: Confirm pg_upgrade fails when the new cluster has wrong GUC values.\n> > #\n> > # Two GUCs are required - 'wal_level' and 'max_replication_slots' - but to\n> > # reduce the test execution time, only 'max_replication_slots' is tested here.\n>\n> First part was fixed. Second part was removed per [1].\n>\n> > 2.\n> > +# Preparations for the subsequent test:\n> > +# 1. Create two slots on the old cluster\n> > +$old_publisher->start;\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_create_logical_replication_slot('test_slot1',\n> > 'test_decoding', false, true);\"\n> > +);\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_create_logical_replication_slot('test_slot2',\n> > 'test_decoding', false, true);\"\n> > +);\n> >\n> >\n> > Can't you combine those SQL in the same $old_publisher->safe_psql.\n>\n> Combined.\n>\n> > 3.\n> > +# Clean up\n> > +rmtree($new_publisher->data_dir . \"/pg_upgrade_output.d\");\n> > +# Set max_replication_slots to the same value as the number of slots. Both of\n> > +# slots will be used for subsequent tests.\n> > +$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\n> >\n> > The code doesn't seem to match the comment - is this correct? The\n> > old_publisher created 2 slots, so why are you setting new_publisher\n> > \"max_replication_slots = 1\" again?\n>\n> Fixed to \"max_replication_slots = 2\" Note that previous test worked well because\n> GUC checking on new cluster is done after checking the status of slots.\n>\n> > 4.\n> > +# Preparations for the subsequent test:\n> > +# 1. Generate extra WAL records. Because these WAL records do not get\n> > consumed\n> > +# it will cause the upcoming pg_upgrade test to fail.\n> > +$old_publisher->start;\n> > +$old_publisher->safe_psql('postgres',\n> > + \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\");\n> > +\n> > +# 2. 
Advance the slot test_slot2 up to the current WAL location\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\n> > +\n> > +# 3. Emit a non-transactional message. test_slot2 detects the message so that\n> > +# this slot will be also reported by upcoming pg_upgrade.\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n> > 'This is a non-transactional message');\"\n> > +);\n> >\n> >\n> > I felt this test would be clearer if you emphasised the state of the\n> > test_slot1 also. e.g.\n> >\n> > 4a.\n> > BEFORE\n> > +# 1. Generate extra WAL records. Because these WAL records do not get\n> > consumed\n> > +# it will cause the upcoming pg_upgrade test to fail.\n> >\n> > SUGGESTION\n> > # 1. Generate extra WAL records. At this point neither test_slot1 nor test_slot2\n> > # has consumed them.\n>\n> Fixed.\n>\n> > 4b.\n> > BEFORE\n> > +# 2. Advance the slot test_slot2 up to the current WAL location\n> >\n> > SUGGESTION\n> > # 2. Advance the slot test_slot2 up to the current WAL location, but test_slot2\n> > # still has unconsumed WAL records.\n>\n> IIUC, test_slot2 is caught up by pg_replication_slot_advance('test_slot2'). I think\n> \"but test_slot1 still has unconsumed WAL records.\" is appropriate. Fixed.\n>\n> > 5.\n> > +# pg_upgrade will fail because the slot still has unconsumed WAL records\n> > +command_checks_all(\n> >\n> > /because the slot still has/because there are slots still having/\n>\n> Fixed.\n>\n> > 6.\n> > + [qr//],\n> > + 'run of pg_upgrade of old cluster with slot having unconsumed WAL records'\n> > +);\n> >\n> > /slot/slots/\n>\n> Fixed.\n>\n> > 7.\n> > +# And check the content. Both of slots must be reported that they have\n> > +# unconsumed WALs after confirmed_flush_lsn.\n> >\n> > SUGGESTION\n> > # Check the file content. Both slots should be reporting that they have\n> > # unconsumed WAL records.\n>\n> Fixed.\n>\n> >\n> > 8.\n> > +# Preparations for the subsequent test:\n> > +# 1. Setup logical replication\n> > +my $old_connstr = $old_publisher->connstr . 
' dbname=postgres';\n> > +\n> > +$old_publisher->start;\n> > +\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT * FROM pg_drop_replication_slot('test_slot1');\");\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT * FROM pg_drop_replication_slot('test_slot2');\");\n> > +\n> > +$old_publisher->safe_psql('postgres',\n> > + \"CREATE PUBLICATION regress_pub FOR ALL TABLES;\");\n> >\n> >\n> > 8a.\n> > /Setup logical replication/Setup logical replication (first, cleanup\n> > slots from the previous tests)/\n>\n> Fixed.\n>\n> > 8b.\n> > Can't you combine all those SQL in the same $old_publisher->safe_psql.\n>\n> Combined.\n>\n> > 9.\n> > +\n> > +# Actual run, successful upgrade is expected\n> > +command_ok(\n> > + [\n> > + 'pg_upgrade', '--no-sync',\n> > + '-d', $old_publisher->data_dir,\n> > + '-D', $new_publisher->data_dir,\n> > + '-b', $bindir,\n> > + '-B', $bindir,\n> > + '-s', $new_publisher->host,\n> > + '-p', $old_publisher->port,\n> > + '-P', $new_publisher->port,\n> > + $mode,\n> > + ],\n> > + 'run of pg_upgrade of old cluster');\n> >\n> > Now that the \"Dry run\" part is removed, it seems unnecessary to say\n> > \"Actual run\" for this part.\n> >\n> >\n> > SUGGESTION\n> > # pg_upgrade should be successful.\n>\n> Fixed.\n\nFew comments:\n1) Even if we comment 3rd point \"Emit a non-transactional message\",\ntest_slot2 still appears in the invalid_logical_replication_slots.txt\nfile. There is something wrong here.\n+ # 2. Advance the slot test_slot2 up to the current WAL location, but\n+ # test_slot1 still has unconsumed WAL records.\n+ $old_publisher->safe_psql('postgres',\n+ \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\n+\n+ # 3. Emit a non-transactional message. test_slot2 detects the message so\n+ # that this slot will be also reported by upcoming pg_upgrade.\n+ $old_publisher->safe_psql('postgres',\n+ \"SELECT count(*) FROM pg_logical_emit_message('false',\n'prefix', 'This is a non-transactional message');\"\n+ );\n\n2) If the test fails here, it is difficult to debug as the\npg_upgrade_output.d directory was removed, so better to keep the\ndirectory as it is this case:\n+ # Check the file content. Both slots should be reporting that they have\n+ # unconsumed WAL records.\n+ like(\n+ slurp_file($slots_filename),\n+ qr/The slot \\\"test_slot1\\\" has not consumed the WAL yet/m,\n+ 'the previous test failed due to unconsumed WALs');\n+ like(\n+ slurp_file($slots_filename),\n+ qr/The slot \\\"test_slot2\\\" has not consumed the WAL yet/m,\n+ 'the previous test failed due to unconsumed WALs');\n+\n+ # Clean up\n+ rmtree($new_publisher->data_dir . \"/pg_upgrade_output.d\");\n\n3) The below could be changed:\n+ # Check the file content. Both slots should be reporting that they have\n+ # unconsumed WAL records.\n+ like(\n+ slurp_file($slots_filename),\n+ qr/The slot \\\"test_slot1\\\" has not consumed the WAL yet/m,\n+ 'the previous test failed due to unconsumed WALs');\n+ like(\n+ slurp_file($slots_filename),\n+ qr/The slot \\\"test_slot2\\\" has not consumed the WAL yet/m,\n+ 'the previous test failed due to unconsumed WALs');\n\nto:\nmy $result = slurp_file($slots_filename);\nis( $result, qq(The slot \"test_slot1\" has not consumed the WAL yet\nThe slot \"test_slot2\" has not consumed the WAL yet\n),\n'the previous test failed due to unconsumed WALs');\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Oct 2023 11:58:12 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
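For reference, which slots still have WAL to consume can be approximated directly on the old cluster by comparing each slot's confirmed_flush_lsn with the current WAL position. This is only an illustration and not part of the patch:

    SELECT slot_name,
           confirmed_flush_lsn,
           pg_current_wal_insert_lsn() AS current_lsn,
           confirmed_flush_lsn < pg_current_wal_insert_lsn() AS has_unconsumed_wal
    FROM pg_replication_slots
    WHERE slot_type = 'logical';

The patch's own check is more precise than this LSN comparison: it decodes the remaining WAL and reports a slot only if there is a record the slot would actually have to process, which is why the test emits a non-transactional message to make test_slot2 reportable.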
{
"msg_contents": "I tested a test scenario:\nI started a new publisher with 'max_replication_slots' parameter set\nto '1' and created a streaming replication with the new publisher as\nprimary node.\nThen I did a pg_upgrade from old publisher to new publisher. The\nupgrade failed with following error:\n\nRestoring logical replication slots in the new cluster\nSQL command failed\nSELECT * FROM pg_catalog.pg_create_logical_replication_slot('test1',\n'pgoutput', false, false);\nERROR: all replication slots are in use\nHINT: Free one or increase max_replication_slots.\n\nFailure, exiting\n\nShould we document that the existing replication slots are taken in\nconsideration while setting 'max_replication_slots' value in the new\npublisher?\n\nThanks\nShlok Kumar Kyal\n\nOn Wed, 18 Oct 2023 at 15:01, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Peter,\n>\n> Thank you for reviewing! PSA new version.\n> Note that 0001 and 0002 are combined into one patch.\n>\n> > Here are some review comments for v51-0001\n> >\n> > ======\n> > src/bin/pg_upgrade/check.c\n> >\n> > 0.\n> > +check_old_cluster_for_valid_slots(bool live_check)\n> > +{\n> > + char output_path[MAXPGPATH];\n> > + FILE *script = NULL;\n> > +\n> > + prep_status(\"Checking for valid logical replication slots\");\n> > +\n> > + snprintf(output_path, sizeof(output_path), \"%s/%s\",\n> > + log_opts.basedir,\n> > + \"invalid_logical_relication_slots.txt\");\n> >\n> > 0a\n> > typo /invalid_logical_relication_slots/invalid_logical_replication_slots/\n>\n> Fixed.\n>\n> > 0b.\n> > Since the non-upgradable slots are not strictly \"invalid\", is this an\n> > appropriate filename for the bad ones?\n> >\n> > But I don't have very good alternatives. Maybe:\n> > - non_upgradable_logical_replication_slots.txt\n> > - problem_logical_replication_slots.txt\n>\n> Per discussion [1], I kept current style.\n>\n> > src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n> >\n> > 1.\n> > +# ------------------------------\n> > +# TEST: Confirm pg_upgrade fails when wrong GUC is set on new cluster\n> > +#\n> > +# There are two requirements for GUCs - wal_level and max_replication_slots,\n> > +# but only max_replication_slots will be tested here. This is because to\n> > +# reduce the execution time of the test.\n> >\n> >\n> > SUGGESTION\n> > # TEST: Confirm pg_upgrade fails when the new cluster has wrong GUC values.\n> > #\n> > # Two GUCs are required - 'wal_level' and 'max_replication_slots' - but to\n> > # reduce the test execution time, only 'max_replication_slots' is tested here.\n>\n> First part was fixed. Second part was removed per [1].\n>\n> > 2.\n> > +# Preparations for the subsequent test:\n> > +# 1. Create two slots on the old cluster\n> > +$old_publisher->start;\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_create_logical_replication_slot('test_slot1',\n> > 'test_decoding', false, true);\"\n> > +);\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_create_logical_replication_slot('test_slot2',\n> > 'test_decoding', false, true);\"\n> > +);\n> >\n> >\n> > Can't you combine those SQL in the same $old_publisher->safe_psql.\n>\n> Combined.\n>\n> > 3.\n> > +# Clean up\n> > +rmtree($new_publisher->data_dir . \"/pg_upgrade_output.d\");\n> > +# Set max_replication_slots to the same value as the number of slots. 
Both of\n> > +# slots will be used for subsequent tests.\n> > +$new_publisher->append_conf('postgresql.conf', \"max_replication_slots = 1\");\n> >\n> > The code doesn't seem to match the comment - is this correct? The\n> > old_publisher created 2 slots, so why are you setting new_publisher\n> > \"max_replication_slots = 1\" again?\n>\n> Fixed to \"max_replication_slots = 2\" Note that previous test worked well because\n> GUC checking on new cluster is done after checking the status of slots.\n>\n> > 4.\n> > +# Preparations for the subsequent test:\n> > +# 1. Generate extra WAL records. Because these WAL records do not get\n> > consumed\n> > +# it will cause the upcoming pg_upgrade test to fail.\n> > +$old_publisher->start;\n> > +$old_publisher->safe_psql('postgres',\n> > + \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\");\n> > +\n> > +# 2. Advance the slot test_slot2 up to the current WAL location\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\n> > +\n> > +# 3. Emit a non-transactional message. test_slot2 detects the message so that\n> > +# this slot will be also reported by upcoming pg_upgrade.\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n> > 'This is a non-transactional message');\"\n> > +);\n> >\n> >\n> > I felt this test would be clearer if you emphasised the state of the\n> > test_slot1 also. e.g.\n> >\n> > 4a.\n> > BEFORE\n> > +# 1. Generate extra WAL records. Because these WAL records do not get\n> > consumed\n> > +# it will cause the upcoming pg_upgrade test to fail.\n> >\n> > SUGGESTION\n> > # 1. Generate extra WAL records. At this point neither test_slot1 nor test_slot2\n> > # has consumed them.\n>\n> Fixed.\n>\n> > 4b.\n> > BEFORE\n> > +# 2. Advance the slot test_slot2 up to the current WAL location\n> >\n> > SUGGESTION\n> > # 2. Advance the slot test_slot2 up to the current WAL location, but test_slot2\n> > # still has unconsumed WAL records.\n>\n> IIUC, test_slot2 is caught up by pg_replication_slot_advance('test_slot2'). I think\n> \"but test_slot1 still has unconsumed WAL records.\" is appropriate. Fixed.\n>\n> > 5.\n> > +# pg_upgrade will fail because the slot still has unconsumed WAL records\n> > +command_checks_all(\n> >\n> > /because the slot still has/because there are slots still having/\n>\n> Fixed.\n>\n> > 6.\n> > + [qr//],\n> > + 'run of pg_upgrade of old cluster with slot having unconsumed WAL records'\n> > +);\n> >\n> > /slot/slots/\n>\n> Fixed.\n>\n> > 7.\n> > +# And check the content. Both of slots must be reported that they have\n> > +# unconsumed WALs after confirmed_flush_lsn.\n> >\n> > SUGGESTION\n> > # Check the file content. Both slots should be reporting that they have\n> > # unconsumed WAL records.\n>\n> Fixed.\n>\n> >\n> > 8.\n> > +# Preparations for the subsequent test:\n> > +# 1. Setup logical replication\n> > +my $old_connstr = $old_publisher->connstr . 
' dbname=postgres';\n> > +\n> > +$old_publisher->start;\n> > +\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT * FROM pg_drop_replication_slot('test_slot1');\");\n> > +$old_publisher->safe_psql('postgres',\n> > + \"SELECT * FROM pg_drop_replication_slot('test_slot2');\");\n> > +\n> > +$old_publisher->safe_psql('postgres',\n> > + \"CREATE PUBLICATION regress_pub FOR ALL TABLES;\");\n> >\n> >\n> > 8a.\n> > /Setup logical replication/Setup logical replication (first, cleanup\n> > slots from the previous tests)/\n>\n> Fixed.\n>\n> > 8b.\n> > Can't you combine all those SQL in the same $old_publisher->safe_psql.\n>\n> Combined.\n>\n> > 9.\n> > +\n> > +# Actual run, successful upgrade is expected\n> > +command_ok(\n> > + [\n> > + 'pg_upgrade', '--no-sync',\n> > + '-d', $old_publisher->data_dir,\n> > + '-D', $new_publisher->data_dir,\n> > + '-b', $bindir,\n> > + '-B', $bindir,\n> > + '-s', $new_publisher->host,\n> > + '-p', $old_publisher->port,\n> > + '-P', $new_publisher->port,\n> > + $mode,\n> > + ],\n> > + 'run of pg_upgrade of old cluster');\n> >\n> > Now that the \"Dry run\" part is removed, it seems unnecessary to say\n> > \"Actual run\" for this part.\n> >\n> >\n> > SUGGESTION\n> > # pg_upgrade should be successful.\n>\n> Fixed.\n>\n> [1]: https://www.postgresql.org/message-id/CAA4eK1%2BAHSWPs2_jn%3DftJKRqz-NXU6o%3DrPQ3f%3DH-gcPsgpPFrw%40mail.gmail.com\n>\n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n>\n\n\n",
"msg_date": "Thu, 19 Oct 2023 13:52:24 +0530",
"msg_from": "Shlok Kyal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
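Whether the new cluster can accommodate all of the old cluster's slots can be verified up front by comparing the logical slot count with the new cluster's setting. A minimal, illustrative check (not part of the patch):

    -- on the old cluster
    SELECT count(*) AS logical_slot_count
    FROM pg_replication_slots
    WHERE slot_type = 'logical';

    -- on the new cluster
    SHOW max_replication_slots;

The new cluster's max_replication_slots must be at least logical_slot_count for the slot restore step to succeed.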
{
"msg_contents": "Dear Shlok,\r\n\r\nThanks for testing the feature!\r\n\r\n> \r\n> I tested a test scenario:\r\n> I started a new publisher with 'max_replication_slots' parameter set\r\n> to '1' and created a streaming replication with the new publisher as\r\n> primary node.\r\n\r\nJust to confirm what you did - you set up a physical replication and the\r\ntarget of pg_upgrade was set to the primary, right?\r\n\r\nI think we can assume that new cluster (target of pg_upgrade) is not used yet.\r\nThe documentation describes the usage [1] and it says that we must initialize\r\nthe cluster (at step 4) and then run the pg_upgrade (at step 10).\r\n\r\nTherefore I don't think we should document anything about it.\r\n\r\n[1]: https://www.postgresql.org/docs/devel/pgupgrade.html#:~:text=Initialize%20the%20new%20PostgreSQL%20cluster\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 19 Oct 2023 09:24:04 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThanks for reviewing! PSA new version.\r\n\r\n> ======\r\n> src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\r\n> \r\n> 1.\r\n> + # 2. max_replication_slots is set to smaller than the number of slots (2)\r\n> + # present on the old cluster\r\n> \r\n> SUGGESTION\r\n> 2. Set 'max_replication_slots' to be less than the number of slots (2)\r\n> present on the old cluster.\r\n\r\nFixed.\r\n\r\n> 2.\r\n> + # Set max_replication_slots to the same value as the number of slots. Both\r\n> + # of slots will be used for subsequent tests.\r\n> \r\n> SUGGESTION\r\n> Set 'max_replication_slots' to match the number of slots (2) present\r\n> on the old cluster.\r\n> Both slots will be used for subsequent tests.\r\n\r\nFixed.\r\n\r\n> \r\n> 3.\r\n> + # 3. Emit a non-transactional message. test_slot2 detects the message so\r\n> + # that this slot will be also reported by upcoming pg_upgrade.\r\n> + $old_publisher->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\r\n> 'This is a non-transactional message');\"\r\n> + );\r\n> \r\n> SUGGESTION\r\n> 3. Emit a non-transactional message. This will cause test_slot2 to\r\n> detect the unconsumed WAL record.\r\n\r\nFixed.\r\n\r\n> \r\n> 4.\r\n> + # Preparations for the subsequent test:\r\n> + # 1. Generate extra WAL records. At this point neither test_slot1 nor\r\n> + # test_slot2 has consumed them.\r\n> + $old_publisher->start;\r\n> + $old_publisher->safe_psql('postgres',\r\n> + \"CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\");\r\n> +\r\n> + # 2. Advance the slot test_slot2 up to the current WAL location, but\r\n> + # test_slot1 still has unconsumed WAL records.\r\n> + $old_publisher->safe_psql('postgres',\r\n> + \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\r\n> +\r\n> + # 3. Emit a non-transactional message. test_slot2 detects the message so\r\n> + # that this slot will be also reported by upcoming pg_upgrade.\r\n> + $old_publisher->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\r\n> 'This is a non-transactional message');\"\r\n> + );\r\n> +\r\n> + $old_publisher->stop;\r\n> \r\n> All of the above are sequentially executed on the\r\n> old_publisher->safe_psql, so consider if it is worth combining them\r\n> all in a single call (keeping the comments 1,2,3 separate still)\r\n> \r\n> For example,\r\n> \r\n> $old_publisher->start;\r\n> $old_publisher->safe_psql('postgres', qq[\r\n> CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\r\n> SELECT pg_replication_slot_advance('test_slot2', NULL);\r\n> SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\r\n> 'This is a non-transactional message');\r\n> ]);\r\n> $old_publisher->stop;\r\n\r\nFixed.\r\n\r\n> \r\n> 5.\r\n> + # Clean up\r\n> + $subscriber->stop();\r\n> + $new_publisher->stop();\r\n> \r\n> Should this also drop the 'test_slot1' and 'test_slot2'?\r\n\r\n'test_slot1' and 'test_slot2' have already been removed while preparing in\r\n\"Successful upgrade\" case. Also, I don't think objects have to be removed at the\r\nend. It is tested by other parts, and it may make the test more difficult to\r\ndebug, if there are some failures.\r\n\r\n> 6.\r\n> +# Verify that logical replication slots cannot be migrated. 
This function\r\n> +# will be executed when the old cluster is PG16 and prior.\r\n> +sub test_upgrade_from_pre_PG17\r\n> +{\r\n> + my ($old_publisher, $new_publisher, $mode) = @_;\r\n> +\r\n> + my $oldbindir = $old_publisher->config_data('--bindir');\r\n> + my $newbindir = $new_publisher->config_data('--bindir');\r\n> \r\n> SUGGESTION (let's not mention lots of different numbers; just refer to 17)\r\n> This function will be executed when the old cluster version is prior to PG17.\r\n\r\nFixed.\r\n\r\n\r\n> 7.\r\n> + # Actual run, successful upgrade is expected\r\n> + command_ok(\r\n> + [\r\n> + 'pg_upgrade', '--no-sync',\r\n> + '-d', $old_publisher->data_dir,\r\n> + '-D', $new_publisher->data_dir,\r\n> + '-b', $oldbindir,\r\n> + '-B', $newbindir,\r\n> + '-s', $new_publisher->host,\r\n> + '-p', $old_publisher->port,\r\n> + '-P', $new_publisher->port,\r\n> + $mode,\r\n> + ],\r\n> + 'run of pg_upgrade of old cluster');\r\n> +\r\n> + ok( !-d $new_publisher->data_dir . \"/pg_upgrade_output.d\",\r\n> + \"pg_upgrade_output.d/ removed after pg_upgrade success\");\r\n> \r\n> 7a.\r\n> The comment is wrong?\r\n> \r\n> SUGGESTION\r\n> # pg_upgrade should NOT be successful\r\n\r\nNo, pg_uprade will success but no logical replication slots are migrated.\r\nComments docs were added.\r\n\r\n> 7b.\r\n> There is a blank line here before the ok() function, but in the other\r\n> tests, there was none. Better to be consistent.\r\n\r\nRemoved.\r\n\r\n> 8.\r\n> + # Clean up\r\n> + $new_publisher->stop();\r\n> \r\n> Should this also drop the 'test_slot'?\r\n\r\nI don't think so. Please see above.\r\n\r\n> \r\n> 9.\r\n> +# The TAP Cluster.pm assigns default 'max_wal_senders' and 'max_connections'\r\n> to\r\n> +# the same value (10) but PG12 and prior considered max_walsenders as a\r\n> subset\r\n> +# of max_connections, so setting the same value will fail.\r\n> +if ($old_publisher->pg_version->major < 12)\r\n> +{\r\n> + $old_publisher->append_conf(\r\n> + 'postgresql.conf', qq[\r\n> + max_wal_senders = 5\r\n> + max_connections = 10\r\n> + ]);\r\n> +}\r\n> \r\n> If the comment is correct, then PG12 *and* prior, should be testing\r\n> \"<= 12\", not \"< 12\". right?\r\n\r\nI analyzed more and I was wrong - we must set GUCs here only for PG9.6-.\r\nRegarding PG11 and PG10, the corresponding constructor will be chosen in new() [a],\r\nand these instance will set max_wal_senders to 5 [b]. \r\nAs for PG9.6-, the related package has not been defined yet so that such a\r\nworkaround will not be used. So we must set manually.\r\n\r\nActually, the part will be not needed when Cluster.pm supports PG9.6-. If needed\r\nwe can start another thread and support them. For now the case is handled ad-hoc.\r\n\r\n> 10.\r\n> +# Test according to the major version of the old cluster.\r\n> +# Upgrading logical replication slots has been supported only since PG17.\r\n> +if ($old_publisher->pg_version->major >= 17)\r\n> \r\n> This comment seems wrong IMO. I think we always running the latest\r\n> version of pg_upgrade so slot migration is always \"supported\" from now\r\n> on. IIUC you intended this comment to be saying something about the\r\n> old_publisher slots.\r\n> \r\n> BEFORE\r\n> Upgrading logical replication slots has been supported only since PG17.\r\n> \r\n> SUGGESTION\r\n> Upgrading logical replication slots from versions older than PG17 is\r\n> not supported.\r\n\r\nFixed.\r\n\r\n[a]:\r\n```\r\n\t# Use a subclass as defined below (or elsewhere) if this version\r\n\t# isn't fully compatible. 
Warn if the version is too old and thus we don't\r\n\t# have a subclass of this class.\r\n\tif (ref $ver && $ver < $min_compat)\r\n\t{\r\n\t\tmy $maj = $ver->major(separator => '_');\r\n\t\tmy $subclass = $class . \"::V_$maj\";\r\n\t\tif ($subclass->isa($class))\r\n\t\t{\r\n\t\t\tbless $node, $subclass;\r\n\t\t}\r\n```\r\n\r\n[b]:\r\n```\r\nsub init\r\n{\r\n\tmy ($self, %params) = @_;\r\n\t$self->SUPER::init(%params);\r\n\t$self->adjust_conf('postgresql.conf', 'max_wal_senders',\r\n\t\t$params{allows_streaming} ? 5 : 0);\r\n}\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 19 Oct 2023 10:42:50 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for reviewing! New patch can be available in [1].\r\n\r\n> \r\n> Few comments:\r\n> 1) We will be able to override the value of max_slot_wal_keep_size by\r\n> using --new-options like '--new-options \"-c\r\n> max_slot_wal_keep_size=val\"':\r\n> + /*\r\n> + * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\r\n> + * checkpointer process. If WALs required by logical replication slots\r\n> + * are removed, the slots are unusable. This setting prevents the\r\n> + * invalidation of slots during the upgrade. We set this option when\r\n> + * cluster is PG17 or later because logical replication slots\r\n> can only be\r\n> + * migrated since then. Besides, max_slot_wal_keep_size is\r\n> added in PG13.\r\n> + */\r\n> + if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\r\n> + appendPQExpBufferStr(&pgoptions, \" -c\r\n> max_slot_wal_keep_size=-1\");\r\n> \r\n> Should there be a check to throw an error if this option is specified\r\n> or do we need some documentation that this option should not be\r\n> specified?\r\n\r\nHmm, I don't think we have to add checks. Other settings, like synchronous_commit\r\nand fsync, can be also overwritten, but pg_upgrade has never checked. Therefore,\r\nit's user's responsibility to not set max_slot_wal_keep_size to a dangerous\r\nvalue.\r\n\r\n> 2) Because we are able to override max_slot_wal_keep_size there is a\r\n> chance of slot getting invalidated and Assert being hit:\r\n> + /*\r\n> + * The logical replication slots shouldn't be invalidated as\r\n> + * max_slot_wal_keep_size GUC is set to -1 during the\r\n> upgrade.\r\n> + *\r\n> + * The following is just a sanity check.\r\n> + */\r\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\r\n> + {\r\n> + Assert(max_slot_wal_keep_size_mb == -1);\r\n> + elog(ERROR, \"replication slots must not be\r\n> invalidated during the upgrade\");\r\n> + }\r\n\r\nHmm, so how about removing an assert and changing the error message more\r\nappropriate? I still think it seldom occurs.\r\n\r\n> 3) File 003_logical_replication_slots.pl is now changed to\r\n> 003_upgrade_logical_replication_slots.pl, it should be change here too\r\n> accordingly:\r\n> index 5834513add..815d1a7ca1 100644\r\n> --- a/src/bin/pg_upgrade/Makefile\r\n> +++ b/src/bin/pg_upgrade/Makefile\r\n> @@ -3,6 +3,9 @@\r\n> PGFILEDESC = \"pg_upgrade - an in-place binary upgrade utility\"\r\n> PGAPPICON = win32\r\n> \r\n> +# required for 003_logical_replication_slots.pl\r\n> +EXTRA_INSTALL=contrib/test_decoding\r\n> +\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB587007EA2F9AB92F0E1F5957F5D4A%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 19 Oct 2023 10:43:57 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\r\n\r\nThanks for reviewing! New patch can be available in [1].\r\n\r\n> Thanks for updating the patch, here are few comments for the test.\r\n> \r\n> 1.\r\n> \r\n> >\r\n> # The TAP Cluster.pm assigns default 'max_wal_senders' and 'max_connections'\r\n> to\r\n> # the same value (10) but PG12 and prior considered max_walsenders as a subset\r\n> # of max_connections, so setting the same value will fail.\r\n> if ($old_publisher->pg_version->major < 12)\r\n> {\r\n> \t$old_publisher->append_conf(\r\n> \t\t'postgresql.conf', qq[\r\n> \tmax_wal_senders = 5\r\n> \tmax_connections = 10\r\n> \t]);\r\n> >\r\n> \r\n> I think we already set max_wal_senders to 5 in init() function(in Cluster.pm),\r\n> so is this necessary ? And 002_pg_upgrade.pl doesn't seems set this.\r\n\r\nI thought you mentioned about Cluster::V_11::init(). I analyzed based on that and\r\nfound a fault. Could you please check [1]?\r\n\r\n> 2.\r\n> \r\n> \t\tSELECT pg_create_logical_replication_slot('test_slot1',\r\n> 'test_decoding', false, true);\r\n> \t\tSELECT pg_create_logical_replication_slot('test_slot2',\r\n> 'test_decoding', false, true);\r\n> \r\n> I think we don't need to set the last two parameters here as we don't check\r\n> these info in the tests.\r\n\r\nRemoved.\r\n\r\n> 3.\r\n> \r\n> # Set extra params if cross-version checks are required. This is needed to\r\n> # avoid using previously initdb'd cluster\r\n> if (defined($ENV{oldinstall}))\r\n> {\r\n> \tmy @initdb_params = ();\r\n> \tpush @initdb_params, ('--encoding', 'UTF-8');\r\n> \tpush @initdb_params, ('--locale', 'C');\r\n> \r\n> I am not sure I understand the comment, would it be possible provide a bit more\r\n> explanation about the purpose of this setting ? And I see 002_pg_upgrade always\r\n> have these setting even if oldinstall is not defined, so shall we follow the\r\n> same ?\r\n\r\nFixed.\r\nActually settings are not needed for new cluster, but seems better to follow 002.\r\n\r\n> 4.\r\n> \r\n> +\tcommand_ok(\r\n> +\t\t[\r\n> +\t\t\t'pg_upgrade', '--no-sync',\r\n> +\t\t\t'-d', $old_publisher->data_dir,\r\n> +\t\t\t'-D', $new_publisher->data_dir,\r\n> +\t\t\t'-b', $oldbindir,\r\n> +\t\t\t'-B', $newbindir,\r\n> +\t\t\t'-s', $new_publisher->host,\r\n> +\t\t\t'-p', $old_publisher->port,\r\n> +\t\t\t'-P', $new_publisher->port,\r\n> +\t\t\t$mode,\r\n> +\t\t],\r\n> \r\n> I think all the pg_upgrade commands in the test are the same, so we can save the\r\n> cmd\r\n> in a variable and pass them to command_xx(). I think it can save some effort to\r\n> check the difference of each command and can also reduce some codes.\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB587007EA2F9AB92F0E1F5957F5D4A%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 19 Oct 2023 10:44:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Shlok,\r\n\r\n> \r\n> I have tested the above scenario. We are able to override the\r\n> max_slot_wal_keep_size by using '--new-options \"-c\r\n> max_slot_wal_keep_size=val\"'. And also with some insert statements\r\n> during pg_upgrade, old WAL file were deleted and logical replication\r\n> slots were invalidated. Since the slots were invalidated replication\r\n> was not happening after the upgrade.\r\n\r\nYeah, theoretically it could be overwritten, but I still think we do not have to\r\nguard. Also, connections must not be established during the upgrade [1].\r\nI improved the ereport() message in the new patch[2]. How do you think?\r\n\r\n[1]: https://www.postgresql.org/message-id/ZNZ4AxUMIrnMgRbo%40momjian.us\r\n[2]: https://www.postgresql.org/message-id/TYCPR01MB587007EA2F9AB92F0E1F5957F5D4A%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n\r\n",
"msg_date": "Thu, 19 Oct 2023 10:45:26 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for revieing! New patch can be available in [1].\r\n\r\n> Few comments:\r\n> 1) Even if we comment 3rd point \"Emit a non-transactional message\",\r\n> test_slot2 still appears in the invalid_logical_replication_slots.txt\r\n> file. There is something wrong here.\r\n> + # 2. Advance the slot test_slot2 up to the current WAL location, but\r\n> + # test_slot1 still has unconsumed WAL records.\r\n> + $old_publisher->safe_psql('postgres',\r\n> + \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\r\n> +\r\n> + # 3. Emit a non-transactional message. test_slot2 detects the message\r\n> so\r\n> + # that this slot will be also reported by upcoming pg_upgrade.\r\n> + $old_publisher->safe_psql('postgres',\r\n> + \"SELECT count(*) FROM pg_logical_emit_message('false',\r\n> 'prefix', 'This is a non-transactional message');\"\r\n> + );\r\n\r\nThe comment was updated based on others. How do you think?\r\n\r\n> 2) If the test fails here, it is difficult to debug as the\r\n> pg_upgrade_output.d directory was removed, so better to keep the\r\n> directory as it is this case:\r\n> + # Check the file content. Both slots should be reporting that they have\r\n> + # unconsumed WAL records.\r\n> + like(\r\n> + slurp_file($slots_filename),\r\n> + qr/The slot \\\"test_slot1\\\" has not consumed the WAL yet/m,\r\n> + 'the previous test failed due to unconsumed WALs');\r\n> + like(\r\n> + slurp_file($slots_filename),\r\n> + qr/The slot \\\"test_slot2\\\" has not consumed the WAL yet/m,\r\n> + 'the previous test failed due to unconsumed WALs');\r\n> +\r\n> + # Clean up\r\n> + rmtree($new_publisher->data_dir . \"/pg_upgrade_output.d\");\r\n\r\nRight. Current style just follows the 002 test. I removed rmtree().\r\n\r\n> 3) The below could be changed:\r\n> + # Check the file content. Both slots should be reporting that they have\r\n> + # unconsumed WAL records.\r\n> + like(\r\n> + slurp_file($slots_filename),\r\n> + qr/The slot \\\"test_slot1\\\" has not consumed the WAL yet/m,\r\n> + 'the previous test failed due to unconsumed WALs');\r\n> + like(\r\n> + slurp_file($slots_filename),\r\n> + qr/The slot \\\"test_slot2\\\" has not consumed the WAL yet/m,\r\n> + 'the previous test failed due to unconsumed WALs');\r\n> \r\n> to:\r\n> my $result = slurp_file($slots_filename);\r\n> is( $result, qq(The slot \"test_slot1\" has not consumed the WAL yet\r\n> The slot \"test_slot2\" has not consumed the WAL yet\r\n> ),\r\n> 'the previous test failed due to unconsumed WALs');\r\n>\r\n\r\nReplaced, but the formatting seems not good. I wanted to hear opinions from others.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB587007EA2F9AB92F0E1F5957F5D4A%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 19 Oct 2023 10:45:53 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> Thanks for reviewing! PSA new version.\r\n\r\nHmm. The cfbot got angry, whereas it can pass on my machine.\r\nIt seems that the ordering in invalid_logical_replication_slots.txt is not fixed.\r\n\r\nA change for checking the content was reverted. It could pass on my CI.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 19 Oct 2023 12:53:51 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
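If deterministic output were wanted, the slot information could be collected with an explicit ordering; this is only a sketch, and the query the patch actually uses may differ:

    SELECT slot_name, plugin, two_phase
    FROM pg_replication_slots
    WHERE slot_type = 'logical' AND NOT temporary
    ORDER BY slot_name;

Keeping the two independent like() checks instead avoids relying on any server-side ordering of the report.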
{
"msg_contents": "Here are some review comments for v54-0001\n\n======\nsrc/backend/replication/slot.c\n\n1.\n+ if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n+ {\n+ ereport(ERROR,\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"replication slots must not be invalidated during the upgrade\"),\n+ errhint(\"\\\"max_slot_wal_keep_size\\\" must not be set to -1 during the\nupgrade\"));\n+ }\n\nThis new error is replacing the old code:\n+ Assert(max_slot_wal_keep_size_mb == -1);\n\nIs that errhint correct? Shouldn't it say \"must\" instead of \"must not\"?\n\n======\nsrc/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\n\n2. General formating\n\nSome of the \"]);\" formatting and indenting for the multiple SQL\ncommands is inconsistent.\n\nFor example,\n\n+ $old_publisher->safe_psql(\n+ 'postgres', qq[\n+ SELECT pg_create_logical_replication_slot('test_slot1', 'test_decoding');\n+ SELECT pg_create_logical_replication_slot('test_slot2', 'test_decoding');\n+ ]\n+ );\n\nversus\n\n+ $old_publisher->safe_psql(\n+ 'postgres', qq[\n+ CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a;\n+ SELECT pg_replication_slot_advance('test_slot2', NULL);\n+ SELECT count(*) FROM pg_logical_emit_message('false', 'prefix',\n'This is a non-transactional message');\n+ ]);\n\n~~~\n\n3.\n+# Set up some settings for the old cluster, so that we can ensures that initdb\n+# will be done.\n+my @initdb_params = ();\n+push @initdb_params, ('--encoding', 'UTF-8');\n+push @initdb_params, ('--locale', 'C');\n+$node_params{extra} = \\@initdb_params;\n+\n+$old_publisher->init(%node_params);\n\nWhy would initdb not be done if these were not set? I didn't\nunderstand the comment.\n\n/so that we can ensures/to ensure/\n\n~~~\n\n4.\n+# XXX: For PG9.6 and prior, the TAP Cluster.pm assigns 'max_wal_senders' and\n+# 'max_connections' to the same value (10). But these versions considered\n+# max_wal_senders as a subset of max_connections, so setting the same value\n+# will fail. This adjustment will not be needed when packages for older\n+#versions are defined.\n+if ($old_publisher->pg_version->major <= 9.6)\n+{\n+ $old_publisher->append_conf(\n+ 'postgresql.conf', qq[\n+ max_wal_senders = 5\n+ max_connections = 10\n+ ]);\n+}\n\n4a.\nIMO remove the complicated comment trying to explain the problem and\njust to unconditionally set the values you want.\n\nSUGGESTION#1\n# Older PG version had different rules for the inter-dependency of\n'max_wal_senders' and 'max_connections',\n# so assign values which will work for all PG versions.\n$old_publisher->append_conf(\n 'postgresql.conf', qq[\n max_wal_senders = 5\n max_connections = 10\n ]);\n\n~~\n\n4b.\nIf you really want to put special code here then I think the comment\nneeds to be more descriptive like below. IMO this suggestion is\noverkill, #4a above is much simpler.\n\nSUGGESTION#2\n# Versions prior to PG12 considered max_walsenders as a subset\nmax_connections, so setting the same value will fail.\n#\n# The TAP Cluster.pm assigns default 'max_wal_senders' and\n'max_connections' as follows:\n# PG_11: 'max_wal_senders=5' and 'max_connections=10'\n# PG_10: 'max_wal_senders=5' and 'max_connections=10'\n# Everything else: 'max_wal_senders=10' and 'max_connections=10'\n#\n# The following code is needed to make adjustments for versions not\nalready being handled by Cluster.pm.\n\n~\n\n4c.\nAlternatively, make necessary adjustments in the Cluster.pm to set\nappropriate defaults for all older versions. 
Then probably you can\nremove all this code entirely.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 20 Oct 2023 12:49:59 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, 19 Oct 2023 at 16:14, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for reviewing! New patch can be available in [1].\n>\n> >\n> > Few comments:\n> > 1) We will be able to override the value of max_slot_wal_keep_size by\n> > using --new-options like '--new-options \"-c\n> > max_slot_wal_keep_size=val\"':\n> > + /*\n> > + * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n> > + * checkpointer process. If WALs required by logical replication slots\n> > + * are removed, the slots are unusable. This setting prevents the\n> > + * invalidation of slots during the upgrade. We set this option when\n> > + * cluster is PG17 or later because logical replication slots\n> > can only be\n> > + * migrated since then. Besides, max_slot_wal_keep_size is\n> > added in PG13.\n> > + */\n> > + if (GET_MAJOR_VERSION(cluster->major_version) >= 1700)\n> > + appendPQExpBufferStr(&pgoptions, \" -c\n> > max_slot_wal_keep_size=-1\");\n> >\n> > Should there be a check to throw an error if this option is specified\n> > or do we need some documentation that this option should not be\n> > specified?\n>\n> Hmm, I don't think we have to add checks. Other settings, like synchronous_commit\n> and fsync, can be also overwritten, but pg_upgrade has never checked. Therefore,\n> it's user's responsibility to not set max_slot_wal_keep_size to a dangerous\n> value.\n>\n> > 2) Because we are able to override max_slot_wal_keep_size there is a\n> > chance of slot getting invalidated and Assert being hit:\n> > + /*\n> > + * The logical replication slots shouldn't be invalidated as\n> > + * max_slot_wal_keep_size GUC is set to -1 during the\n> > upgrade.\n> > + *\n> > + * The following is just a sanity check.\n> > + */\n> > + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> > + {\n> > + Assert(max_slot_wal_keep_size_mb == -1);\n> > + elog(ERROR, \"replication slots must not be\n> > invalidated during the upgrade\");\n> > + }\n>\n> Hmm, so how about removing an assert and changing the error message more\n> appropriate? I still think it seldom occurs.\n\nAs this scenario can occur by overriding max_slot_wal_keep_size, it is\nbetter to remove the Assert.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 20 Oct 2023 08:49:08 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
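As a side note, whether a logical slot has already been invalidated by WAL removal is visible in pg_replication_slots; a wal_status of 'lost' means the WAL the slot needs is gone. Illustration only:

    SELECT slot_name, wal_status, restart_lsn, confirmed_flush_lsn
    FROM pg_replication_slots
    WHERE slot_type = 'logical';

Seeing 'lost' here after overriding max_slot_wal_keep_size during the upgrade is exactly the situation the error message is guarding against.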
{
"msg_contents": "On Thu, 19 Oct 2023 at 16:16, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for revieing! New patch can be available in [1].\n>\n> > Few comments:\n> > 1) Even if we comment 3rd point \"Emit a non-transactional message\",\n> > test_slot2 still appears in the invalid_logical_replication_slots.txt\n> > file. There is something wrong here.\n> > + # 2. Advance the slot test_slot2 up to the current WAL location, but\n> > + # test_slot1 still has unconsumed WAL records.\n> > + $old_publisher->safe_psql('postgres',\n> > + \"SELECT pg_replication_slot_advance('test_slot2', NULL);\");\n> > +\n> > + # 3. Emit a non-transactional message. test_slot2 detects the message\n> > so\n> > + # that this slot will be also reported by upcoming pg_upgrade.\n> > + $old_publisher->safe_psql('postgres',\n> > + \"SELECT count(*) FROM pg_logical_emit_message('false',\n> > 'prefix', 'This is a non-transactional message');\"\n> > + );\n>\n> The comment was updated based on others. How do you think?\n\nI mean if we comment or remove this statement like in the attached\npatch, the test is still passing with 'The slot \"test_slot2\" has not\nconsumed the WAL yet', in this case should the test_slot2 be still\ninvalid as we have called pg_replication_slot_advance for test_slot2.\n\nRegards,\nVignesh",
"msg_date": "Fri, 20 Oct 2023 08:54:23 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Friday, October 20, 2023 9:50 AM Peter Smith <[email protected]> wrote:\r\n> \r\n> Here are some review comments for v54-0001\r\n\r\nThanks for the review.\r\n\r\n> \r\n> ======\r\n> src/backend/replication/slot.c\r\n> \r\n> 1.\r\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade) {\r\n> + ereport(ERROR, errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"replication slots must not be invalidated during the\r\n> + upgrade\"), errhint(\"\\\"max_slot_wal_keep_size\\\" must not be set to -1\r\n> + during the\r\n> upgrade\"));\r\n> + }\r\n> \r\n> This new error is replacing the old code:\r\n> + Assert(max_slot_wal_keep_size_mb == -1);\r\n> \r\n> Is that errhint correct? Shouldn't it say \"must\" instead of \"must not\"?\r\n\r\nFixed.\r\n\r\n> \r\n> ======\r\n> src/bin/pg_upgrade/t/003_upgrade_logical_replication_slots.pl\r\n> \r\n> 2. General formating\r\n> \r\n> Some of the \"]);\" formatting and indenting for the multiple SQL commands is\r\n> inconsistent.\r\n> \r\n> For example,\r\n> \r\n> + $old_publisher->safe_psql(\r\n> + 'postgres', qq[\r\n> + SELECT pg_create_logical_replication_slot('test_slot1',\r\n> + 'test_decoding'); SELECT\r\n> + pg_create_logical_replication_slot('test_slot2', 'test_decoding'); ]\r\n> + );\r\n> \r\n> versus\r\n> \r\n> + $old_publisher->safe_psql(\r\n> + 'postgres', qq[\r\n> + CREATE TABLE tbl AS SELECT generate_series(1, 10) AS a; SELECT\r\n> + pg_replication_slot_advance('test_slot2', NULL); SELECT count(*) FROM\r\n> + pg_logical_emit_message('false', 'prefix',\r\n> 'This is a non-transactional message');\r\n> + ]);\r\n> \r\n\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 3.\r\n> +# Set up some settings for the old cluster, so that we can ensures that\r\n> +initdb # will be done.\r\n> +my @initdb_params = ();\r\n> +push @initdb_params, ('--encoding', 'UTF-8'); push @initdb_params,\r\n> +('--locale', 'C'); $node_params{extra} = \\@initdb_params;\r\n> +\r\n> +$old_publisher->init(%node_params);\r\n> \r\n> Why would initdb not be done if these were not set? I didn't understand the\r\n> comment.\r\n> \r\n> /so that we can ensures/to ensure/\r\n\r\nThe node->init() will use a previously initialized cluster if no parameter was\r\nspecified, but that cluster could be of wrong version when doing cross-version\r\ntest, so we set something to let the initdb happen.\r\n\r\nI added some explanation in the comment.\r\n\r\n> ~~~\r\n> \r\n> 4.\r\n> +# XXX: For PG9.6 and prior, the TAP Cluster.pm assigns\r\n> +'max_wal_senders' and # 'max_connections' to the same value (10). But\r\n> +these versions considered # max_wal_senders as a subset of\r\n> +max_connections, so setting the same value # will fail. 
This adjustment\r\n> +will not be needed when packages for older #versions are defined.\r\n> +if ($old_publisher->pg_version->major <= 9.6) {\r\n> +$old_publisher->append_conf( 'postgresql.conf', qq[ max_wal_senders =\r\n> +5 max_connections = 10 ]); }\r\n> \r\n> 4a.\r\n> IMO remove the complicated comment trying to explain the problem and just\r\n> to unconditionally set the values you want.\r\n> \r\n> SUGGESTION#1\r\n> # Older PG version had different rules for the inter-dependency of\r\n> 'max_wal_senders' and 'max_connections', # so assign values which will work\r\n> for all PG versions.\r\n> $old_publisher->append_conf(\r\n> 'postgresql.conf', qq[\r\n> max_wal_senders = 5\r\n> max_connections = 10\r\n> ]);\r\n> \r\n> ~~\r\n\r\nAs Kuroda-san mentioned, we may fix Cluster.pm later, so I kept the XXX comment\r\nbut simplify it based on your suggestion.\r\n\r\nAttach the new version patch.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Fri, 20 Oct 2023 15:20:51 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Friday, October 20, 2023 11:24 AM vignesh C <[email protected]> wrote:\r\n> \r\n> On Thu, 19 Oct 2023 at 16:16, Hayato Kuroda (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> > Dear Vignesh,\r\n> >\r\n> > Thanks for revieing! New patch can be available in [1].\r\n> >\r\n> > > Few comments:\r\n> > > 1) Even if we comment 3rd point \"Emit a non-transactional message\",\r\n> > > test_slot2 still appears in the\r\n> > > invalid_logical_replication_slots.txt\r\n> > > file. There is something wrong here.\r\n> > > + # 2. Advance the slot test_slot2 up to the current WAL location,\r\n> but\r\n> > > + # test_slot1 still has unconsumed WAL records.\r\n> > > + $old_publisher->safe_psql('postgres',\r\n> > > + \"SELECT pg_replication_slot_advance('test_slot2',\r\n> > > + NULL);\");\r\n> > > +\r\n> > > + # 3. Emit a non-transactional message. test_slot2 detects\r\n> > > + the message\r\n> > > so\r\n> > > + # that this slot will be also reported by upcoming\r\n> pg_upgrade.\r\n> > > + $old_publisher->safe_psql('postgres',\r\n> > > + \"SELECT count(*) FROM\r\n> > > + pg_logical_emit_message('false',\r\n> > > 'prefix', 'This is a non-transactional message');\"\r\n> > > + );\r\n> >\r\n> > The comment was updated based on others. How do you think?\r\n> \r\n> I mean if we comment or remove this statement like in the attached patch, the\r\n> test is still passing with 'The slot \"test_slot2\" has not consumed the WAL yet', in\r\n> this case should the test_slot2 be still invalid as we have called\r\n> pg_replication_slot_advance for test_slot2.\r\n\r\nIt's because we pass NULL to pg_replication_slot_advance(). We should pass \r\npg_current_wal_lsn() instead. I have fixed it in V55 version.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Fri, 20 Oct 2023 15:21:23 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
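The corrected statement advances the slot to the current write position explicitly, for example:

    SELECT pg_replication_slot_advance('test_slot2', pg_current_wal_lsn());

Passing NULL as the target LSN does not advance the slot at all, which is why test_slot2 was still reported as having unconsumed WAL in the earlier runs.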
{
"msg_contents": "On Fri, Oct 20, 2023 at 8:51 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Attach the new version patch.\n\nThanks. Here are some comments on v55 patch:\n\n1. A nit:\n+\n+ /*\n+ * We also skip decoding in 'fast_forward' mode. In passing set the\n+ * 'processing_required' flag to indicate, were it not for this mode,\n+ * processing *would* have been required.\n+ */\nHow about \"We also skip decoding in fast_forward mode. In passing set\nthe processing_required flag to indicate that if it were not for\nfast_forward mode, processing would have been required.\"?\n\n2. Don't we need InvalidateSystemCaches() after FreeDecodingContext()?\n\n+ /* Clean up */\n+ FreeDecodingContext(ctx);\n\n3. Don't we need to put CreateDecodingContext in PG_TRY-PG_CATCH with\nInvalidateSystemCaches() in PG_CATCH block? I think we need to clear\nall timetravel entries with InvalidateSystemCaches(), no?\n\n4. The following assertion better be an error? Or we ensure that\nbinary_upgrade_slot_has_caught_up isn't called for an invalidated slot\nat all?\n+\n+ /* Slots must be valid as otherwise we won't be able to scan the WAL */\n+ Assert(MyReplicationSlot->data.invalidated == RS_INVAL_NONE);\n\n5. This better be an error instead of returning false? IMO, null value\nfor slot name is an error.\n+ /* Quick exit if the input is NULL */\n+ if (PG_ARGISNULL(0))\n+ PG_RETURN_BOOL(false);\n\n6. A nit: how about is_decodable_txn or is_decodable_change or some\nother instead of just a plain name processing_required?\n+ /* Do we need to process any change in 'fast_forward' mode? */\n+ bool processing_required;\n\n7. Can the following pg_fatal message be consistent and start with\nlowercase letter something like \"expected 0 logical replication slots\n....\"?\n+ pg_fatal(\"Expected 0 logical replication slots but found %d.\",\n+ nslots_on_new);\n\n8. s/problem/problematic - \"A list of problematic slots is in the file:\\n\"\n+ \"A list of the problem slots is in the file:\\n\"\n\n9. IMO, binary_upgrade_logical_replication_slot_has_caught_up seems\nbetter, meaningful and consistent despite a bit long than just\nbinary_upgrade_slot_has_caught_up.\n\n10. How about an assert that the passed-in replication slot is logical\nin binary_upgrade_slot_has_caught_up?\n\n11. How about adding CheckLogicalDecodingRequirements too in\nbinary_upgrade_slot_has_caught_up after CheckSlotPermissions just in\ncase?\n\n12. Not necessary but adding ReplicationSlotValidateName(slot_name,\nERROR); for the passed-in slotname in\nbinary_upgrade_slot_has_caught_up may be a good idea, at least in\nassert builds to help with input validations.\n\n13. Can the functionality of LogicalReplicationSlotHasPendingWal be\nmoved to binary_upgrade_slot_has_caught_up and get rid of a separate\nfunction LogicalReplicationSlotHasPendingWal? Or is it that the\nfunction exists in logical.c to avoid extra dependencies between\nlogical.c and pg_upgrade_support.c?\n\n14. I think it's better to check if the old cluster contains the\nnecessary function binary_upgrade_slot_has_caught_up instead of just\nrelying on major version.\n+ /* Logical slots can be migrated since PG17. */\n+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\n+ return;\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 21 Oct 2023 05:41:46 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> 1. A nit:\r\n> +\r\n> + /*\r\n> + * We also skip decoding in 'fast_forward' mode. In passing set the\r\n> + * 'processing_required' flag to indicate, were it not for this mode,\r\n> + * processing *would* have been required.\r\n> + */\r\n> How about \"We also skip decoding in fast_forward mode. In passing set\r\n> the processing_required flag to indicate that if it were not for\r\n> fast_forward mode, processing would have been required.\"?\r\n\r\nFixed.\r\n\r\n> 2. Don't we need InvalidateSystemCaches() after FreeDecodingContext()?\r\n> \r\n> + /* Clean up */\r\n> + FreeDecodingContext(ctx);\r\n\r\nRight. Older system caches should be thrown away here for upcoming pg_dump.\r\n\r\n> 3. Don't we need to put CreateDecodingContext in PG_TRY-PG_CATCH with\r\n> InvalidateSystemCaches() in PG_CATCH block? I think we need to clear\r\n> all timetravel entries with InvalidateSystemCaches(), no?\r\n\r\nAdded.\r\n\r\n> 4. The following assertion better be an error? Or we ensure that\r\n> binary_upgrade_slot_has_caught_up isn't called for an invalidated slot\r\n> at all?\r\n> +\r\n> + /* Slots must be valid as otherwise we won't be able to scan the WAL */\r\n> + Assert(MyReplicationSlot->data.invalidated == RS_INVAL_NONE);\r\n\r\nI kept the Assert() because pg_upgrade won't call this function for invalidated\r\nslots.\r\n\r\n> 5. This better be an error instead of returning false? IMO, null value\r\n> for slot name is an error.\r\n> + /* Quick exit if the input is NULL */\r\n> + if (PG_ARGISNULL(0))\r\n> + PG_RETURN_BOOL(false);\r\n\r\nHmm, OK, changed to elog(ERROR).\r\nIf current style is kept and NULL were to input, an empty string may be reported\r\nas slotname in invalid_logical_replication_slots.txt. It is quite strange. Note\r\nagain that it won't be expected.\r\n\r\n> 6. A nit: how about is_decodable_txn or is_decodable_change or some\r\n> other instead of just a plain name processing_required?\r\n> + /* Do we need to process any change in 'fast_forward' mode? */\r\n> + bool processing_required;\r\n\r\nI preferred current one. Because not only decodable txn, non-txn change and\r\nempty transactions also be processed.\r\n\r\n> 7. Can the following pg_fatal message be consistent and start with\r\n> lowercase letter something like \"expected 0 logical replication slots\r\n> ....\"?\r\n> + pg_fatal(\"Expected 0 logical replication slots but found %d.\",\r\n> + nslots_on_new);\r\n\r\nNote that the Upper/Lower case rule has been broken in this file. Lower case was\r\nused here because I regarded this sentence as hint message. Please see previous\r\nposts [1] [2].\r\n\r\n\r\n> 8. s/problem/problematic - \"A list of problematic slots is in the file:\\n\"\r\n> + \"A list of the problem slots is in the file:\\n\"\r\n\r\nFixed.\r\n\r\n> 9. IMO, binary_upgrade_logical_replication_slot_has_caught_up seems\r\n> better, meaningful and consistent despite a bit long than just\r\n> binary_upgrade_slot_has_caught_up.\r\n\r\nFixed.\r\n\r\n> 10. How about an assert that the passed-in replication slot is logical\r\n> in binary_upgrade_slot_has_caught_up?\r\n\r\nFixed.\r\n\r\n> 11. How about adding CheckLogicalDecodingRequirements too in\r\n> binary_upgrade_slot_has_caught_up after CheckSlotPermissions just in\r\n> case?\r\n\r\nNot added. CheckLogicalDecodingRequirements() ensures that WALs can be decodable\r\nand the changes can be applied, but both of them are not needed for fast_forward\r\nmode. 
Also, pre-existing function pg_logical_replication_slot_advance() does not\r\ncall it.\r\n\r\n> 12. Not necessary but adding ReplicationSlotValidateName(slot_name,\r\n> ERROR); for the passed-in slotname in\r\n> binary_upgrade_slot_has_caught_up may be a good idea, at least in\r\n> assert builds to help with input validations.\r\n\r\nNot added because ReplicationSlotAcquire() can report even if invalid name is\r\nadded. Also, pre-existing function pg_logical_replication_slot_advance() does not\r\ncall it.\r\n\r\n> 13. Can the functionality of LogicalReplicationSlotHasPendingWal be\r\n> moved to binary_upgrade_slot_has_caught_up and get rid of a separate\r\n> function LogicalReplicationSlotHasPendingWal? Or is it that the\r\n> function exists in logical.c to avoid extra dependencies between\r\n> logical.c and pg_upgrade_support.c?\r\n\r\nI kept current style. I think upgrade functions should be short so that actual\r\ntasks should be done in other place. SetAttrMissing() is called only from an\r\nupgrading function, so we do not have a policy to avoid deviding function.\r\nAlso, LogicalDecodingProcessRecord() is called from only files in src/backend/replication,\r\nso we can keep them.\r\n\r\n> 14. I think it's better to check if the old cluster contains the\r\n> necessary function binary_upgrade_slot_has_caught_up instead of just\r\n> relying on major version.\r\n> + /* Logical slots can be migrated since PG17. */\r\n> + if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1600)\r\n> + return;\r\n\r\nI kept current style because I could not find a merit for the approach. If the\r\npatch is committed PG17.X surely has binary_upgrade_logical_replication_slot_has_caught_up().\r\nAlso, other upgrading function are not checked from the pg_proc catalog. If you\r\nhave some other things in your mind, please reply here.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB586642D33208D190F67CDD7BF5F2A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB58666936A0DB0EEDCC929CEEF5FEA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 23 Oct 2023 05:39:59 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Oct 23, 2023 at 11:10 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Thank you for reviewing! PSA new version.\n\n> > 6. A nit: how about is_decodable_txn or is_decodable_change or some\n> > other instead of just a plain name processing_required?\n> > + /* Do we need to process any change in 'fast_forward' mode? */\n> > + bool processing_required;\n>\n> I preferred current one. Because not only decodable txn, non-txn change and\n> empty transactions also be processed.\n\nRight. It's not the txn, but the change. processing_required seems too\ngeneric IMV. A nit: is_change_decodable or something?\n\nThanks for the patch. Here are few comments on v56 patch:\n\n1.\n+ *\n+ * Although this function is currently used only during pg_upgrade, there are\n+ * no reasons to restrict it, so IsBinaryUpgrade is not checked here.\n\nThis comment isn't required IMV, because anyone looking at the code\nand callsites can understand it.\n\n2. A nit: IMV \"This is a special purpose ...\" statement seems redundant.\n+ *\n+ * This is a special purpose function to ensure that the given slot can be\n+ * upgraded without data loss.\n\nHow about\n\nVerify that the given replication slot has consumed all the WAL changes.\nIf there's any decodable WAL record after the slot's\nconfirmed_flush_lsn, the slot's consumer will lose that data after the\nslot is upgraded.\nReturns true if there are no decodable WAL records after the\nconfirmed_flush_lsn. Otherwise false.\n\n3.\n+ if (PG_ARGISNULL(0))\n+ elog(ERROR, \"null argument to\nbinary_upgrade_validate_wal_records is not allowed\");\n\nI can see the above style is referenced from\nbinary_upgrade_create_empty_extension, but IMV the following looks\nbetter and latest (ereport is new style than elog)\n\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"replication slot name cannot be null\")));\n\n4. The following comment seems frivolous, the code tells it all.\nPlease remove the comment.\n+\n+ /* No need to check this slot, seek to new one */\n+ continue;\n\n5. A typo - s/gets/Gets\n+ * gets the LogicalSlotInfos for all the logical replication slots of the\n\n6. An optimization in count_old_cluster_logical_slots(void): Turn\nslot_count to a function static variable so that the for loop isn't\nrequired every time because the slot count is prepared in\nget_old_cluster_logical_slot_infos only once and won't change later\non. Do you see any problem with the following? This saves a few CPU\ncycles when there are large number of replication slots.\n{\n static int slot_count = 0;\n static bool first_time = true;\n\n if (first_time)\n {\n for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\n\n first_time = false;\n }\n\n return slot_count;\n}\n\n7. A typo: s/slotname/slot name. \"slot name\" looks better in user\nvisible messages.\n+ pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n\n8.\n+else\n+{\n+ test_upgrade_from_pre_PG17($old_publisher, $new_publisher,\n+ @pg_upgrade_cmd);\n+}\nWill this ever be tested in current TAP test framework? I mean, will\nthe TAP test framework allow testing upgrades from one PG version to\nanother PG version?\n\n9. A nit: Can single quotes around variable names in the comments be\nremoved just to be consistent?\n+ * We also skip decoding in 'fast_forward' mode. This check must be last\n+ /* Do we need to process any change in 'fast_forward' mode? 
*/\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 Oct 2023 14:00:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Oct 23, 2023 at 2:00 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Oct 23, 2023 at 11:10 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Thank you for reviewing! PSA new version.\n>\n> > > 6. A nit: how about is_decodable_txn or is_decodable_change or some\n> > > other instead of just a plain name processing_required?\n> > > + /* Do we need to process any change in 'fast_forward' mode? */\n> > > + bool processing_required;\n> >\n> > I preferred current one. Because not only decodable txn, non-txn change and\n> > empty transactions also be processed.\n>\n> Right. It's not the txn, but the change. processing_required seems too\n> generic IMV. A nit: is_change_decodable or something?\n>\n\nIf we don't want to keep it generic then we should use something like\n'contains_decodable_change'. 'is_change_decodable' could have suited\nhere if we were checking a particular change.\n\n> Thanks for the patch. Here are few comments on v56 patch:\n>\n> 1.\n> + *\n> + * Although this function is currently used only during pg_upgrade, there are\n> + * no reasons to restrict it, so IsBinaryUpgrade is not checked here.\n>\n> This comment isn't required IMV, because anyone looking at the code\n> and callsites can understand it.\n>\n> 2. A nit: IMV \"This is a special purpose ...\" statement seems redundant.\n> + *\n> + * This is a special purpose function to ensure that the given slot can be\n> + * upgraded without data loss.\n>\n> How about\n>\n> Verify that the given replication slot has consumed all the WAL changes.\n> If there's any decodable WAL record after the slot's\n> confirmed_flush_lsn, the slot's consumer will lose that data after the\n> slot is upgraded.\n> Returns true if there are no decodable WAL records after the\n> confirmed_flush_lsn. Otherwise false.\n>\n\nPersonally, I find the current comment succinct and clear.\n\n> 3.\n> + if (PG_ARGISNULL(0))\n> + elog(ERROR, \"null argument to\n> binary_upgrade_validate_wal_records is not allowed\");\n>\n> I can see the above style is referenced from\n> binary_upgrade_create_empty_extension, but IMV the following looks\n> better and latest (ereport is new style than elog)\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"replication slot name cannot be null\")));\n>\n\nDo you have any theory for making elog to ereport? I am not completely\nsure but as this and related function is used internally, so using\nelog seems reasonable. Also, I find keeping it consistent with the\nexisting error message is also reasonable. We can change both later\ntogether if we get a broader agreement.\n\n> 4. The following comment seems frivolous, the code tells it all.\n> Please remove the comment.\n> +\n> + /* No need to check this slot, seek to new one */\n> + continue;\n>\n> 5. A typo - s/gets/Gets\n> + * gets the LogicalSlotInfos for all the logical replication slots of the\n>\n> 6. An optimization in count_old_cluster_logical_slots(void): Turn\n> slot_count to a function static variable so that the for loop isn't\n> required every time because the slot count is prepared in\n> get_old_cluster_logical_slot_infos only once and won't change later\n> on. Do you see any problem with the following? 
This saves a few CPU\n> cycles when there are large number of replication slots.\n> {\n> static int slot_count = 0;\n> static bool first_time = true;\n>\n> if (first_time)\n> {\n> for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n> slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\n>\n> first_time = false;\n> }\n>\n> return slot_count;\n> }\n>\n\nThis may not be a problem but this is also not a function that will be\nused frequently. I am not sure if adding such code optimizations is\nworth it.\n\n> 7. A typo: s/slotname/slot name. \"slot name\" looks better in user\n> visible messages.\n> + pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n>\n\nIf we want to follow other parameters then we can even use slot_name.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Oct 2023 09:46:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Sat, Oct 21, 2023 at 5:41 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Oct 20, 2023 at 8:51 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n\n>\n> 9. IMO, binary_upgrade_logical_replication_slot_has_caught_up seems\n> better, meaningful and consistent despite a bit long than just\n> binary_upgrade_slot_has_caught_up.\n>\n\nI think logical_replication is specific to our pub-sub model but we\ncan have manually created slots as well. So, it would be better to\nname it as binary_upgrade_logical_slot_has_caught_up().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Oct 2023 09:51:08 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath, Amit,\r\n\r\nThanks for reviewing! PSA new version.\r\nI addressed comments which have not been claimed.\r\n\r\n> On Mon, Oct 23, 2023 at 2:00 PM Bharath Rupireddy\r\n> <[email protected]> wrote:\r\n> >\r\n> > On Mon, Oct 23, 2023 at 11:10 AM Hayato Kuroda (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > Thank you for reviewing! PSA new version.\r\n> >\r\n> > > > 6. A nit: how about is_decodable_txn or is_decodable_change or some\r\n> > > > other instead of just a plain name processing_required?\r\n> > > > + /* Do we need to process any change in 'fast_forward' mode? */\r\n> > > > + bool processing_required;\r\n> > >\r\n> > > I preferred current one. Because not only decodable txn, non-txn change and\r\n> > > empty transactions also be processed.\r\n> >\r\n> > Right. It's not the txn, but the change. processing_required seems too\r\n> > generic IMV. A nit: is_change_decodable or something?\r\n> >\r\n> \r\n> If we don't want to keep it generic then we should use something like\r\n> 'contains_decodable_change'. 'is_change_decodable' could have suited\r\n> here if we were checking a particular change.\r\n\r\nI kept the name for now. How does Bharath think?\r\n\r\n> > Thanks for the patch. Here are few comments on v56 patch:\r\n> >\r\n> > 1.\r\n> > + *\r\n> > + * Although this function is currently used only during pg_upgrade, there are\r\n> > + * no reasons to restrict it, so IsBinaryUpgrade is not checked here.\r\n> >\r\n> > This comment isn't required IMV, because anyone looking at the code\r\n> > and callsites can understand it.\r\n\r\nRemoved.\r\n\r\n> > 2. A nit: IMV \"This is a special purpose ...\" statement seems redundant.\r\n> > + *\r\n> > + * This is a special purpose function to ensure that the given slot can be\r\n> > + * upgraded without data loss.\r\n> >\r\n> > How about\r\n> >\r\n> > Verify that the given replication slot has consumed all the WAL changes.\r\n> > If there's any decodable WAL record after the slot's\r\n> > confirmed_flush_lsn, the slot's consumer will lose that data after the\r\n> > slot is upgraded.\r\n> > Returns true if there are no decodable WAL records after the\r\n> > confirmed_flush_lsn. Otherwise false.\r\n> >\r\n> \r\n> Personally, I find the current comment succinct and clear.\r\n\r\nI kept current one.\r\n\r\n> > 3.\r\n> > + if (PG_ARGISNULL(0))\r\n> > + elog(ERROR, \"null argument to\r\n> > binary_upgrade_validate_wal_records is not allowed\");\r\n> >\r\n> > I can see the above style is referenced from\r\n> > binary_upgrade_create_empty_extension, but IMV the following looks\r\n> > better and latest (ereport is new style than elog)\r\n> >\r\n> > ereport(ERROR,\r\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> > errmsg(\"replication slot name cannot be null\")));\r\n> >\r\n> \r\n> Do you have any theory for making elog to ereport? I am not completely\r\n> sure but as this and related function is used internally, so using\r\n> elog seems reasonable. Also, I find keeping it consistent with the\r\n> existing error message is also reasonable. We can change both later\r\n> together if we get a broader agreement.\r\n\r\nI kept current style. elog() was used here because I regarded it as\r\n\"cannot happen\" error. According to the doc [1], elog() is still used\r\nfor the purpose.\r\n\r\n> > 4. The following comment seems frivolous, the code tells it all.\r\n> > Please remove the comment.\r\n> > +\r\n> > + /* No need to check this slot, seek to new one */\r\n> > + continue;\r\n\r\nRemoved.\r\n\r\n> > 5. 
A typo - s/gets/Gets\r\n> > + * gets the LogicalSlotInfos for all the logical replication slots of the\r\n\r\nReplaced.\r\n\r\n> > 6. An optimization in count_old_cluster_logical_slots(void): Turn\r\n> > slot_count to a function static variable so that the for loop isn't\r\n> > required every time because the slot count is prepared in\r\n> > get_old_cluster_logical_slot_infos only once and won't change later\r\n> > on. Do you see any problem with the following? This saves a few CPU\r\n> > cycles when there are large number of replication slots.\r\n> > {\r\n> > static int slot_count = 0;\r\n> > static bool first_time = true;\r\n> >\r\n> > if (first_time)\r\n> > {\r\n> > for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\r\n> > slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\r\n> >\r\n> > first_time = false;\r\n> > }\r\n> >\r\n> > return slot_count;\r\n> > }\r\n> >\r\n> \r\n> This may not be a problem but this is also not a function that will be\r\n> used frequently. I am not sure if adding such code optimizations is\r\n> worth it.\r\n\r\nNot addressed.\r\n\r\n> > 7. A typo: s/slotname/slot name. \"slot name\" looks better in user\r\n> > visible messages.\r\n> > + pg_log(PG_VERBOSE, \"slotname: \\\"%s\\\", plugin: \\\"%s\\\",\r\n> two_phase: %s\",\r\n> >\r\n> \r\n> If we want to follow other parameters then we can even use slot_name.\r\n\r\nChanged to slot_name.\r\n\r\nBelow part is replies for remained comments:\r\n\r\n>8.\r\n>+else\r\n>+{\r\n>+ test_upgrade_from_pre_PG17($old_publisher, $new_publisher,\r\n>+ @pg_upgrade_cmd);\r\n>+}\r\n>Will this ever be tested in current TAP test framework? I mean, will\r\n>the TAP test framework allow testing upgrades from one PG version to\r\n>another PG version?\r\n\r\nYes, the TAP tester allow to do cross-version upgrade. According to\r\nsrc/bin/pg_upgrade/TESTING file:\r\n\r\n```\r\nTesting an upgrade from a different PG version is also possible, and\r\nprovides a more thorough test that pg_upgrade does what it's meant for.\r\n```\r\n\r\nBelow commands are an example of the test.\r\n\r\n```\r\n# test PG9.5 -> patched HEAD\r\n$ oldinstall=/home/hayato/older/pg95 make check PROVE_TESTS='t/003_upgrade_logical_replication_slots.pl'\r\n...\r\n# +++ tap check in src/bin/pg_upgrade +++\r\nt/003_upgrade_logical_replication_slots.pl .. ok \r\nAll tests successful.\r\nFiles=1, Tests=3, 11 wallclock secs ( 0.03 usr 0.01 sys + 2.78 cusr 1.08 csys = 3.90 CPU)\r\nResult: PASS\r\n\r\n# grep the output and find an evidence that cross-version check was done\r\n$ cat tmp_check/log/regress_log_003_upgrade_logical_replication_slots | grep 'check the slot does not exist on new cluster'\r\n[05:14:22.322](0.139s) ok 3 - check the slot does not exist on new cluster\r\n\r\n```\r\n\r\n>9. A nit: Can single quotes around variable names in the comments be\r\n>removed just to be consistent?\r\n>+ * We also skip decoding in 'fast_forward' mode. This check must be last\r\n>+ /* Do we need to process any change in 'fast_forward' mode? */\r\n\r\nRemoved.\r\n\r\nAlso, based on a comment [2], the upgrade function was renamed to \r\n'binary_upgrade_logical_slot_has_caught_up'.\r\n\r\n[1]: https://www.postgresql.org/docs/devel/error-message-reporting.html\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1%2BYZP3j1H4ChhzSR23k6MPryW-cgGstyvqbek2CMJoHRA%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 24 Oct 2023 06:02:21 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 11:32 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > If we don't want to keep it generic then we should use something like\n> > 'contains_decodable_change'. 'is_change_decodable' could have suited\n> > here if we were checking a particular change.\n>\n> I kept the name for now. How does Bharath think?\n\nNo more bikeshedding from my side. +1 for processing_required as-is.\n\n> > > 6. An optimization in count_old_cluster_logical_slots(void): Turn\n> > > slot_count to a function static variable so that the for loop isn't\n> > > required every time because the slot count is prepared in\n> > > get_old_cluster_logical_slot_infos only once and won't change later\n> > > on. Do you see any problem with the following? This saves a few CPU\n> > > cycles when there are large number of replication slots.\n> > > {\n> > > static int slot_count = 0;\n> > > static bool first_time = true;\n> > >\n> > > if (first_time)\n> > > {\n> > > for (int dbnum = 0; dbnum < old_cluster.dbarr.ndbs; dbnum++)\n> > > slot_count += old_cluster.dbarr.dbs[dbnum].slot_arr.nslots;\n> > >\n> > > first_time = false;\n> > > }\n> > >\n> > > return slot_count;\n> > > }\n> > >\n> >\n> > This may not be a problem but this is also not a function that will be\n> > used frequently. I am not sure if adding such code optimizations is\n> > worth it.\n>\n> Not addressed.\n\ncount_old_cluster_logical_slots is being called 3 times during\npg_upgrade and every time counting number of slots for all the\ndatabases seems redundant IMV especially given the fact that the slot\ncount is computed once at the beginning and never changes. When the\nreplication slots on the cluster are on the higher side, every time\ncounting *may* prove costly. And, the use of static variables isn't a\nhuge change requiring a different set of infra or as such, it's a\nsimple pattern.\n\nHaving said above, if others don't see a merit in it, I'm okay to\nwithdraw my comment.\n\n> Below commands are an example of the test.\n>\n> ```\n> # test PG9.5 -> patched HEAD\n> $ oldinstall=/home/hayato/older/pg95 make check PROVE_TESTS='t/003_upgrade_logical_replication_slots.pl'\n\nOh, I get it. Thanks.\n\n> Also, based on a comment [2], the upgrade function was renamed to\n> 'binary_upgrade_logical_slot_has_caught_up'.\n\n+1.\n\nI spent some time on the v57 patch and it looks good to me - tests are\npassing, no complaints from pgindent and pgperltidy. I turned the CF\nentry https://commitfest.postgresql.org/45/4273/ to RfC.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 Oct 2023 13:20:02 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 1:20 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>\n> I spent some time on the v57 patch and it looks good to me - tests are\n> passing, no complaints from pgindent and pgperltidy. I turned the CF\n> entry https://commitfest.postgresql.org/45/4273/ to RfC.\n>\n\nThanks, the patch looks mostly good to me but I am not convinced of\nkeeping the tests across versions in this form. I don't think they are\ntested in BF, only one can manually create a setup to test. Shall we\nremove it for now and then consider it separately?\n\nApart from that, I have made minor modifications in the docs to adjust\nthe order of various prerequisites.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 25 Oct 2023 11:39:07 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nBased on your advice, I revised the patch again. \r\n\r\n> >\r\n> > I spent some time on the v57 patch and it looks good to me - tests are\r\n> > passing, no complaints from pgindent and pgperltidy. I turned the CF\r\n> > entry https://commitfest.postgresql.org/45/4273/ to RfC.\r\n> >\r\n> \r\n> Thanks, the patch looks mostly good to me but I am not convinced of\r\n> keeping the tests across versions in this form. I don't think they are\r\n> tested in BF, only one can manually create a setup to test.\r\n\r\nI analyzed and agreed that current BF client does not use TAP test framework\r\nfor cross-version checks.\r\n\r\n> Shall we\r\n> remove it for now and then consider it separately?\r\n\r\nOK, some parts for cross-checks were removed.\r\n\r\n> Apart from that, I have made minor modifications in the docs to adjust\r\n> the order of various prerequisites.\r\n\r\nThanks, included.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 25 Oct 2023 08:05:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 11:39 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Oct 24, 2023 at 1:20 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> >\n> > I spent some time on the v57 patch and it looks good to me - tests are\n> > passing, no complaints from pgindent and pgperltidy. I turned the CF\n> > entry https://commitfest.postgresql.org/45/4273/ to RfC.\n> >\n>\n> Thanks, the patch looks mostly good to me but I am not convinced of\n> keeping the tests across versions in this form. I don't think they are\n> tested in BF, only one can manually create a setup to test. Shall we\n> remove it for now and then consider it separately?\n\nI think we can retain the test_upgrade_from_pre_PG17 because it is not\nonly possible to trigger it manually but also one can write a CI\nworkflow to trigger it.\n\n> Apart from that, I have made minor modifications in the docs to adjust\n> the order of various prerequisites.\n\n+ <para>\n+ <application>pg_upgrade</application> attempts to migrate logical\n+ replication slots. This helps avoid the need for manually defining the\n+ same replication slots on the new publisher. Migration of logical\n+ replication slots is only supported when the old cluster is version 17.0\n+ or later. Logical replication slots on clusters before version 17.0 will\n+ silently be ignored.\n+ </para>\n\n+ The new cluster must not have permanent logical replication slots, i.e.,\n\nHow about using \"logical slots\" in place of \"logical replication\nslots\" to be more generic? We agreed and changed the function name to\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 25 Oct 2023 13:39:36 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 1:39 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Oct 25, 2023 at 11:39 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Oct 24, 2023 at 1:20 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > >\n> > > I spent some time on the v57 patch and it looks good to me - tests are\n> > > passing, no complaints from pgindent and pgperltidy. I turned the CF\n> > > entry https://commitfest.postgresql.org/45/4273/ to RfC.\n> > >\n> >\n> > Thanks, the patch looks mostly good to me but I am not convinced of\n> > keeping the tests across versions in this form. I don't think they are\n> > tested in BF, only one can manually create a setup to test. Shall we\n> > remove it for now and then consider it separately?\n>\n> I think we can retain the test_upgrade_from_pre_PG17 because it is not\n> only possible to trigger it manually but also one can write a CI\n> workflow to trigger it.\n>\n\nIt would be better to gauge its value separately and add it once the\nmain patch is committed. I am slightly unhappy even with the hack used\nfor pre-version testing in previous patch which is as follows:\n+# XXX: Older PG version had different rules for the inter-dependency of\n+# 'max_wal_senders' and 'max_connections', so assign values which will work for\n+# all PG versions. If Cluster.pm is fixed this code is not needed.\n+$old_publisher->append_conf(\n+ 'postgresql.conf', qq[\n+max_wal_senders = 5\n+max_connections = 10\n+]);\n\nThere should be a way to avoid this but we can decide it afterwards. I\ndon't want to hold the main patch for this point. What do you think?\n\n> > Apart from that, I have made minor modifications in the docs to adjust\n> > the order of various prerequisites.\n>\n> + <para>\n> + <application>pg_upgrade</application> attempts to migrate logical\n> + replication slots. This helps avoid the need for manually defining the\n> + same replication slots on the new publisher. Migration of logical\n> + replication slots is only supported when the old cluster is version 17.0\n> + or later. Logical replication slots on clusters before version 17.0 will\n> + silently be ignored.\n> + </para>\n>\n> + The new cluster must not have permanent logical replication slots, i.e.,\n>\n> How about using \"logical slots\" in place of \"logical replication\n> slots\" to be more generic? We agreed and changed the function name to\n>\n\nYeah, I am fine with that and I can take care of it before committing\nunless there is more to change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 25 Oct 2023 13:49:58 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 1:50 PM Amit Kapila <[email protected]> wrote:\n>\n> It would be better to gauge its value separately and add it once the\n> main patch is committed.\n> There should be a way to avoid this but we can decide it afterwards. I\n> don't want to hold the main patch for this point. What do you think?\n\n+1 to go with the main patch first. We also have another thing to take\ncare of - pg_upgrade option to not migrate logical slots.\n\n> > How about using \"logical slots\" in place of \"logical replication\n> > slots\" to be more generic? We agreed and changed the function name to\n> >\n>\n> Yeah, I am fine with that and I can take care of it before committing\n> unless there is more to change.\n\n+1. I have no other comments.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 25 Oct 2023 14:18:06 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "\r\nHi,\r\n\r\nThe BF animal fairywren[1] failed when testing\r\n003_upgrade_logical_replication_slots.pl.\r\n\r\nFrom the log, I can see pg_upgrade failed to open the\r\ninvalid_logical_replication_slots.txt:\r\n\r\n# Checking for valid logical replication slots \r\n# could not open file \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_upgrade_logical_replication_slots/data/t_003_upgrade_logical_replication_slots_new_publisher_data/pgdata/pg_upgrade_output.d/20231026T112558.309/invalid_logical_replication_slots.txt\": No such file or directory\r\n# Failure, exiting\r\n\r\nThe reason could be the length of this path(262) exceed the windows path\r\nlimit(260 IIRC). If so, I recall we fixed similar things before (e213de8e7) by\r\nreducing the path somehow.\r\n\r\nIn this case, I think one approach is to reduce the file and testname to\r\nxxx_logical_slots instead of xxx_logical_replication_slots. But we will analyze more\r\nand share fix soon.\r\n\r\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-10-26%2009%3A04%3A54\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Thu, 26 Oct 2023 14:41:03 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
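To make the arithmetic above concrete, here is a throwaway, standalone check of the failing path against the 260-character Windows limit; the string is the path from the fairywren log, and MAX_PATH is assumed to mirror the usual Windows value rather than being taken from <windows.h>:

```c
#include <stdio.h>
#include <string.h>

/* Assumed to match the Windows MAX_PATH constant discussed above. */
#define MAX_PATH 260

int
main(void)
{
	/* The path reported in the fairywren failure, split only for readability. */
	const char *path =
		"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/"
		"pg_upgrade/003_upgrade_logical_replication_slots/data/"
		"t_003_upgrade_logical_replication_slots_new_publisher_data/pgdata/"
		"pg_upgrade_output.d/20231026T112558.309/"
		"invalid_logical_replication_slots.txt";

	printf("path length = %zu, limit = %d\n", strlen(path), MAX_PATH);
	return 0;
}
```

Shortening the test name and the report file name, as proposed in the follow-up messages, is what brings the total back under the limit.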
{
"msg_contents": "On Thu, Oct 26, 2023 at 8:11 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> The BF animal fairywren[1] failed when testing\n> 003_upgrade_logical_replication_slots.pl.\n>\n> From the log, I can see pg_upgrade failed to open the\n> invalid_logical_replication_slots.txt:\n>\n> # Checking for valid logical replication slots\n> # could not open file \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_upgrade_logical_replication_slots/data/t_003_upgrade_logical_replication_slots_new_publisher_data/pgdata/pg_upgrade_output.d/20231026T112558.309/invalid_logical_replication_slots.txt\": No such file or directory\n> # Failure, exiting\n>\n> The reason could be the length of this path(262) exceed the windows path\n> limit(260 IIRC). If so, I recall we fixed similar things before (e213de8e7) by\n> reducing the path somehow.\n\nNice catch. Windows docs say that the file/directory path name can't\nexceed MAX_PATH, which is defined as 260 characters. However, one must\nopt-in to enable longer path names -\nhttps://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry\nand https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry#enable-long-paths-in-windows-10-version-1607-and-later.\n\n> In this case, I think one approach is to reduce the file and testname to\n> xxx_logical_slots instead of xxx_logical_replication_slots. But we will analyze more\n> and share fix soon.\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-10-26%2009%3A04%3A54\n\n+1 for s/003_upgrade_logical_replication_slots.pl/003_upgrade_logical_slots.pl\nand s/invalid_logical_replication_slots.txt/invalid_logical_slots.txt.\nIn fact, we've used \"logical slots\" instead of \"logical replication\nslots\" in the docs to be generic. By looking at the generated\ndirectory path name, I think we can use shorter node names - instead\nof old_publisher, new_publisher, subscriber - either use node1 (for\nold publisher), node2 (for subscriber), node3 (for new publisher) or\nuse alpha (for old publisher), bravo (for subscriber), charlie (for\nnew publisher) or such shorter names. We don't have to be that\ndescriptive and long in node names, one can look at the test file to\nknow which one is what.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 26 Oct 2023 20:56:06 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 2:26 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Oct 26, 2023 at 8:11 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > The BF animal fairywren[1] failed when testing\n> > 003_upgrade_logical_replication_slots.pl.\n> >\n> > From the log, I can see pg_upgrade failed to open the\n> > invalid_logical_replication_slots.txt:\n> >\n> > # Checking for valid logical replication slots\n> > # could not open file \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_upgrade_logical_replication_slots/data/t_003_upgrade_logical_replication_slots_new_publisher_data/pgdata/pg_upgrade_output.d/20231026T112558.309/invalid_logical_replication_slots.txt\": No such file or directory\n> > # Failure, exiting\n> >\n> > The reason could be the length of this path(262) exceed the windows path\n> > limit(260 IIRC). If so, I recall we fixed similar things before (e213de8e7) by\n> > reducing the path somehow.\n>\n> Nice catch. Windows docs say that the file/directory path name can't\n> exceed MAX_PATH, which is defined as 260 characters. However, one must\n> opt-in to enable longer path names -\n> https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry\n> and https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry#enable-long-paths-in-windows-10-version-1607-and-later.\n>\n> > In this case, I think one approach is to reduce the file and testname to\n> > xxx_logical_slots instead of xxx_logical_replication_slots. But we will analyze more\n> > and share fix soon.\n> >\n> > [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-10-26%2009%3A04%3A54\n>\n> +1 for s/003_upgrade_logical_replication_slots.pl/003_upgrade_logical_slots.pl\n> and s/invalid_logical_replication_slots.txt/invalid_logical_slots.txt.\n> In fact, we've used \"logical slots\" instead of \"logical replication\n> slots\" in the docs to be generic. By looking at the generated\n> directory path name, I think we can use shorter node names - instead\n> of old_publisher, new_publisher, subscriber - either use node1 (for\n> old publisher), node2 (for subscriber), node3 (for new publisher) or\n> use alpha (for old publisher), bravo (for subscriber), charlie (for\n> new publisher) or such shorter names. We don't have to be that\n> descriptive and long in node names, one can look at the test file to\n> know which one is what.\n>\n\nSome more ideas for shortening the filename:\n\n1. \"003_upgrade_logical_replication_slots.pl\" -- IMO the word\n\"upgrade\" is redundant in that filename (earlier patches never had\nthis). The test file lives under \"pg_upgrade/t\" so I felt that\nupgrading is already implied.\n\n2. If the node names will be shortened they should still retain *some*\nmeaning if possible:\nold_publisher/subscriber/new_publisher --> node1/node2/node3 (means\nnothing without studying the tests)\nold_publisher/subscriber/new_publisher --> alpha/bravo/charlie (means\nnothing without studying the tests)\nHow about:\nold_publisher/subscriber/new_publisher --> node_p1/node_s/node_p2\nor similar...\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 27 Oct 2023 08:57:57 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 3:28 AM Peter Smith <[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 2:26 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Thu, Oct 26, 2023 at 8:11 PM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > The BF animal fairywren[1] failed when testing\n> > > 003_upgrade_logical_replication_slots.pl.\n> > >\n> > > From the log, I can see pg_upgrade failed to open the\n> > > invalid_logical_replication_slots.txt:\n> > >\n> > > # Checking for valid logical replication slots\n> > > # could not open file \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_upgrade_logical_replication_slots/data/t_003_upgrade_logical_replication_slots_new_publisher_data/pgdata/pg_upgrade_output.d/20231026T112558.309/invalid_logical_replication_slots.txt\": No such file or directory\n> > > # Failure, exiting\n> > >\n> > > The reason could be the length of this path(262) exceed the windows path\n> > > limit(260 IIRC). If so, I recall we fixed similar things before (e213de8e7) by\n> > > reducing the path somehow.\n> >\n> > Nice catch. Windows docs say that the file/directory path name can't\n> > exceed MAX_PATH, which is defined as 260 characters. However, one must\n> > opt-in to enable longer path names -\n> > https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry\n> > and https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry#enable-long-paths-in-windows-10-version-1607-and-later.\n> >\n> > > In this case, I think one approach is to reduce the file and testname to\n> > > xxx_logical_slots instead of xxx_logical_replication_slots. But we will analyze more\n> > > and share fix soon.\n> > >\n> > > [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-10-26%2009%3A04%3A54\n> >\n> > +1 for s/003_upgrade_logical_replication_slots.pl/003_upgrade_logical_slots.pl\n> > and s/invalid_logical_replication_slots.txt/invalid_logical_slots.txt.\n\n+1. The proposed file name sounds reasonable.\n\n> > In fact, we've used \"logical slots\" instead of \"logical replication\n> > slots\" in the docs to be generic. By looking at the generated\n> > directory path name, I think we can use shorter node names - instead\n> > of old_publisher, new_publisher, subscriber - either use node1 (for\n> > old publisher), node2 (for subscriber), node3 (for new publisher) or\n> > use alpha (for old publisher), bravo (for subscriber), charlie (for\n> > new publisher) or such shorter names. We don't have to be that\n> > descriptive and long in node names, one can look at the test file to\n> > know which one is what.\n> >\n>\n> Some more ideas for shortening the filename:\n>\n> 1. \"003_upgrade_logical_replication_slots.pl\" -- IMO the word\n> \"upgrade\" is redundant in that filename (earlier patches never had\n> this). The test file lives under \"pg_upgrade/t\" so I felt that\n> upgrading is already implied.\n>\n\nAgreed. So, how about 003_upgrade_logical_slots.pl or simply\n003_upgrade_slots.pl?\n\n> 2. 
If the node names will be shortened they should still retain *some*\n> meaning if possible:\n> old_publisher/subscriber/new_publisher --> node1/node2/node3 (means\n> nothing without studying the tests)\n> old_publisher/subscriber/new_publisher --> alpha/bravo/charlie (means\n> nothing without studying the tests)\n> How about:\n> old_publisher/subscriber/new_publisher --> node_p1/node_s/node_p2\n> or similar...\n>\n\nWhy not simply oldpub/sub/newpub or old_pub/sub/new_pub?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 27 Oct 2023 08:06:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 8:06 AM Amit Kapila <[email protected]> wrote:\n>\n> > > +1 for s/003_upgrade_logical_replication_slots.pl/003_upgrade_logical_slots.pl\n> > > and s/invalid_logical_replication_slots.txt/invalid_logical_slots.txt.\n>\n> +1. The proposed file name sounds reasonable.\n>\n> Agreed. So, how about 003_upgrade_logical_slots.pl or simply\n> 003_upgrade_slots.pl?\n>\n> Why not simply oldpub/sub/newpub or old_pub/sub/new_pub?\n\n+1 for invalid_logical_slots.txt, 003_upgrade_logical_slots.pl and\noldpub/sub/newpub. With these changes, the path name is brought down\nto ~220 chars. These names look good to me iff other things in the\npath name aren't dynamic crossing MAX_PATH limit (260 chars).\n\nC:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_upgrade_logical_slots/data/t_003_upgrade_logical_slots_newpub_data/pgdata/pg_upgrade_output.d/20231026T112558.309/invalid_logical_slots.txt\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Oct 2023 08:37:35 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Hou,\r\n\r\n> The BF animal fairywren[1] failed when testing\r\n> 003_upgrade_logical_replication_slots.pl.\r\n\r\nGood catch!\r\n\r\n> \r\n> The reason could be the length of this path(262) exceed the windows path\r\n> limit(260 IIRC). If so, I recall we fixed similar things before (e213de8e7) by\r\n> reducing the path somehow.\r\n\r\nYeah, Bharath has already reported, I agreed that the reason was [1]. \r\n\r\n```\r\nIn the Windows API (with some exceptions discussed in the following paragraphs),\r\nthe maximum length for a path is MAX_PATH, which is defined as 260 characters.\r\n```\r\n\r\n> In this case, I think one approach is to reduce the file and testname to\r\n> xxx_logical_slots instead of xxx_logical_replication_slots. But we will analyze\r\n> more\r\n> and share fix soon.\r\n>\r\n\r\nHere is a patch for fixing to 003_logical_slots. Also, I got a comment off list so that it was included.\r\n\r\n```\r\n-# Setup a pg_upgrade command. This will be used anywhere.\r\n+# Setup a common pg_upgrade command to be used by all the test cases\r\n```\r\n\r\n[1]: https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 27 Oct 2023 04:40:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath, Amit, Peter,\r\n\r\nThank you for discussing! A patch can be available in [1].\r\n\r\n> > > > +1 for\r\n> s/003_upgrade_logical_replication_slots.pl/003_upgrade_logical_slots.pl\r\n> > > > and s/invalid_logical_replication_slots.txt/invalid_logical_slots.txt.\r\n> >\r\n> > +1. The proposed file name sounds reasonable.\r\n> >\r\n> > Agreed. So, how about 003_upgrade_logical_slots.pl or simply\r\n> > 003_upgrade_slots.pl?\r\n> >\r\n> > Why not simply oldpub/sub/newpub or old_pub/sub/new_pub?\r\n> \r\n> +1 for invalid_logical_slots.txt, 003_upgrade_logical_slots.pl and\r\n> oldpub/sub/newpub. With these changes, the path name is brought down\r\n> to ~220 chars. These names look good to me iff other things in the\r\n> path name aren't dynamic crossing MAX_PATH limit (260 chars).\r\n> \r\n> C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgra\r\n> de/003_upgrade_logical_slots/data/t_003_upgrade_logical_slots_newpub_data/\r\n> pgdata/pg_upgrade_output.d/20231026T112558.309/invalid_logical_slots.txt\r\n\r\nReplaced to invalid_logical_slots.txt, 003_logical_slots.pl, and oldpub/sub/newpub.\r\nRegarding the test finename, some client app (e.g., pg_ctl) does not have a prefix,\r\nand some others (e.g., pg_dump) have. Either way seems acceptable.\r\nHence I chose to remove the header.\r\n\r\n```\r\n$ ls pg_ctl/t/\r\n001_start_stop.pl 002_status.pl 003_promote.pl 004_logrotate.pl\r\n\r\n$ ls pg_dump/t/\r\n001_basic.pl 002_pg_dump.pl 003_pg_dump_with_server.pl 004_pg_dump_parallel.pl 010_dump_connstr.pl\r\n```\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB5870A6A8FBB23554EDE8F5F3F5DCA%40TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 27 Oct 2023 04:41:35 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 04:40:43AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> Yeah, Bharath has already reported, I agreed that the reason was [1]. \n> \n> ```\n> In the Windows API (with some exceptions discussed in the following paragraphs),\n> the maximum length for a path is MAX_PATH, which is defined as 260 characters.\n> ```\n\n- \"invalid_logical_replication_slots.txt\");\n+ \"invalid_logical_slots.txt\");\n\nOr you could do something even shorter, with \"invalid_slots.txt\".\n--\nMichael",
"msg_date": "Fri, 27 Oct 2023 14:13:49 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 10:43 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 04:40:43AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> > Yeah, Bharath has already reported, I agreed that the reason was [1].\n> >\n> > ```\n> > In the Windows API (with some exceptions discussed in the following paragraphs),\n> > the maximum length for a path is MAX_PATH, which is defined as 260 characters.\n> > ```\n>\n> - \"invalid_logical_replication_slots.txt\");\n> + \"invalid_logical_slots.txt\");\n>\n> Or you could do something even shorter, with \"invalid_slots.txt\".\n>\n\nI also thought of it but if we want to keep it that way, we should\nslightly adjust the messages like: \"The slot \\\"%s\\\" is invalid\" to\ninclude slot_type. This will contain only logical slots, so the\ncurrent one probably seems okay.\n\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 27 Oct 2023 11:09:01 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Michael,\n\n> Or you could do something even shorter, with \"invalid_slots.txt\".\n\nI think current one seems better, because we only support logical replication\nslots for now. We can extend as you said when we support physical slot as well.\nAlso, proposed length is sufficient for fairywren [1].\n\n[1]: https://www.postgresql.org/message-id/CALj2ACVc-WSx_fvfynt-G3j8rjhNTMZ8DHu2wiKgCEiV9EO86g%40mail.gmail.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 27 Oct 2023 05:49:21 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 11:09 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 10:43 AM Michael Paquier <[email protected]> wrote:\n> >\n> > - \"invalid_logical_replication_slots.txt\");\n> > + \"invalid_logical_slots.txt\");\n> >\n> > Or you could do something even shorter, with \"invalid_slots.txt\".\n> >\n>\n> I also thought of it but if we want to keep it that way, we should\n> slightly adjust the messages like: \"The slot \\\"%s\\\" is invalid\" to\n> include slot_type. This will contain only logical slots, so the\n> current one probably seems okay.\n\n+1 for invalid_logical_slots.txt as file name (which can fix Windows\npath name issue) and contents as-is \"The slot \\\"%s\\\" is invalid\\n\" and\n\"The slot \\\"%s\\\" has not consumed the WAL yet\\n\".\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Oct 2023 11:20:17 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 10:10 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is a patch for fixing to 003_logical_slots. Also, I got a comment off list so that it was included.\n>\n> ```\n> -# Setup a pg_upgrade command. This will be used anywhere.\n> +# Setup a common pg_upgrade command to be used by all the test cases\n> ```\n\nThe patch LGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Oct 2023 11:23:00 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 11:24 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 10:10 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Here is a patch for fixing to 003_logical_slots. Also, I got a comment off list so that it was included.\n> >\n> > ```\n> > -# Setup a pg_upgrade command. This will be used anywhere.\n> > +# Setup a common pg_upgrade command to be used by all the test cases\n> > ```\n>\n> The patch LGTM.\n>\n\nThanks, I'll push it in some time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 27 Oct 2023 11:27:19 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\nI found several machines on BF got angry (e.g. [1]), because of missing update meson.build. Sorry for that.\r\nPSA the patch to fix it.\r\n\r\n[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2023-10-27%2006%3A08%3A31\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 27 Oct 2023 07:44:15 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\nPSA the patch to solve the issue [1].\r\n\r\nKindly Peter E. and Andrew raised an issue that delete_old_cluster.sh is\r\ngenerated in the source directory, even when the VPATH/meson build.\r\nThis can avoid by changing the directory explicitly.\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/7b8a9460-5668-b372-04e6-7b52e9308493%40dunslane.net#554090099bbbd12c94bf570665a6badf\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 7 Nov 2023 04:14:25 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 3:14 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> PSA the patch to solve the issue [1].\n>\n> Kindly Peter E. and Andrew raised an issue that delete_old_cluster.sh is\n> generated in the source directory, even when the VPATH/meson build.\n> This can avoid by changing the directory explicitly.\n>\n\nHi Kuroda-san,\n\nThanks for the patch.\n\nI reproduced the bug, then after applying your patch, I confirmed the\nproblem is fixed. I used the VPATH build\n\n~~~\n\nBEFORE\nt/001_basic.pl .......... ok\nt/002_pg_upgrade.pl ..... ok\nt/003_logical_slots.pl .. ok\nAll tests successful.\nFiles=3, Tests=39, 128 wallclock secs ( 0.05 usr 0.01 sys + 12.90\ncusr 7.43 csys = 20.39 CPU)\nResult: PASS\n\nOBSERVE THE BUG\nLook in the source folder and notice the file that should not be there.\n\n[postgres@CentOS7-x64 pg_upgrade]$ pwd\n/home/postgres/oss_postgres_misc/src/bin/pg_upgrade\n[postgres@CentOS7-x64 pg_upgrade]$ ls *.sh\ndelete_old_cluster.sh\n\n~~~\n\nAFTER\n# +++ tap check in src/bin/pg_upgrade +++\nt/001_basic.pl .......... ok\nt/002_pg_upgrade.pl ..... ok\nt/003_logical_slots.pl .. ok\nAll tests successful.\nFiles=3, Tests=39, 128 wallclock secs ( 0.06 usr 0.01 sys + 13.02\ncusr 7.28 csys = 20.37 CPU)\nResult: PASS\n\nCONFIRM THE FIX\nCheck the offending file is no longer in the src folder\n\n[postgres@CentOS7-x64 pg_upgrade]$ pwd\n/home/postgres/oss_postgres_misc/src/bin/pg_upgrade\n[postgres@CentOS7-x64 pg_upgrade]$ ls *.sh\nls: cannot access *.sh: No such file or directory\n\nInstead, it is found in the VPATH folder\n[postgres@CentOS7-x64 pg_upgrade]$ pwd\n/home/postgres/vpath_dir/src/bin/pg_upgrade\n[postgres@CentOS7-x64 pg_upgrade]$ ls tmp_check/\ndelete_old_cluster.sh log results\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 7 Nov 2023 15:23:28 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tuesday, November 7, 2023 12:14 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> \r\n> Dear hackers,\r\n> \r\n> PSA the patch to solve the issue [1].\r\n> \r\n> Kindly Peter E. and Andrew raised an issue that delete_old_cluster.sh is\r\n> generated in the source directory, even when the VPATH/meson build.\r\n> This can avoid by changing the directory explicitly.\r\n> \r\n> [1]:\r\n> https://www.postgresql.org/message-id/flat/7b8a9460-5668-b372-04e6-7b\r\n> 52e9308493%40dunslane.net#554090099bbbd12c94bf570665a6badf\r\n\r\nThanks for the patch, I have confirmed that the files won't be generated\r\nin source directory after applying the patch.\r\n\r\nAfter running: \"meson test -C build/ --suite pg_upgrade\",\r\nThe files are in the test directory:\r\n./build/testrun/pg_upgrade/003_logical_slots/data/delete_old_cluster.sh\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n",
"msg_date": "Tue, 7 Nov 2023 04:30:59 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Nov 7, 2023 at 10:01 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Tuesday, November 7, 2023 12:14 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\n> >\n> > Dear hackers,\n> >\n> > PSA the patch to solve the issue [1].\n> >\n> > Kindly Peter E. and Andrew raised an issue that delete_old_cluster.sh is\n> > generated in the source directory, even when the VPATH/meson build.\n> > This can avoid by changing the directory explicitly.\n> >\n> > [1]:\n> > https://www.postgresql.org/message-id/flat/7b8a9460-5668-b372-04e6-7b\n> > 52e9308493%40dunslane.net#554090099bbbd12c94bf570665a6badf\n>\n> Thanks for the patch, I have confirmed that the files won't be generated\n> in source directory after applying the patch.\n>\n> After running: \"meson test -C build/ --suite pg_upgrade\",\n> The files are in the test directory:\n> ./build/testrun/pg_upgrade/003_logical_slots/data/delete_old_cluster.sh\n>\n\nThanks for the patch and verification. Pushed the fix.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Nov 2023 13:25:33 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, 7 Nov 2023 at 13:25, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Nov 7, 2023 at 10:01 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > On Tuesday, November 7, 2023 12:14 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\n> > >\n> > > Dear hackers,\n> > >\n> > > PSA the patch to solve the issue [1].\n> > >\n> > > Kindly Peter E. and Andrew raised an issue that delete_old_cluster.sh is\n> > > generated in the source directory, even when the VPATH/meson build.\n> > > This can avoid by changing the directory explicitly.\n> > >\n> > > [1]:\n> > > https://www.postgresql.org/message-id/flat/7b8a9460-5668-b372-04e6-7b\n> > > 52e9308493%40dunslane.net#554090099bbbd12c94bf570665a6badf\n> >\n> > Thanks for the patch, I have confirmed that the files won't be generated\n> > in source directory after applying the patch.\n> >\n> > After running: \"meson test -C build/ --suite pg_upgrade\",\n> > The files are in the test directory:\n> > ./build/testrun/pg_upgrade/003_logical_slots/data/delete_old_cluster.sh\n> >\n>\n> Thanks for the patch and verification. Pushed the fix.\n\nWhile verifying upgrade of subscriber patch, I found one issue with\nupgrade in verbose mode.\nI was able to reproduce this issue by performing a upgrade with a\nverbose option.\n\nThe trace for the same is given below:\nProgram received signal SIGSEGV, Segmentation fault.\n__strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n126 ../sysdeps/x86_64/multiarch/strlen-vec.S: No such file or directory.\n(gdb) bt\n#0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n#1 0x000055555556f572 in dopr (target=0x7fffffffbb90,\nformat=0x55555557859e \"\\\", plugin: \\\"%s\\\", two_phase: %s\",\nargs=0x7fffffffdc40) at snprintf.c:444\n#2 0x000055555556ed95 in pg_vsnprintf (str=0x7fffffffbc10 \"slot_name:\n\\\"ication slots within the database:\", count=8192, fmt=0x555555578590\n\"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n args=0x7fffffffdc40) at snprintf.c:195\n#3 0x00005555555667e3 in pg_log_v (type=PG_VERBOSE,\nfmt=0x555555578590 \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\nap=0x7fffffffdc40) at util.c:184\n#4 0x0000555555566b38 in pg_log (type=PG_VERBOSE, fmt=0x555555578590\n\"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\") at util.c:264\n#5 0x0000555555561a06 in print_slot_infos (slot_arr=0x555555595ed0)\nat info.c:813\n#6 0x000055555556186e in print_db_infos (db_arr=0x555555587518\n<new_cluster+120>) at info.c:782\n#7 0x00005555555606da in get_db_rel_and_slot_infos\n(cluster=0x5555555874a0 <new_cluster>, live_check=false) at info.c:308\n#8 0x000055555555839a in check_new_cluster () at check.c:215\n#9 0x0000555555563010 in main (argc=13, argv=0x7fffffffdf08) at\npg_upgrade.c:136\n\nThis issue occurs because we are accessing uninitialized slot array information.\n\nWe could fix it by a couple of ways: a) Initialize the whole of\ndbinfos by using pg_malloc0 instead of pg_malloc which will ensure\nthat the slot information is set to 0. b) Setting only slot\ninformation. Attached patch has the changes for both the approaches.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Wed, 8 Nov 2023 08:43:50 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
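A minimal sketch of the two fixes floated in the report above, assuming the dbarr/slot_arr field names used elsewhere in the thread (the names in the actual patch may differ slightly):

```c
/*
 * Approach (a): allocate the DbInfo array zeroed, so slot_arr.nslots is 0
 * and the slot pointer is NULL until get_old_cluster_logical_slot_infos()
 * fills them in. pg_malloc0() is the zeroing variant of pg_malloc().
 */
db_arr->dbs = (DbInfo *) pg_malloc0(sizeof(DbInfo) * db_arr->ndbs);

/*
 * Approach (b): keep pg_malloc() and clear only the slot fields explicitly
 * for every database entry.
 */
for (int dbnum = 0; dbnum < db_arr->ndbs; dbnum++)
{
	db_arr->dbs[dbnum].slot_arr.slots = NULL;
	db_arr->dbs[dbnum].slot_arr.nslots = 0;
}
```

Either way, print_slot_infos() then sees an empty slot array for the new cluster instead of reading uninitialized memory.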
{
"msg_contents": "On Wed, Nov 8, 2023 at 8:44 AM vignesh C <[email protected]> wrote:\n>\n> While verifying upgrade of subscriber patch, I found one issue with\n> upgrade in verbose mode.\n> I was able to reproduce this issue by performing a upgrade with a\n> verbose option.\n>\n> The trace for the same is given below:\n> Program received signal SIGSEGV, Segmentation fault.\n> __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n> 126 ../sysdeps/x86_64/multiarch/strlen-vec.S: No such file or directory.\n> (gdb) bt\n> #0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n> #1 0x000055555556f572 in dopr (target=0x7fffffffbb90,\n> format=0x55555557859e \"\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> args=0x7fffffffdc40) at snprintf.c:444\n> #2 0x000055555556ed95 in pg_vsnprintf (str=0x7fffffffbc10 \"slot_name:\n> \\\"ication slots within the database:\", count=8192, fmt=0x555555578590\n> \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> args=0x7fffffffdc40) at snprintf.c:195\n> #3 0x00005555555667e3 in pg_log_v (type=PG_VERBOSE,\n> fmt=0x555555578590 \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> ap=0x7fffffffdc40) at util.c:184\n> #4 0x0000555555566b38 in pg_log (type=PG_VERBOSE, fmt=0x555555578590\n> \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\") at util.c:264\n> #5 0x0000555555561a06 in print_slot_infos (slot_arr=0x555555595ed0)\n> at info.c:813\n> #6 0x000055555556186e in print_db_infos (db_arr=0x555555587518\n> <new_cluster+120>) at info.c:782\n> #7 0x00005555555606da in get_db_rel_and_slot_infos\n> (cluster=0x5555555874a0 <new_cluster>, live_check=false) at info.c:308\n> #8 0x000055555555839a in check_new_cluster () at check.c:215\n> #9 0x0000555555563010 in main (argc=13, argv=0x7fffffffdf08) at\n> pg_upgrade.c:136\n>\n> This issue occurs because we are accessing uninitialized slot array information.\n>\n\nThanks for the report. I'll review your proposed fix.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 09:24:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 8:44 AM vignesh C <[email protected]> wrote:\n>\n> While verifying upgrade of subscriber patch, I found one issue with\n> upgrade in verbose mode.\n> I was able to reproduce this issue by performing a upgrade with a\n> verbose option.\n>\n> The trace for the same is given below:\n> Program received signal SIGSEGV, Segmentation fault.\n> __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n> 126 ../sysdeps/x86_64/multiarch/strlen-vec.S: No such file or directory.\n> (gdb) bt\n> #0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n> #1 0x000055555556f572 in dopr (target=0x7fffffffbb90,\n> format=0x55555557859e \"\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> args=0x7fffffffdc40) at snprintf.c:444\n> #2 0x000055555556ed95 in pg_vsnprintf (str=0x7fffffffbc10 \"slot_name:\n> \\\"ication slots within the database:\", count=8192, fmt=0x555555578590\n> \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> args=0x7fffffffdc40) at snprintf.c:195\n> #3 0x00005555555667e3 in pg_log_v (type=PG_VERBOSE,\n> fmt=0x555555578590 \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> ap=0x7fffffffdc40) at util.c:184\n> #4 0x0000555555566b38 in pg_log (type=PG_VERBOSE, fmt=0x555555578590\n> \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\") at util.c:264\n> #5 0x0000555555561a06 in print_slot_infos (slot_arr=0x555555595ed0)\n> at info.c:813\n> #6 0x000055555556186e in print_db_infos (db_arr=0x555555587518\n> <new_cluster+120>) at info.c:782\n> #7 0x00005555555606da in get_db_rel_and_slot_infos\n> (cluster=0x5555555874a0 <new_cluster>, live_check=false) at info.c:308\n> #8 0x000055555555839a in check_new_cluster () at check.c:215\n> #9 0x0000555555563010 in main (argc=13, argv=0x7fffffffdf08) at\n> pg_upgrade.c:136\n>\n> This issue occurs because we are accessing uninitialized slot array information.\n>\n> We could fix it by a couple of ways: a) Initialize the whole of\n> dbinfos by using pg_malloc0 instead of pg_malloc which will ensure\n> that the slot information is set to 0.\n>\n\nI would prefer this fix instead of initializing the slot array at\nmultiple places. I'll push this tomorrow unless someone thinks\notherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 14:08:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, 8 Nov 2023 at 08:43, vignesh C <[email protected]> wrote:\n>\n> On Tue, 7 Nov 2023 at 13:25, Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Nov 7, 2023 at 10:01 AM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > On Tuesday, November 7, 2023 12:14 PM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\n> > > >\n> > > > Dear hackers,\n> > > >\n> > > > PSA the patch to solve the issue [1].\n> > > >\n> > > > Kindly Peter E. and Andrew raised an issue that delete_old_cluster.sh is\n> > > > generated in the source directory, even when the VPATH/meson build.\n> > > > This can avoid by changing the directory explicitly.\n> > > >\n> > > > [1]:\n> > > > https://www.postgresql.org/message-id/flat/7b8a9460-5668-b372-04e6-7b\n> > > > 52e9308493%40dunslane.net#554090099bbbd12c94bf570665a6badf\n> > >\n> > > Thanks for the patch, I have confirmed that the files won't be generated\n> > > in source directory after applying the patch.\n> > >\n> > > After running: \"meson test -C build/ --suite pg_upgrade\",\n> > > The files are in the test directory:\n> > > ./build/testrun/pg_upgrade/003_logical_slots/data/delete_old_cluster.sh\n> > >\n> >\n> > Thanks for the patch and verification. Pushed the fix.\n>\n> While verifying upgrade of subscriber patch, I found one issue with\n> upgrade in verbose mode.\n> I was able to reproduce this issue by performing a upgrade with a\n> verbose option.\n>\n> The trace for the same is given below:\n> Program received signal SIGSEGV, Segmentation fault.\n> __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n> 126 ../sysdeps/x86_64/multiarch/strlen-vec.S: No such file or directory.\n> (gdb) bt\n> #0 __strlen_sse2 () at ../sysdeps/x86_64/multiarch/strlen-vec.S:126\n> #1 0x000055555556f572 in dopr (target=0x7fffffffbb90,\n> format=0x55555557859e \"\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> args=0x7fffffffdc40) at snprintf.c:444\n> #2 0x000055555556ed95 in pg_vsnprintf (str=0x7fffffffbc10 \"slot_name:\n> \\\"ication slots within the database:\", count=8192, fmt=0x555555578590\n> \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> args=0x7fffffffdc40) at snprintf.c:195\n> #3 0x00005555555667e3 in pg_log_v (type=PG_VERBOSE,\n> fmt=0x555555578590 \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> ap=0x7fffffffdc40) at util.c:184\n> #4 0x0000555555566b38 in pg_log (type=PG_VERBOSE, fmt=0x555555578590\n> \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\") at util.c:264\n> #5 0x0000555555561a06 in print_slot_infos (slot_arr=0x555555595ed0)\n> at info.c:813\n> #6 0x000055555556186e in print_db_infos (db_arr=0x555555587518\n> <new_cluster+120>) at info.c:782\n> #7 0x00005555555606da in get_db_rel_and_slot_infos\n> (cluster=0x5555555874a0 <new_cluster>, live_check=false) at info.c:308\n> #8 0x000055555555839a in check_new_cluster () at check.c:215\n> #9 0x0000555555563010 in main (argc=13, argv=0x7fffffffdf08) at\n> pg_upgrade.c:136\n>\n> This issue occurs because we are accessing uninitialized slot array information.\n>\n> We could fix it by a couple of ways: a) Initialize the whole of\n> dbinfos by using pg_malloc0 instead of pg_malloc which will ensure\n> that the slot information is set to 0. b) Setting only slot\n> information. Attached patch has the changes for both the approaches.\n> Thoughts?\n\nHere is a small improvisation where num_slots need not be initialized\nas it will be used only after assigning the result now. 
The attached\npatch has the changes for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 8 Nov 2023 23:05:24 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
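The crash described in the message above comes down to print_slot_infos() walking slot data that was never initialized. As a rough, self-contained illustration of approach (a) from that message (zero-initializing the whole array so untouched slot fields read as empty), here is a sketch; the struct and field names are simplified stand-ins rather than pg_upgrade's actual definitions, and calloc() plays the role of pg_malloc0():

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified, hypothetical stand-ins for pg_upgrade's structures. */
typedef struct
{
	char  **slot_names;		/* NULL until slot info is fetched */
	int		nslots;			/* 0 until slot info is fetched */
} LogicalSlotInfoArr;

typedef struct
{
	char				db_name[64];
	LogicalSlotInfoArr	slot_arr;
} DbInfo;

int
main(void)
{
	int		ndbs = 3;

	/*
	 * Approach (a): allocate with a zeroing allocator (pg_malloc0 in
	 * pg_upgrade, calloc here), so every slot_arr starts out with
	 * nslots = 0 and slot_names = NULL.  Code that prints slot details
	 * can then loop over nslots safely even for databases whose slots
	 * were never queried.
	 */
	DbInfo *dbs = calloc(ndbs, sizeof(DbInfo));

	for (int i = 0; i < ndbs; i++)
		printf("db %d: %d slot(s)\n", i, dbs[i].slot_arr.nslots);

	free(dbs);
	return 0;
}
```

With a plain, non-zeroing allocation instead, nslots and slot_names would hold whatever happened to be on the heap, which is exactly the situation the strlen() backtrace above shows.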
{
"msg_contents": "On Wed, Nov 8, 2023 at 11:05 PM vignesh C <[email protected]> wrote:\n>\n> On Wed, 8 Nov 2023 at 08:43, vignesh C <[email protected]> wrote:\n>\n> Here is a small improvisation where num_slots need not be initialized\n> as it will be used only after assigning the result now. The attached\n> patch has the changes for the same.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Nov 2023 15:36:56 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 5:07 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Nov 8, 2023 at 11:05 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 8 Nov 2023 at 08:43, vignesh C <[email protected]> wrote:\n> >\n> > Here is a small improvisation where num_slots need not be initialized\n> > as it will be used only after assigning the result now. The attached\n> > patch has the changes for the same.\n> >\n>\n> Pushed!\n\nHi all, the CF entry for this is marked RfC, and CI is trying to apply\nthe last patch committed. Is there further work that needs to be\nre-attached and/or rebased?\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:59:59 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Nov 22, 2023 at 1:30 PM John Naylor <[email protected]> wrote:\n>\n> On Thu, Nov 9, 2023 at 5:07 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Nov 8, 2023 at 11:05 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On Wed, 8 Nov 2023 at 08:43, vignesh C <[email protected]> wrote:\n> > >\n> > > Here is a small improvisation where num_slots need not be initialized\n> > > as it will be used only after assigning the result now. The attached\n> > > patch has the changes for the same.\n> > >\n> >\n> > Pushed!\n>\n> Hi all, the CF entry for this is marked RfC, and CI is trying to apply\n> the last patch committed. Is there further work that needs to be\n> re-attached and/or rebased?\n>\n\nNo. I have marked it as committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Nov 2023 14:17:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Nov 9, 2023 at 7:07 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Nov 8, 2023 at 11:05 PM vignesh C <[email protected]> wrote:\n> >\n> > On Wed, 8 Nov 2023 at 08:43, vignesh C <[email protected]> wrote:\n> >\n> > Here is a small improvisation where num_slots need not be initialized\n> > as it will be used only after assigning the result now. The attached\n> > patch has the changes for the same.\n> >\n>\n> Pushed!\n>\n\nThank you for your work on this feature!\n\nOne month has already been passed since this main patch got committed\nbut reading this change, I have some questions on new\nbinary_upgrade_logical_slot_has_caught_up() function:\n\nIs there any reason why this function can be executed only in binary\nupgrade mode? It seems to me that other functions in\npg_upgrade_support.c must be called only in binary upgrade mode\nbecause it does some hacky changes internally. On the other hand,\nbinary_upgrade_logical_slot_has_caught_up() just calls\nLogicalReplicationSlotHasPendingWal(), which doesn't change anything\ninternally. If we make this function usable in normal mode, the user\nwould be able to check each slot's upgradability without pg_upgrade\n--check command (or without stopping the server if the user can ensure\nno more meaningful WAL records are generated).\n\n---\nAlso, the function checks if the user has the REPLICATION privilege\nbut I think that only superuser can connect to the server in binary\nupgrade mode in the first place.\n\n---\nThe following error message doesn't match the function name:\n\n /* We must check before dereferencing the argument */\n if (PG_ARGISNULL(0))\n elog(ERROR, \"null argument to\nbinary_upgrade_validate_wal_records is not allowed\");\n\n---\n{ oid => '8046', descr => 'for use by pg_upgrade',\n proname => 'binary_upgrade_logical_slot_has_caught_up', proisstrict => 'f',\n provolatile => 'v', proparallel => 'u', prorettype => 'bool',\n proargtypes => 'name',\n prosrc => 'binary_upgrade_logical_slot_has_caught_up' },\n\nThe function is not a strict function but we check in the function if\nthe passed argument is not null. I think it would be clearer to make\nit a strict function.\n\n---\nLogicalReplicationSlotHasPendingWal() is defined in logical.c but I\nguess it's more suitable to be in slotfunc.s where similar functions\nsuch as pg_logical_replication_slot_advance() is also defined.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Nov 2023 14:35:23 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 11:06 AM Masahiko Sawada <[email protected]> wrote:\n>\n> One month has already been passed since this main patch got committed\n> but reading this change, I have some questions on new\n> binary_upgrade_logical_slot_has_caught_up() function:\n>\n> Is there any reason why this function can be executed only in binary\n> upgrade mode? It seems to me that other functions in\n> pg_upgrade_support.c must be called only in binary upgrade mode\n> because it does some hacky changes internally. On the other hand,\n> binary_upgrade_logical_slot_has_caught_up() just calls\n> LogicalReplicationSlotHasPendingWal(), which doesn't change anything\n> internally. If we make this function usable in normal mode, the user\n> would be able to check each slot's upgradability without pg_upgrade\n> --check command (or without stopping the server if the user can ensure\n> no more meaningful WAL records are generated).\n\nIt may happen that such a user-facing function tells there's no\nunconsumed WAL, but later on the WAL gets generated during pg_upgrade.\nTherefore, the information the function gives turns out to be\nincorrect. I don't see a real-world use-case for such a function right\nnow. If there's one, it's not a big change to turn it into a\nuser-facing function.\n\n> ---\n> Also, the function checks if the user has the REPLICATION privilege\n> but I think that only superuser can connect to the server in binary\n> upgrade mode in the first place.\n\nIf that were true, I don't see a problem in having\nCheckSlotPermissions() there, in fact it can act as an assertion.\n\n> ---\n> The following error message doesn't match the function name:\n>\n> /* We must check before dereferencing the argument */\n> if (PG_ARGISNULL(0))\n> elog(ERROR, \"null argument to\n> binary_upgrade_validate_wal_records is not allowed\");\n>\n> ---\n> { oid => '8046', descr => 'for use by pg_upgrade',\n> proname => 'binary_upgrade_logical_slot_has_caught_up', proisstrict => 'f',\n> provolatile => 'v', proparallel => 'u', prorettype => 'bool',\n> proargtypes => 'name',\n> prosrc => 'binary_upgrade_logical_slot_has_caught_up' },\n>\n> The function is not a strict function but we check in the function if\n> the passed argument is not null. I think it would be clearer to make\n> it a strict function.\n\nI think it has been done that way similar to\nbinary_upgrade_create_empty_extension().\n\n> ---\n> LogicalReplicationSlotHasPendingWal() is defined in logical.c but I\n> guess it's more suitable to be in slotfunc.s where similar functions\n> such as pg_logical_replication_slot_advance() is also defined.\n\nWhy not in logicalfuncs.c?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Nov 2023 13:32:41 +0530",
"msg_from": "Bharath Rupireddy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 1:32 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Nov 28, 2023 at 11:06 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > One month has already been passed since this main patch got committed\n> > but reading this change, I have some questions on new\n> > binary_upgrade_logical_slot_has_caught_up() function:\n> >\n> > Is there any reason why this function can be executed only in binary\n> > upgrade mode? It seems to me that other functions in\n> > pg_upgrade_support.c must be called only in binary upgrade mode\n> > because it does some hacky changes internally. On the other hand,\n> > binary_upgrade_logical_slot_has_caught_up() just calls\n> > LogicalReplicationSlotHasPendingWal(), which doesn't change anything\n> > internally. If we make this function usable in normal mode, the user\n> > would be able to check each slot's upgradability without pg_upgrade\n> > --check command (or without stopping the server if the user can ensure\n> > no more meaningful WAL records are generated).\n>\n> It may happen that such a user-facing function tells there's no\n> unconsumed WAL, but later on the WAL gets generated during pg_upgrade.\n> Therefore, the information the function gives turns out to be\n> incorrect. I don't see a real-world use-case for such a function right\n> now. If there's one, it's not a big change to turn it into a\n> user-facing function.\n>\n\nYeah, as of now, I don't see a use case for it and in fact, it could\nlead to unpredictable results. Immediately after calling the function,\nthere could be more activity on the server which could make the\nresults incorrect. I think to check the slot's upgradeability, one can\nrely on the results of the pg_upgrade --check functionality.\n\n> > ---\n> > Also, the function checks if the user has the REPLICATION privilege\n> > but I think that only superuser can connect to the server in binary\n> > upgrade mode in the first place.\n>\n> If that were true, I don't see a problem in having\n> CheckSlotPermissions() there, in fact it can act as an assertion.\n>\n\nI think we can change it to assertion or may elog(ERROR, ...) with a\ncomment as to why we don't expect this can happen.\n\n> > ---\n> > The following error message doesn't match the function name:\n> >\n> > /* We must check before dereferencing the argument */\n> > if (PG_ARGISNULL(0))\n> > elog(ERROR, \"null argument to\n> > binary_upgrade_validate_wal_records is not allowed\");\n> >\n\nThis should be fixed.\n\n> > ---\n> > { oid => '8046', descr => 'for use by pg_upgrade',\n> > proname => 'binary_upgrade_logical_slot_has_caught_up', proisstrict => 'f',\n> > provolatile => 'v', proparallel => 'u', prorettype => 'bool',\n> > proargtypes => 'name',\n> > prosrc => 'binary_upgrade_logical_slot_has_caught_up' },\n> >\n> > The function is not a strict function but we check in the function if\n> > the passed argument is not null. I think it would be clearer to make\n> > it a strict function.\n>\n> I think it has been done that way similar to\n> binary_upgrade_create_empty_extension().\n>\n> > ---\n> > LogicalReplicationSlotHasPendingWal() is defined in logical.c but I\n> > guess it's more suitable to be in slotfunc.s where similar functions\n> > such as pg_logical_replication_slot_advance() is also defined.\n>\n> Why not in logicalfuncs.c?\n>\n\nI am not sure if either of those is better than logical.c. 
IIRC, I\nthought it was okay to keep in logical.c as others primarily deal with\nexposed SQL functions and I felt it somewhat matches with the intent\nof logical.c (\"The goal is to encapsulate most of the internal\ncomplexity for consumers of logical decoding, so they can create and\nconsume a changestream with a low amount of code..\").\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Nov 2023 15:20:25 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Bharath, Sawada-san,\r\n\r\nWelcome back!\r\n\r\n> >\r\n> > ---\r\n> > { oid => '8046', descr => 'for use by pg_upgrade',\r\n> > proname => 'binary_upgrade_logical_slot_has_caught_up', proisstrict => 'f',\r\n> > provolatile => 'v', proparallel => 'u', prorettype => 'bool',\r\n> > proargtypes => 'name',\r\n> > prosrc => 'binary_upgrade_logical_slot_has_caught_up' },\r\n> >\r\n> > The function is not a strict function but we check in the function if\r\n> > the passed argument is not null. I think it would be clearer to make\r\n> > it a strict function.\r\n> \r\n> I think it has been done that way similar to\r\n> binary_upgrade_create_empty_extension().\r\n\r\nYeah, we followed binary_upgrade_create_empty_extension(). Also, we set as\r\nun-strict to keep a caller function simpler.\r\n\r\nCurrently get_old_cluster_logical_slot_infos() executes a query and it contains\r\nbinary_upgrade_logical_slot_has_caught_up(). In pg_upgrade layer, we assumed\r\neither true or false is returned.\r\n \r\nBut if proisstrict is changed true, we must handle the case when NULL is returned.\r\nIt is small but backseat operation.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 28 Nov 2023 10:04:38 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 6:50 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Nov 28, 2023 at 1:32 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Tue, Nov 28, 2023 at 11:06 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > One month has already been passed since this main patch got committed\n> > > but reading this change, I have some questions on new\n> > > binary_upgrade_logical_slot_has_caught_up() function:\n> > >\n> > > Is there any reason why this function can be executed only in binary\n> > > upgrade mode? It seems to me that other functions in\n> > > pg_upgrade_support.c must be called only in binary upgrade mode\n> > > because it does some hacky changes internally. On the other hand,\n> > > binary_upgrade_logical_slot_has_caught_up() just calls\n> > > LogicalReplicationSlotHasPendingWal(), which doesn't change anything\n> > > internally. If we make this function usable in normal mode, the user\n> > > would be able to check each slot's upgradability without pg_upgrade\n> > > --check command (or without stopping the server if the user can ensure\n> > > no more meaningful WAL records are generated).\n> >\n> > It may happen that such a user-facing function tells there's no\n> > unconsumed WAL, but later on the WAL gets generated during pg_upgrade.\n> > Therefore, the information the function gives turns out to be\n> > incorrect. I don't see a real-world use-case for such a function right\n> > now. If there's one, it's not a big change to turn it into a\n> > user-facing function.\n> >\n>\n> Yeah, as of now, I don't see a use case for it and in fact, it could\n> lead to unpredictable results. Immediately after calling the function,\n> there could be more activity on the server which could make the\n> results incorrect. I think to check the slot's upgradeability, one can\n> rely on the results of the pg_upgrade --check functionality.\n\nFair point.\n\nThis function is already a user-executable function as it's in\npg_catalog but is restricted to be executed only in binary upgrade\neven though it doesn't change anything internally. So it wasn't clear\nto me why we put such a restriction.\n\n>\n> > > ---\n> > > Also, the function checks if the user has the REPLICATION privilege\n> > > but I think that only superuser can connect to the server in binary\n> > > upgrade mode in the first place.\n> >\n> > If that were true, I don't see a problem in having\n> > CheckSlotPermissions() there, in fact it can act as an assertion.\n> >\n>\n> I think we can change it to assertion or may elog(ERROR, ...) with a\n> comment as to why we don't expect this can happen.\n\n+1 for an assertion, to match other checks in the function.\n\n>\n> > > ---\n> > > The following error message doesn't match the function name:\n> > >\n> > > /* We must check before dereferencing the argument */\n> > > if (PG_ARGISNULL(0))\n> > > elog(ERROR, \"null argument to\n> > > binary_upgrade_validate_wal_records is not allowed\");\n> > >\n>\n> This should be fixed.\n>\n> > > ---\n> > > { oid => '8046', descr => 'for use by pg_upgrade',\n> > > proname => 'binary_upgrade_logical_slot_has_caught_up', proisstrict => 'f',\n> > > provolatile => 'v', proparallel => 'u', prorettype => 'bool',\n> > > proargtypes => 'name',\n> > > prosrc => 'binary_upgrade_logical_slot_has_caught_up' },\n> > >\n> > > The function is not a strict function but we check in the function if\n> > > the passed argument is not null. 
I think it would be clearer to make\n> > > it a strict function.\n> >\n> > I think it has been done that way similar to\n> > binary_upgrade_create_empty_extension().\n\nbinary_upgrade_create_empty_extension() needs to be a non-strict\nfunction since it needs to accept NULL in some arguments such as\nextConfig. On the other hand,\nbinary_upgrade_logical_slot_has_caught_up() doesn't handle NULL and\nit's conventional to make such a function a strict function.\n\n> >\n> > > ---\n> > > LogicalReplicationSlotHasPendingWal() is defined in logical.c but I\n> > > guess it's more suitable to be in slotfunc.s where similar functions\n> > > such as pg_logical_replication_slot_advance() is also defined.\n> >\n> > Why not in logicalfuncs.c?\n> >\n>\n> I am not sure if either of those is better than logical.c. IIRC, I\n> thought it was okay to keep in logical.c as others primarily deal with\n> exposed SQL functions and I felt it somewhat matches with the intent\n> of logical.c (\"The goal is to encapsulate most of the internal\n> complexity for consumers of logical decoding, so they can create and\n> consume a changestream with a low amount of code..\").\n\nI see your point. To me it looks that the functions in logical.c are\nAPIs and internal functions to manage logical decoding context and\nreplication slot (e.g., restart_lsn). On the other hand,\nLogicalReplicationSlotHasPendingWal() seems to be a user of the\nlogical decoding. But anyway, it seems that three hackers have\ndifferent opinions. So we can keep it unless someone has a good reason\nto change it.\n\nOn Tue, Nov 28, 2023 at 7:04 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n>\n> Yeah, we followed binary_upgrade_create_empty_extension(). Also, we set as\n> un-strict to keep a caller function simpler.\n>\n> Currently get_old_cluster_logical_slot_infos() executes a query and it contains\n> binary_upgrade_logical_slot_has_caught_up(). In pg_upgrade layer, we assumed\n> either true or false is returned.\n>\n> But if proisstrict is changed true, we must handle the case when NULL is returned.\n> It is small but backseat operation.\n\nWhich cases are you concerned pg_upgrade could pass NULL to\nbinary_upgrade_logical_slot_has_caught_up()?\n\nI've not tested it yet but even if it returns NULL, perhaps\nget_old_cluster_logical_slot_infos() would still set curr->caught_up\nto false, no?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Nov 2023 21:32:19 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\n> On Tue, Nov 28, 2023 at 7:04 PM Hayato Kuroda (Fujitsu)\r\n> <[email protected]> wrote:\r\n> >\r\n> >\r\n> > Yeah, we followed binary_upgrade_create_empty_extension(). Also, we set as\r\n> > un-strict to keep a caller function simpler.\r\n> >\r\n> > Currently get_old_cluster_logical_slot_infos() executes a query and it contains\r\n> > binary_upgrade_logical_slot_has_caught_up(). In pg_upgrade layer, we\r\n> assumed\r\n> > either true or false is returned.\r\n> >\r\n> > But if proisstrict is changed true, we must handle the case when NULL is\r\n> returned.\r\n> > It is small but backseat operation.\r\n> \r\n> Which cases are you concerned pg_upgrade could pass NULL to\r\n> binary_upgrade_logical_slot_has_caught_up()?\r\n\r\nActually, we do not expect that it won't input NULL. IIUC all of slots have\r\nslot_name, and subquery uses its name. But will it be kept forever? I think we\r\ncan avoid any risk.\r\n\r\n> I've not tested it yet but even if it returns NULL, perhaps\r\n> get_old_cluster_logical_slot_infos() would still set curr->caught_up\r\n> to false, no?\r\n\r\nHmm. I checked the C99 specification [1] of strcmp, but it does not define the\r\ncase when the NULL is input. So it depends implementation.\r\n\r\n[1]: https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definition.pdf\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 28 Nov 2023 13:58:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Tue, Nov 28, 2023 at 10:58 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san,\n>\n> > On Tue, Nov 28, 2023 at 7:04 PM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > >\n> > > Yeah, we followed binary_upgrade_create_empty_extension(). Also, we set as\n> > > un-strict to keep a caller function simpler.\n> > >\n> > > Currently get_old_cluster_logical_slot_infos() executes a query and it contains\n> > > binary_upgrade_logical_slot_has_caught_up(). In pg_upgrade layer, we\n> > assumed\n> > > either true or false is returned.\n> > >\n> > > But if proisstrict is changed true, we must handle the case when NULL is\n> > returned.\n> > > It is small but backseat operation.\n> >\n> > Which cases are you concerned pg_upgrade could pass NULL to\n> > binary_upgrade_logical_slot_has_caught_up()?\n>\n> Actually, we do not expect that it won't input NULL. IIUC all of slots have\n> slot_name, and subquery uses its name. But will it be kept forever? I think we\n> can avoid any risk.\n>\n> > I've not tested it yet but even if it returns NULL, perhaps\n> > get_old_cluster_logical_slot_infos() would still set curr->caught_up\n> > to false, no?\n>\n> Hmm. I checked the C99 specification [1] of strcmp, but it does not define the\n> case when the NULL is input. So it depends implementation.\n\nI think PQgetvalue() returns an empty string if the result value is null.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 29 Nov 2023 04:30:37 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\n> > Actually, we do not expect that it won't input NULL. IIUC all of slots have\r\n> > slot_name, and subquery uses its name. But will it be kept forever? I think we\r\n> > can avoid any risk.\r\n> >\r\n> > > I've not tested it yet but even if it returns NULL, perhaps\r\n> > > get_old_cluster_logical_slot_infos() would still set curr->caught_up\r\n> > > to false, no?\r\n> >\r\n> > Hmm. I checked the C99 specification [1] of strcmp, but it does not define the\r\n> > case when the NULL is input. So it depends implementation.\r\n> \r\n> I think PQgetvalue() returns an empty string if the result value is null.\r\n>\r\n\r\nOh, you are right... I found below paragraph from [1].\r\n\r\n> An empty string is returned if the field value is null. See PQgetisnull to distinguish\r\n> null values from empty-string values.\r\n\r\nSo I agree what you said - current code can accept NULL.\r\nBut still not sure the error message is really good or not.\r\nIf we regard an empty string as false, the slot which has empty name will be reported like:\r\n\"The slot \\\"\\\" has not consumed the WAL yet\" in check_old_cluster_for_valid_slots().\r\nIsn't it inappropriate?\r\n\r\n(Note again - currently we do not find such a case, so it may be overkill)\r\n\r\n[1]: https://www.postgresql.org/docs/devel/libpq-exec.html#LIBPQ-PQGETVALUE\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 29 Nov 2023 02:03:00 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
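Since this exchange hinges on how libpq reports SQL NULLs, a short sketch may help: PQgetvalue() hands back an empty string for a NULL field, and PQgetisnull() is the call that tells the two cases apart. The helper below is purely illustrative (the column layout, messages, and function name are assumptions, not pg_upgrade's actual code):

```c
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/*
 * Illustrative only: distinguish a SQL NULL from a genuine boolean result
 * when reading a "caught_up" column.  PQgetvalue() returns "" for NULL, so
 * a bare strcmp() would silently treat NULL as "not caught up";
 * PQgetisnull() makes the distinction explicit.
 */
static void
report_caught_up(const PGresult *res, int row, int col)
{
	if (PQgetisnull(res, row, col))
		fprintf(stderr, "row %d: caught_up is NULL (not expected)\n", row);
	else if (strcmp(PQgetvalue(res, row, col), "t") == 0)
		printf("row %d: slot has consumed all WAL\n", row);
	else
		printf("row %d: slot has not consumed the WAL yet\n", row);
}
```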
{
"msg_contents": "Dear hackers,\r\n\r\n> > >\r\n> > > Pushed!\r\n> >\r\n> > Hi all, the CF entry for this is marked RfC, and CI is trying to apply\r\n> > the last patch committed. Is there further work that needs to be\r\n> > re-attached and/or rebased?\r\n> >\r\n> \r\n> No. I have marked it as committed.\r\n>\r\n\r\nI found another failure related with the commit [1]. I think it is caused by the\r\nautovacuum. I want to propose a patch which disables the feature for old publisher.\r\n\r\nMore detail, please see below.\r\n\r\n# Analysis of the failure\r\n\r\nSummary: this failure occurs when the autovacuum starts after the subscription\r\nis disabled but before doing pg_upgrade.\r\n\r\nAccording to the regress file, it unexpectedly failed the pg_upgrade [2]. There are\r\nno possibilities for slots are invalidated, so some WALs seemed to be generated\r\nafter disabling the subscriber.\r\n\r\nAlso, server log caused by oldpub said that autovacuum worker was terminated when\r\nit stopped. This was occurred after walsender released the logical slots. WAL records\r\ncaused by autovacuum workers could not be consumed by the slots, so that upgrading\r\nfunction returned false.\r\n\r\n# How to reproduce\r\n\r\nI made a small file for reproducing the failure. Please see reproduce.txt. This contains\r\nchanges for launching autovacuum worker very often and for ensuring actual works are\r\ndone. After applying it, I could reproduce the same failure every time.\r\n\r\n# How to fix\r\n\r\nI think it is sufficient to fix only the test code.\r\nThe easiest way is to disable the autovacuum on old publisher. PSA the patch file.\r\n\r\nHow do you think?\r\n\r\n\r\n[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-11-27%2020%3A52%3A10\r\n[2]:\r\n```\r\n...\r\nChecking for contrib/isn with bigint-passing mismatch ok\r\nChecking for valid logical replication slots fatal\r\n\r\nYour installation contains logical replication slots that can't be upgraded.\r\nYou can remove invalid slots and/or consume the pending WAL for other slots,\r\nand then restart the upgrade.\r\nA list of the problematic slots is in the file:\r\n /home/bf/bf-build/skink-master/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/t_003_logical_slots_newpub_data/pgdata/pg_upgrade_output.d/20231127T220024.480/invalid_logical_slots.txt\r\nFailure, exiting\r\n[22:01:20.362](86.645s) not ok 10 - run of pg_upgrade of old cluster\r\n...\r\n```\r\n[3]:\r\n```\r\n...\r\n2023-11-27 22:00:23.546 UTC [3567962][walsender][4/0:0] LOG: released logical replication slot \"regress_sub\"\r\n2023-11-27 22:00:23.549 UTC [3559042][postmaster][:0] LOG: received fast shutdown request\r\n2023-11-27 22:00:23.552 UTC [3559042][postmaster][:0] LOG: aborting any active transactions\r\n*2023-11-27 22:00:23.663 UTC [3568793][autovacuum worker][5/3:738] FATAL: terminating autovacuum process due to administrator command*\r\n2023-11-27 22:00:23.775 UTC [3559042][postmaster][:0] LOG: background worker \"logical replication launcher\" (PID 3560674) exited with exit code 1\r\n...\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 29 Nov 2023 09:26:26 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 2:56 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > >\n> > > > Pushed!\n> > >\n> > > Hi all, the CF entry for this is marked RfC, and CI is trying to apply\n> > > the last patch committed. Is there further work that needs to be\n> > > re-attached and/or rebased?\n> > >\n> >\n> > No. I have marked it as committed.\n> >\n>\n> I found another failure related with the commit [1]. I think it is caused by the\n> autovacuum. I want to propose a patch which disables the feature for old publisher.\n>\n> More detail, please see below.\n>\n> # Analysis of the failure\n>\n> Summary: this failure occurs when the autovacuum starts after the subscription\n> is disabled but before doing pg_upgrade.\n>\n> According to the regress file, it unexpectedly failed the pg_upgrade [2]. There are\n> no possibilities for slots are invalidated, so some WALs seemed to be generated\n> after disabling the subscriber.\n>\n> Also, server log caused by oldpub said that autovacuum worker was terminated when\n> it stopped. This was occurred after walsender released the logical slots. WAL records\n> caused by autovacuum workers could not be consumed by the slots, so that upgrading\n> function returned false.\n>\n> # How to reproduce\n>\n> I made a small file for reproducing the failure. Please see reproduce.txt. This contains\n> changes for launching autovacuum worker very often and for ensuring actual works are\n> done. After applying it, I could reproduce the same failure every time.\n>\n> # How to fix\n>\n> I think it is sufficient to fix only the test code.\n> The easiest way is to disable the autovacuum on old publisher. PSA the patch file.\n>\n\nAgreed, for now, we should change the test as you proposed. I'll take\ncare of that. However, I wonder, if we should also ensure that\nautovacuum or any other worker is shut down before walsender processes\nthe last set of WAL before shutdown. We can analyze more on this and\nprobably start a separate thread to discuss this point.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 08:40:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 8:40 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Nov 29, 2023 at 2:56 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > > >\n> > > > > Pushed!\n> > > >\n> > > > Hi all, the CF entry for this is marked RfC, and CI is trying to apply\n> > > > the last patch committed. Is there further work that needs to be\n> > > > re-attached and/or rebased?\n> > > >\n> > >\n> > > No. I have marked it as committed.\n> > >\n> >\n> > I found another failure related with the commit [1]. I think it is caused by the\n> > autovacuum. I want to propose a patch which disables the feature for old publisher.\n> >\n> > More detail, please see below.\n> >\n> > # Analysis of the failure\n> >\n> > Summary: this failure occurs when the autovacuum starts after the subscription\n> > is disabled but before doing pg_upgrade.\n> >\n> > According to the regress file, it unexpectedly failed the pg_upgrade [2]. There are\n> > no possibilities for slots are invalidated, so some WALs seemed to be generated\n> > after disabling the subscriber.\n> >\n> > Also, server log caused by oldpub said that autovacuum worker was terminated when\n> > it stopped. This was occurred after walsender released the logical slots. WAL records\n> > caused by autovacuum workers could not be consumed by the slots, so that upgrading\n> > function returned false.\n> >\n> > # How to reproduce\n> >\n> > I made a small file for reproducing the failure. Please see reproduce.txt. This contains\n> > changes for launching autovacuum worker very often and for ensuring actual works are\n> > done. After applying it, I could reproduce the same failure every time.\n> >\n> > # How to fix\n> >\n> > I think it is sufficient to fix only the test code.\n> > The easiest way is to disable the autovacuum on old publisher. PSA the patch file.\n> >\n>\n> Agreed, for now, we should change the test as you proposed. I'll take\n> care of that. However, I wonder, if we should also ensure that\n> autovacuum or any other worker is shut down before walsender processes\n> the last set of WAL before shutdown. We can analyze more on this and\n> probably start a separate thread to discuss this point.\n>\n\nSorry, my analysis was not complete. On looking closely, I think the\nreason is that we are allowed to upgrade the slot iff there is no\npending WAL to be processed. The test first disables the subscription\nto avoid unnecessary LOGs on the subscriber and then stops the\npublisher node. It is quite possible that just before the shutdown of\nthe server, autovacuum generates some WAL record that needs to be\nprocessed, so you propose just disabling the autovacuum for this test.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 09:22:40 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Sorry, my analysis was not complete. On looking closely, I think the\r\n> reason is that we are allowed to upgrade the slot iff there is no\r\n> pending WAL to be processed. \r\n\r\nYes, the guard will strongly protect from data loss, but I do not take care in the test.\r\n\r\n> The test first disables the subscription\r\n> to avoid unnecessary LOGs on the subscriber and then stops the\r\n> publisher node.\r\n\r\nRight. Unnecessary ERROR would be appeared if we do not disable.\r\n\r\n> It is quite possible that just before the shutdown of\r\n> the server, autovacuum generates some WAL record that needs to be\r\n> processed, \r\n\r\nYeah, pg_upgrade does not ensure that autovacuum is not running *before* the\r\nupgrade.\r\n\r\n> so you propose just disabling the autovacuum for this test.\r\n\r\nAbsolutely correct.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 30 Nov 2023 04:12:50 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 7:33 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> > > Actually, we do not expect that it won't input NULL. IIUC all of slots have\n> > > slot_name, and subquery uses its name. But will it be kept forever? I think we\n> > > can avoid any risk.\n> > >\n> > > > I've not tested it yet but even if it returns NULL, perhaps\n> > > > get_old_cluster_logical_slot_infos() would still set curr->caught_up\n> > > > to false, no?\n> > >\n> > > Hmm. I checked the C99 specification [1] of strcmp, but it does not define the\n> > > case when the NULL is input. So it depends implementation.\n> >\n> > I think PQgetvalue() returns an empty string if the result value is null.\n> >\n>\n> Oh, you are right... I found below paragraph from [1].\n>\n> > An empty string is returned if the field value is null. See PQgetisnull to distinguish\n> > null values from empty-string values.\n>\n> So I agree what you said - current code can accept NULL.\n> But still not sure the error message is really good or not.\n> If we regard an empty string as false, the slot which has empty name will be reported like:\n> \"The slot \\\"\\\" has not consumed the WAL yet\" in check_old_cluster_for_valid_slots().\n> Isn't it inappropriate?\n>\n\nI see your point that giving a better message (which would tell the\nactual problem) to the user in this case also has a value. OTOH, as\nyou said, this case won't happen in practical scenarios, so I am fine\neither way with a slight tilt toward retaining a better error message\n(aka the current way). Sawada-San/Bharath, do you have any suggestions\non this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 15:19:11 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 6:49 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Nov 29, 2023 at 7:33 AM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > > > Actually, we do not expect that it won't input NULL. IIUC all of slots have\n> > > > slot_name, and subquery uses its name. But will it be kept forever? I think we\n> > > > can avoid any risk.\n> > > >\n> > > > > I've not tested it yet but even if it returns NULL, perhaps\n> > > > > get_old_cluster_logical_slot_infos() would still set curr->caught_up\n> > > > > to false, no?\n> > > >\n> > > > Hmm. I checked the C99 specification [1] of strcmp, but it does not define the\n> > > > case when the NULL is input. So it depends implementation.\n> > >\n> > > I think PQgetvalue() returns an empty string if the result value is null.\n> > >\n> >\n> > Oh, you are right... I found below paragraph from [1].\n> >\n> > > An empty string is returned if the field value is null. See PQgetisnull to distinguish\n> > > null values from empty-string values.\n> >\n> > So I agree what you said - current code can accept NULL.\n> > But still not sure the error message is really good or not.\n> > If we regard an empty string as false, the slot which has empty name will be reported like:\n> > \"The slot \\\"\\\" has not consumed the WAL yet\" in check_old_cluster_for_valid_slots().\n> > Isn't it inappropriate?\n> >\n>\n> I see your point that giving a better message (which would tell the\n> actual problem) to the user in this case also has a value. OTOH, as\n> you said, this case won't happen in practical scenarios, so I am fine\n> either way with a slight tilt toward retaining a better error message\n> (aka the current way). Sawada-San/Bharath, do you have any suggestions\n> on this?\n\nTBH I'm not sure the error message is much helpful for users more than\nthe message \"The slot \\\"\\\" has not consumed the WAL yet\" in practice.\nIn either case, the messages just tell the user the slot name passed\nto the function was not appropriate. Rather, I'm a bit concerned that\nwe create a precedent that we make a function non-strict to produce an\nerror message only for unrealistic cases. Please point out if we\nalready have such precedents. Other functions in pg_upgrade_support.c\nsuch as binary_upgrade_set_next_pg_tablespace_oid() are not called if\nthe argument is NULL since it's a strict function. But if null was\npassed in (where should not happen in practice), pg_upgrade would fail\nwith an error message or would finish while leaving the cluster in an\ninconsistent state, I've not tested. Why do we want to care about the\nargument being NULL only in\nbinary_upgrade_logical_slot_has_caught_up()?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Dec 2023 17:20:08 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear hackers,\r\n\r\nI found another failure related with the commit [1]. This is caused by missing\r\nwait on the test code. Amit helped me for this analysis and fix.\r\n\r\n# Analysis of the failure\r\n\r\nThe failure is that restored slot is two_phase = false, whereas the slot is\r\ncreated as two_phase = true. This is because pg_upgrade was executed before all\r\ntables are in ready state.\r\n\r\n# How to fix\r\n\r\nI think the test is not good. According to other subscription tests related with\r\n2PC, they additionally wait until subtwophasestate becomes 'e'. It should be\r\nadded as well. PSA the patch.\r\n\r\n[1]: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2023-12-01%2016%3A59%3A30\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 4 Dec 2023 06:29:27 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Mon, Dec 4, 2023 at 11:59 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> I found another failure related with the commit [1]. This is caused by missing\n> wait on the test code. Amit helped me for this analysis and fix.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Dec 2023 10:46:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Sawada-san, hackers,\r\n\r\nBased on comments I made a fix. PSA the patch.\r\n\r\n> \r\n> Is there any reason why this function can be executed only in binary\r\n> upgrade mode? It seems to me that other functions in\r\n> pg_upgrade_support.c must be called only in binary upgrade mode\r\n> because it does some hacky changes internally. On the other hand,\r\n> binary_upgrade_logical_slot_has_caught_up() just calls\r\n> LogicalReplicationSlotHasPendingWal(), which doesn't change anything\r\n> internally. If we make this function usable in normal mode, the user\r\n> would be able to check each slot's upgradability without pg_upgrade\r\n> --check command (or without stopping the server if the user can ensure\r\n> no more meaningful WAL records are generated).\r\n\r\nI kept the function to be upgrade only because subsequent operations might generate\r\nWALs. See [1].\r\n\r\n> Also, the function checks if the user has the REPLICATION privilege\r\n> but I think that only superuser can connect to the server in binary\r\n> upgrade mode in the first place.\r\n\r\nCheckSlotPermissions() was replaced to Assert().\r\n\r\n> The following error message doesn't match the function name:\r\n> \r\n> /* We must check before dereferencing the argument */\r\n> if (PG_ARGISNULL(0))\r\n> elog(ERROR, \"null argument to\r\n> binary_upgrade_validate_wal_records is not allowed\");\r\n\r\nPer below comment, this elog(ERROR) was not needed anymore. Removed.\r\n\r\n> { oid => '8046', descr => 'for use by pg_upgrade',\r\n> proname => 'binary_upgrade_logical_slot_has_caught_up', proisstrict => 'f',\r\n> provolatile => 'v', proparallel => 'u', prorettype => 'bool',\r\n> proargtypes => 'name',\r\n> prosrc => 'binary_upgrade_logical_slot_has_caught_up' },\r\n> \r\n> The function is not a strict function but we check in the function if\r\n> the passed argument is not null. I think it would be clearer to make\r\n> it a strict function.\r\n\r\nPer conclusion [2], I changed the function to the strict one. As shown in below,\r\nbinary_upgrade_logical_slot_has_caught_up() returned NULL when the input was NULL.\r\n\r\n```\r\npostgres=# SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\n slot_name | lsn \r\n-----------+-----------\r\n slot | 0/152E7E0\r\n(1 row)\r\n\r\npostgres=# SELECT * FROM binary_upgrade_logical_slot_has_caught_up(NULL);\r\n binary_upgrade_logical_slot_has_caught_up \r\n-------------------------------------------\r\n \r\n(1 row)\r\n```\r\n\r\n> LogicalReplicationSlotHasPendingWal() is defined in logical.c but I\r\n> guess it's more suitable to be in slotfunc.s where similar functions\r\n> such as pg_logical_replication_slot_advance() is also defined.\r\n\r\nCommitters had different opinions about it, so I kept current style [3].\r\n\r\n[1]: https://www.postgresql.org/message-id/CALj2ACW7H-kAHia%3DvCbmdWDueGA_3pQfyzARfAQX0aGzHY57Zw%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1LzK0NvMkWAY6RJ6yN%2BYYUgMg1f%3DmNOGV8CPXLT43FHMw%40mail.gmail.com\r\n[3]: https://www.postgresql.org/message-id/CAD21AoDkyyC%3Dwa2%3D1Ruo_L8g16xf_W5Xyhp-%3D3j9urT916b9gA%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 5 Dec 2023 05:41:09 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
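Putting the changes described in the message above together (strict catalog entry, binary-upgrade-only guard, privilege check demoted to an assertion, mismatched elog removed), the function body would take roughly the shape below. This is a hedged sketch based on the points discussed in this thread, not a copy of the committed code; the exact call signatures and header list may differ:

```c
#include "postgres.h"

#include "access/xlog.h"
#include "fmgr.h"
#include "miscadmin.h"
#include "replication/logical.h"
#include "replication/slot.h"

Datum
binary_upgrade_logical_slot_has_caught_up(PG_FUNCTION_ARGS)
{
	Name		slot_name;
	XLogRecPtr	end_of_wal;
	bool		found_pending_wal;

	/* Macro defined locally in pg_upgrade_support.c; errors out otherwise. */
	CHECK_IS_BINARY_UPGRADE;

	/*
	 * With proisstrict = 't' the fmgr machinery never calls us with a NULL
	 * argument, so the old PG_ARGISNULL() check (and its mismatched error
	 * message) is no longer needed here.
	 */

	/* Only superusers can connect while the server is in binary-upgrade mode. */
	Assert(has_rolreplication(GetUserId()));

	slot_name = PG_GETARG_NAME(0);

	/* Acquire the slot and ask whether any decodable WAL is still pending. */
	ReplicationSlotAcquire(NameStr(*slot_name), true);

	end_of_wal = GetFlushRecPtr(NULL);
	found_pending_wal = LogicalReplicationSlotHasPendingWal(end_of_wal);

	ReplicationSlotRelease();

	PG_RETURN_BOOL(!found_pending_wal);
}
```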
{
"msg_contents": "On Tue, 5 Dec 2023 at 11:11, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Sawada-san, hackers,\n>\n> Based on comments I made a fix. PSA the patch.\n>\n\nThanks for the patch, the changes look good to me.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 6 Dec 2023 09:40:43 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 9:40 AM vignesh C <[email protected]> wrote:\n>\n> On Tue, 5 Dec 2023 at 11:11, Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Sawada-san, hackers,\n> >\n> > Based on comments I made a fix. PSA the patch.\n> >\n>\n> Thanks for the patch, the changes look good to me.\n>\n\nThanks, I have added a comment and updated the commit message. I'll\npush this tomorrow unless there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 6 Dec 2023 10:02:28 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 10:02 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Dec 6, 2023 at 9:40 AM vignesh C <[email protected]> wrote:\n> >\n> > On Tue, 5 Dec 2023 at 11:11, Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > Dear Sawada-san, hackers,\n> > >\n> > > Based on comments I made a fix. PSA the patch.\n> > >\n> >\n> > Thanks for the patch, the changes look good to me.\n> >\n>\n> Thanks, I have added a comment and updated the commit message. I'll\n> push this tomorrow unless there are more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Dec 2023 11:59:00 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "FYI fairywren failed in this test:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-12-16%2022%3A03%3A06\n\n===8<===\nRestoring database schemas in the new cluster\n*failure*\n\nConsult the last few lines of\n\"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_logical_slots/data/t_003_logical_slots_newpub_data/pgdata/pg_upgrade_output.d/20231216T221418.035/log/pg_upgrade_dump_1.log\"\nfor\nthe probable cause of the failure.\nFailure, exiting\n[22:14:34.598](22.801s) not ok 10 - run of pg_upgrade of old cluster\n[22:14:34.600](0.001s) # Failed test 'run of pg_upgrade of old cluster'\n# at C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/003_logical_slots.pl\nline 177.\n===8<===\n\nWithout that log it might be hard to figure out what went wrong though :-/\n\n\n",
"msg_date": "Sun, 17 Dec 2023 17:02:35 +1300",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "17.12.2023 07:02, Thomas Munro wrote:\n> FYI fairywren failed in this test:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-12-16%2022%3A03%3A06\n>\n> ===8<===\n> Restoring database schemas in the new cluster\n> *failure*\n>\n> Consult the last few lines of\n> \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgrade/003_logical_slots/data/t_003_logical_slots_newpub_data/pgdata/pg_upgrade_output.d/20231216T221418.035/log/pg_upgrade_dump_1.log\"\n> for\n> the probable cause of the failure.\n> Failure, exiting\n> [22:14:34.598](22.801s) not ok 10 - run of pg_upgrade of old cluster\n> [22:14:34.600](0.001s) # Failed test 'run of pg_upgrade of old cluster'\n> # at C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/003_logical_slots.pl\n> line 177.\n> ===8<===\n>\n> Without that log it might be hard to figure out what went wrong though :-/\n>\n\nYes, but most probably it's the same failure as\nhttps://www.postgresql.org/message-id/flat/TYAPR01MB5866AB7FD922CE30A2565B8BF5A8A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 17 Dec 2023 08:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Thomas, Alexander,\r\n\r\n> 17.12.2023 07:02, Thomas Munro wrote:\r\n> > FYI fairywren failed in this test:\r\n> >\r\n> >\r\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2023-1\r\n> 2-16%2022%3A03%3A06\r\n> >\r\n> > ===8<===\r\n> > Restoring database schemas in the new cluster\r\n> > *failure*\r\n> >\r\n> > Consult the last few lines of\r\n> >\r\n> \"C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql.build/testrun/pg_upgr\r\n> ade/003_logical_slots/data/t_003_logical_slots_newpub_data/pgdata/pg_upgra\r\n> de_output.d/20231216T221418.035/log/pg_upgrade_dump_1.log\"\r\n> > for\r\n> > the probable cause of the failure.\r\n> > Failure, exiting\r\n> > [22:14:34.598](22.801s) not ok 10 - run of pg_upgrade of old cluster\r\n> > [22:14:34.600](0.001s) # Failed test 'run of pg_upgrade of old cluster'\r\n> > # at\r\n> C:/tools/nmsys64/home/pgrunner/bf/root/HEAD/pgsql/src/bin/pg_upgrade/t/\r\n> 003_logical_slots.pl\r\n> > line 177.\r\n> > ===8<===\r\n> >\r\n> > Without that log it might be hard to figure out what went wrong though :-/\r\n> >\r\n> \r\n> Yes, but most probably it's the same failure as\r\n> \r\n\r\nThanks for reporting. Yes, it has been already reported by me [1], and the server\r\nlog was provided by Andrew [2]. The issue was that a file creation was failed\r\nbecause the same one was unlink()'d just before but it was in STATUS_DELETE_PENDING\r\nstatus. Kindly Alexander proposed a fix [3] and it looks good to me, but\r\nconfirmations by senior and windows-friendly developers are needed to move forward.\r\n(at first we thought the issue was solved by updating, but it was not correct)\r\n\r\nI know that you have developed there region, so I'm very happy if you check the\r\nforked thread.\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/TYAPR01MB5866AB7FD922CE30A2565B8BF5A8A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB5866A4E7342088E91362BEF0F5BBA%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[3]: https://www.postgresql.org/message-id/976479cf-dd66-ca19-f40c-5640e30700cb%40gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Sun, 17 Dec 2023 15:03:33 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
},
{
"msg_contents": "Dear Thomas, Alexander,\r\n\r\n> Thanks for reporting. Yes, it has been already reported by me [1], and the server\r\n> log was provided by Andrew [2]. The issue was that a file creation was failed\r\n> because the same one was unlink()'d just before but it was in\r\n> STATUS_DELETE_PENDING\r\n> status. Kindly Alexander proposed a fix [3] and it looks good to me, but\r\n> confirmations by senior and windows-friendly developers are needed to move\r\n> forward.\r\n> (at first we thought the issue was solved by updating, but it was not correct)\r\n> \r\n> I know that you have developed there region, so I'm very happy if you check the\r\n> forked thread.\r\n\r\nI forgot to say an important point. The issue was not introduced by the feature.\r\nIt just actualized a possible failure, only for Windows environment.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 18 Dec 2023 07:40:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: [PoC] pg_upgrade: allow to upgrade publisher node"
}
] |
[
{
"msg_contents": "Hi,\n\n\nWe are looking forward to get help from community on publishing s390x binaries. As per downloads page, apt repo supports Ubuntu on amd,arm,i386 and ppc64le.\nWe had reached out earlier and are ready to provide infra if needed. Wanted to check again if community is willing to help.\n\nThank you,\nNamrata\n\n\n\n\n\n\n\n\n\nHi,\n \n \nWe are looking forward to get help from community on publishing s390x binaries. As per downloads page, apt repo supports Ubuntu on amd,arm,i386 and ppc64le.\nWe had reached out earlier and are ready to provide infra if needed. Wanted to check again if community is willing to help.\n \nThank you,\nNamrata",
"msg_date": "Tue, 4 Apr 2023 12:56:49 +0000",
"msg_from": "Namrata Bhave <[email protected]>",
"msg_from_op": true,
"msg_subject": "Check whether binaries can be released for s390x"
},
{
"msg_contents": "Hi Namrata,\r\n\r\nOn 4/4/23 8:56 AM, Namrata Bhave wrote:\r\n> Hi,\r\n> \r\n> We are looking forward to get help from community on publishing s390x \r\n> binaries. As per downloads page, apt repo supports Ubuntu on \r\n> amd,arm,i386 and ppc64le.\r\n> \r\n> We had reached out earlier and are ready to provide infra if needed. \r\n> Wanted to check again if community is willing to help.\r\n\r\nIt'd be better to discuss on [email protected] -- that's \r\nthe mailing list for the website.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 4 Apr 2023 09:23:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check whether binaries can be released for s390x"
},
{
"msg_contents": "Hi Jonathan,\r\n\r\nThank you for getting back.\r\n\r\nThe request is mainly for the developer community to build and publish s390x binaries, apologies if I wasn't clear earlier.\r\nWe can provide s390x VMs to build and test postgresql binaries if you feel infra is a blocker.\r\n\r\nPlease let me know if any more information is needed.\r\n\r\nThank you,\r\nNamrata\r\n\r\n-----Original Message-----\r\nFrom: Jonathan S. Katz <[email protected]> \r\nSent: Tuesday, April 4, 2023 6:53 PM\r\nTo: Namrata Bhave <[email protected]>; [email protected]\r\nCc: Vibhuti Sawant <[email protected]>\r\nSubject: [EXTERNAL] Re: Check whether binaries can be released for s390x\r\n\r\nHi Namrata,\r\n\r\nOn 4/4/23 8:56 AM, Namrata Bhave wrote:\r\n> Hi,\r\n> \r\n> We are looking forward to get help from community on publishing s390x \r\n> binaries. As per downloads page, apt repo supports Ubuntu on\r\n> amd,arm,i386 and ppc64le.\r\n> \r\n> We had reached out earlier and are ready to provide infra if needed. \r\n> Wanted to check again if community is willing to help.\r\n\r\nIt'd be better to discuss on [email protected] -- that's the mailing list for the website.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n",
"msg_date": "Tue, 4 Apr 2023 13:30:34 +0000",
"msg_from": "Namrata Bhave <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE: Check whether binaries can be released for s390x"
},
{
"msg_contents": "On 2023-Apr-04, Namrata Bhave wrote:\n\n> Hi Jonathan,\n> \n> Thank you for getting back.\n> \n> The request is mainly for the developer community to build and publish s390x binaries, apologies if I wasn't clear earlier.\n> We can provide s390x VMs to build and test postgresql binaries if you feel infra is a blocker.\n> \n> Please let me know if any more information is needed.\n\nI think the audience that needs to hear this is\[email protected].\n\n\nOn 4/4/23 8:56 AM, Namrata Bhave wrote:\n> Hi,\n> \n> We are looking forward to get help from community on publishing s390x \n> binaries. As per downloads page, apt repo supports Ubuntu on\n> amd,arm,i386 and ppc64le.\n> \n> We had reached out earlier and are ready to provide infra if needed. \n> Wanted to check again if community is willing to help.\n\n\n-- \nÁlvaro Herrera\n\n\n",
"msg_date": "Tue, 4 Apr 2023 16:32:56 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check whether binaries can be released for s390x"
},
{
"msg_contents": "On Tue, Apr 4, 2023 at 9:30 AM Namrata Bhave <[email protected]> wrote:\n> Thank you for getting back.\n>\n> The request is mainly for the developer community to build and publish s390x binaries, apologies if I wasn't clear earlier.\n> We can provide s390x VMs to build and test postgresql binaries if you feel infra is a blocker.\n>\n> Please let me know if any more information is needed.\n\nAs Jonathon said, this discussion is not pertinent to this mailing\nlist. Please use the correct mailing list.\n\nI'm not sure that anyone is going to want to undertake this effort for\nfree even if you're willing to provide the hardware for free. But your\nchances will be a lot better if you ask on the correct mailing list.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Apr 2023 11:03:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check whether binaries can be released for s390x"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 3:03 AM Robert Haas <[email protected]> wrote:\n> On Tue, Apr 4, 2023 at 9:30 AM Namrata Bhave <[email protected]> wrote:\n> > Thank you for getting back.\n> >\n> > The request is mainly for the developer community to build and publish s390x binaries, apologies if I wasn't clear earlier.\n> > We can provide s390x VMs to build and test postgresql binaries if you feel infra is a blocker.\n> >\n> > Please let me know if any more information is needed.\n>\n> As Jonathon said, this discussion is not pertinent to this mailing\n> list. Please use the correct mailing list.\n>\n> I'm not sure that anyone is going to want to undertake this effort for\n> free even if you're willing to provide the hardware for free. But your\n> chances will be a lot better if you ask on the correct mailing list.\n\nIsn't the right place for this the APT packaging team[1]'s list at\[email protected]? Not something I'm involved in\nmyself, but AFAIK it's the same team working on Debian etc packages\nand they do get built for s390x already[2] so I guess this is about\ndoing the same for the PGDG repos, so this is a packaging topic, not a\nweb topic, and this list wasn't a bad place to start...\n\n[1] https://wiki.postgresql.org/wiki/Apt\n[2] https://packages.debian.org/search?arch=s390x&keywords=postgresql-15\n\n\n",
"msg_date": "Wed, 5 Apr 2023 10:57:27 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check whether binaries can be released for s390x"
},
{
"msg_contents": "Re: Alvaro Herrera\n> On 2023-Apr-04, Namrata Bhave wrote:\n> \n> > Hi Jonathan,\n> > \n> > Thank you for getting back.\n> > \n> > The request is mainly for the developer community to build and publish s390x binaries, apologies if I wasn't clear earlier.\n> > We can provide s390x VMs to build and test postgresql binaries if you feel infra is a blocker.\n> > \n> > Please let me know if any more information is needed.\n> \n> I think the audience that needs to hear this is\n> [email protected].\n\nHi Namrata,\n\nwe are interested. To be on par with the other architectures we\nsupport, we'd need access to a build machine with 150GB of\ndisk, 16GB RAM and 8 cores. Host OS Debian stable, root access.\n\nThis would be for the Debian/Ubuntu part; the rpm/yum folks have\ndifferent requirements. (I'm adding Devrim on Cc.)\n\nIs that something that IBM can arrange?\n\nLast time we tried to set this up the showstopper were unreasonable\nlegal requirements from the IBM side, we were asked to sign documents\nthat went far beyond what usual agreements between open source\nprojects and hardware providers are. Fingers crossed this time. :)\n\n> On 4/4/23 8:56 AM, Namrata Bhave wrote:\n> > Hi,\n> > \n> > We are looking forward to get help from community on publishing s390x \n> > binaries. As per downloads page, apt repo supports Ubuntu on\n> > amd,arm,i386 and ppc64le.\n> > \n> > We had reached out earlier and are ready to provide infra if needed. \n> > Wanted to check again if community is willing to help.\n> \n> \n> -- \n> �lvaro Herrera\n\nThanks,\nChristoph\n\n\n",
"msg_date": "Tue, 4 Apr 2023 18:36:02 -0700",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Check whether binaries can be released for s390x"
}
] |
[
{
"msg_contents": "Hi,\n\nFor quite a while I'd been wishing to see *differential* code coverage, to see\nwhat changed in code coverage between two major releases. Unfortunately lcov\ndidn't provide that. A few months ago a PR for it has been merged into lcov\n([1]). There hasn't yet been a release though. And the feature definitely has\nsome rough edges, still.\n\nI'm planning to generate the 15->16 differential code coverage, once the\nfeature freeze has been reached.\n\nAnother nice thing provided by the in-development lcov is hierarchical\noutput. I find that to be much easier to navigate than the current flat\noutput, as e.g. used on coverage.postgresql.org.\n\n\nI've attached a screenshot showing the coverage differences of the backend\nbetween 15 and HEAD, as of a few days ago. \"UNC\" is uncovered new code, \"LBC\"\nis lost baseline coverage.\n\n\nI think for now it'd likely be a small script that'd generate the code\ncoverage across versions. Do we want to have that in the source tree?\n\n\nIs there any interest in a) using the hierarchical output b) differential\noutput on coverage.pg.o?\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/linux-test-project/lcov/pull/169",
"msg_date": "Tue, 4 Apr 2023 09:03:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "differential code coverage"
},
{
"msg_contents": "> On 4 Apr 2023, at 18:03, Andres Freund <[email protected]> wrote:\n\n> I'm planning to generate the 15->16 differential code coverage, once the\n> feature freeze has been reached.\n\nCool!\n\n> I think for now it'd likely be a small script that'd generate the code\n> coverage across versions. Do we want to have that in the source tree?\n\nIf it's published on pg.o like discussed below then I think it makes sense to\ninclude it.\n\n> Is there any interest in a) using the hierarchical output b) differential\n> output on coverage.pg.o?\n\nI would like to see that. If there are concerns about replacing the current\ncoverage report with one from an unreleased (and potentially buggy) lcov, we\ncould perhaps use diff.coverage.pg.o until it's released?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 11:00:39 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: differential code coverage"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-04 09:03:45 -0700, Andres Freund wrote:\n> For quite a while I'd been wishing to see *differential* code coverage, to see\n> what changed in code coverage between two major releases. Unfortunately lcov\n> didn't provide that. A few months ago a PR for it has been merged into lcov\n> ([1]). There hasn't yet been a release though. And the feature definitely has\n> some rough edges, still.\n> \n> I'm planning to generate the 15->16 differential code coverage, once the\n> feature freeze has been reached.\n> \n> Another nice thing provided by the in-development lcov is hierarchical\n> output. I find that to be much easier to navigate than the current flat\n> output, as e.g. used on coverage.postgresql.org.\n\nI've generated the output for 15 vs HEAD, now that we're past feature freeze.\n\nSimpler version:\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/\n\nVersion that tries to date the source lines using information from git:\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-02/\n\nThe various abbreviations are explained (somewhat) if you hover over them.\n\n\nThere's a few interesting bit of new code not covered by teests, that should\neasily be coverable:\n\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/bin/pg_dump/pg_dumpall.c.gcov.html#1014\n\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/utils/adt/array_userfuncs.c.gcov.html#675\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/utils/adt/array_userfuncs.c.gcov.html#921\n\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/utils/adt/pg_locale.c.gcov.html#1964\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/utils/adt/pg_locale.c.gcov.html#2219\n\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/parser/parse_expr.c.gcov.html#3108\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/nodes/nodeFuncs.c.gcov.html#1480\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/nodes/nodeFuncs.c.gcov.html#1652\n\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/src/backend/optimizer/util/plancat.c.gcov.html#348\n\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/contrib/postgres_fdw/connection.c.gcov.html#1316\nhttps://anarazel.de/postgres/cov/15-vs-HEAD-01/contrib/postgres_fdw/deparse.c.gcov.html#399\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Apr 2023 14:42:37 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: differential code coverage"
}
] |
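A rough sketch of how a 15-vs-HEAD comparison like the one linked above could be reproduced, assuming the in-development lcov from this thread: build each tree with --enable-coverage, run the tests, capture one tracefile per tree, then hand the older tracefile to genhtml as the baseline. The --baseline-file and --hierarchical options refer to the unreleased lcov and their exact names may still change; the configure and lcov --capture steps are standard.

    # In a REL_15_STABLE build directory
    ./configure --enable-coverage && make && make check-world
    lcov --capture --directory . --output-file /tmp/pg15.info

    # In a master (HEAD) build directory
    ./configure --enable-coverage && make && make check-world
    lcov --capture --directory . --output-file /tmp/head.info

    # Differential, hierarchical HTML report; the two genhtml options below
    # come from the in-development lcov and are assumptions, not settled names
    genhtml --baseline-file /tmp/pg15.info --hierarchical \
            --output-directory /tmp/15-vs-HEAD /tmp/head.info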
[
{
"msg_contents": "Hi,\n\nLook at:\n\nstatic void\nfill_seq_fork_with_data(Relation rel, HeapTuple tuple, ForkNumber forkNum)\n{\n\tBuffer\t\tbuf;\n\tPage\t\tpage;\n\tsequence_magic *sm;\n\tOffsetNumber offnum;\n\n\t/* Initialize first page of relation with special magic number */\n\n\tbuf = ReadBufferExtended(rel, forkNum, P_NEW, RBM_NORMAL, NULL);\n\tAssert(BufferGetBlockNumber(buf) == 0);\n\n\tpage = BufferGetPage(buf);\n\n\tPageInit(page, BufferGetPageSize(buf), sizeof(sequence_magic));\n\tsm = (sequence_magic *) PageGetSpecialPointer(page);\n\tsm->magic = SEQ_MAGIC;\n\n\t/* Now insert sequence tuple */\n\n\tLockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);\n\n\nClearly we are modifying the page (via PageInit()), without holding a buffer\nlock, which is only acquired subsequently.\n\nIt's clearly unlikely to cause bad consequences - the sequence doesn't yet\nreally exist, and we haven't seen any reports of a problem - but it doesn't\nseem quite impossible that it would cause problems.\n\nAs far as I can tell, this goes back to the initial addition of the sequence\ncode, in e8647c45d66a - I'm too lazy to figure out whether it possibly wasn't\na problem in 1997 for some reason.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Apr 2023 11:55:01 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "fill_seq_fork_with_data() initializes buffer without lock"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-04 11:55:01 -0700, Andres Freund wrote:\n> Look at:\n>\n> static void\n> fill_seq_fork_with_data(Relation rel, HeapTuple tuple, ForkNumber forkNum)\n> {\n> \tBuffer\t\tbuf;\n> \tPage\t\tpage;\n> \tsequence_magic *sm;\n> \tOffsetNumber offnum;\n>\n> \t/* Initialize first page of relation with special magic number */\n>\n> \tbuf = ReadBufferExtended(rel, forkNum, P_NEW, RBM_NORMAL, NULL);\n> \tAssert(BufferGetBlockNumber(buf) == 0);\n>\n> \tpage = BufferGetPage(buf);\n>\n> \tPageInit(page, BufferGetPageSize(buf), sizeof(sequence_magic));\n> \tsm = (sequence_magic *) PageGetSpecialPointer(page);\n> \tsm->magic = SEQ_MAGIC;\n>\n> \t/* Now insert sequence tuple */\n>\n> \tLockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);\n>\n>\n> Clearly we are modifying the page (via PageInit()), without holding a buffer\n> lock, which is only acquired subsequently.\n>\n> It's clearly unlikely to cause bad consequences - the sequence doesn't yet\n> really exist, and we haven't seen any reports of a problem - but it doesn't\n> seem quite impossible that it would cause problems.\n>\n> As far as I can tell, this goes back to the initial addition of the sequence\n> code, in e8647c45d66a - I'm too lazy to figure out whether it possibly wasn't\n> a problem in 1997 for some reason.\n\nRobert suggested to add an assertion to PageInit() to defend against such\nomissions. I quickly hacked one together. The assertion immediately found the\nissue here, but no other currently existing ones.\n\nI'm planning to push a fix for this to HEAD. Given that the risk seems low and\nthe issue is so longstanding, it doesn't seem quite worth backpatching?\n\n\nFWIW, the assertion I used is:\n\n if (page >= BufferBlocks && page <= BufferBlocks + BLCKSZ * NBuffers)\n {\n Buffer buffer = (page - BufferBlocks) / BLCKSZ + 1;\n BufferDesc *buf = GetBufferDescriptor(buffer - 1);\n\n Assert(LWLockHeldByMeInMode(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE));\n }\n\nIf there's interest in having such an assertion permenantly, it clearly can't\nlive in bufpage.c.\n\nI have a bit of a hard time coming up with a good name. Any suggestions?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Apr 2023 16:23:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: fill_seq_fork_with_data() initializes buffer without lock"
}
] |
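For illustration, a minimal sketch of the ordering argued for above: take the buffer content lock before touching the page, so PageInit() never runs on an unlocked buffer. This is simply the quoted snippet rearranged, not necessarily the fix as committed.

    buf = ReadBufferExtended(rel, forkNum, P_NEW, RBM_NORMAL, NULL);
    Assert(BufferGetBlockNumber(buf) == 0);

    /* Acquire the content lock first, then initialize the page under it. */
    LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);

    page = BufferGetPage(buf);
    PageInit(page, BufferGetPageSize(buf), sizeof(sequence_magic));
    sm = (sequence_magic *) PageGetSpecialPointer(page);
    sm->magic = SEQ_MAGIC;

    /* Now insert sequence tuple, still holding the exclusive lock. */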
[
{
"msg_contents": "There are some recent comment that added new options for CREATE SUBSCRIPTION\n\n\"Add new predefined role pg_create_subscription.\" [1]\nThis added a new \"password_required\" option.\n\n\"Add a run_as_owner option to subscriptions.\" [2]\nThis added a \"run_as_owner\" option.\n\n~~\n\nAFAICT the associated tab-complete code was accidentally omitted.\n\nPSA patches to add those tab completions.\n\n------\n[1] https://github.com/postgres/postgres/commit/c3afe8cf5a1e465bd71e48e4bc717f5bfdc7a7d6\n[2] https://github.com/postgres/postgres/commit/482675987bcdffb390ae735cfd5f34b485ae97c6\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 5 Apr 2023 10:27:33 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 5:58 AM Peter Smith <[email protected]> wrote:\n>\n> There are some recent comment that added new options for CREATE SUBSCRIPTION\n>\n...\n> PSA patches to add those tab completions.\n>\n\nLGTM, so pushed. BTW, while looking at this, I noticed that newly\nadded options \"password_required\" and \"run_as_owner\" has incorrectly\nmentioned their datatype as a string in the docs. It should be\nboolean. I think \"password_required\" belongs to first section of docs\nwhich says: \"The following parameters control what happens during\nsubscription creation\".\n\nThe attached patch makes those changes. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 7 Apr 2023 10:58:11 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Apr 5, 2023 at 5:58 AM Peter Smith <[email protected]> wrote:\n> >\n> > There are some recent comment that added new options for CREATE SUBSCRIPTION\n> >\n> ...\n> > PSA patches to add those tab completions.\n> >\n>\n> LGTM, so pushed. BTW, while looking at this, I noticed that newly\n> added options \"password_required\" and \"run_as_owner\" has incorrectly\n> mentioned their datatype as a string in the docs. It should be\n> boolean.\n\n+1\n\n> I think \"password_required\" belongs to first section of docs\n> which says: \"The following parameters control what happens during\n> subscription creation\".\n\nBut the documentation of ALTER SUBSCRIPTION says:\n\nThe parameters that can be altered are slot_name, synchronous_commit,\nbinary, streaming, disable_on_error, password_required, run_as_owner,\nand origin. Only a superuser can set password_required = false.\n\nISTM that both password_required and run_as_owner are parameters to\ncontrol the subscription's behavior, like disable_on_error and\nstreaming. So it looks good to me that password_required belongs to\nthe second section.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 7 Apr 2023 16:41:27 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 1:12 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Apr 7, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Apr 5, 2023 at 5:58 AM Peter Smith <[email protected]> wrote:\n> > >\n> >\n> > LGTM, so pushed. BTW, while looking at this, I noticed that newly\n> > added options \"password_required\" and \"run_as_owner\" has incorrectly\n> > mentioned their datatype as a string in the docs. It should be\n> > boolean.\n>\n> +1\n>\n> > I think \"password_required\" belongs to first section of docs\n> > which says: \"The following parameters control what happens during\n> > subscription creation\".\n>\n> But the documentation of ALTER SUBSCRIPTION says:\n>\n> The parameters that can be altered are slot_name, synchronous_commit,\n> binary, streaming, disable_on_error, password_required, run_as_owner,\n> and origin. Only a superuser can set password_required = false.\n>\n\nBy the above, do you intend to say that all the parameters that can be\naltered are in the second list? If so, slot_name belongs to the first\ncategory.\n\n> ISTM that both password_required and run_as_owner are parameters to\n> control the subscription's behavior, like disable_on_error and\n> streaming. So it looks good to me that password_required belongs to\n> the second section.\n>\n\nDo you mean that because 'password_required' is used each time we make\na connection to a publisher during replication, it should be in the\nsecond category? If so, slot_name is also used during the start\nreplication each time.\n\nBTW, do we need to check one or both of these parameters in\nmaybe_reread_subscription() where we \"Exit if any parameter that\naffects the remote connection was changed.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 7 Apr 2023 14:40:36 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Friday, April 7, 2023 5:11 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Fri, Apr 7, 2023 at 1:12 PM Masahiko Sawada <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Fri, Apr 7, 2023 at 2:28 PM Amit Kapila <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On Wed, Apr 5, 2023 at 5:58 AM Peter Smith <[email protected]>\r\n> wrote:\r\n> > > >\r\n> > >\r\n> > > LGTM, so pushed. BTW, while looking at this, I noticed that newly\r\n> > > added options \"password_required\" and \"run_as_owner\" has incorrectly\r\n> > > mentioned their datatype as a string in the docs. It should be\r\n> > > boolean.\r\n> >\r\n> > +1\r\n> >\r\n> > > I think \"password_required\" belongs to first section of docs which\r\n> > > says: \"The following parameters control what happens during\r\n> > > subscription creation\".\r\n> >\r\n> > But the documentation of ALTER SUBSCRIPTION says:\r\n> >\r\n> > The parameters that can be altered are slot_name, synchronous_commit,\r\n> > binary, streaming, disable_on_error, password_required, run_as_owner,\r\n> > and origin. Only a superuser can set password_required = false.\r\n> >\r\n> \r\n> By the above, do you intend to say that all the parameters that can be altered\r\n> are in the second list? If so, slot_name belongs to the first category.\r\n> \r\n> > ISTM that both password_required and run_as_owner are parameters to\r\n> > control the subscription's behavior, like disable_on_error and\r\n> > streaming. So it looks good to me that password_required belongs to\r\n> > the second section.\r\n> >\r\n> \r\n> Do you mean that because 'password_required' is used each time we make a\r\n> connection to a publisher during replication, it should be in the second\r\n> category? If so, slot_name is also used during the start replication each time.\r\n> \r\n> BTW, do we need to check one or both of these parameters in\r\n> maybe_reread_subscription() where we \"Exit if any parameter that affects the\r\n> remote connection was changed.\"\r\n\r\nI think changing run_as_owner doesn't require to be checked as it only affect\r\nthe role to perform the apply. But it seems password_required need to be\r\nchecked in maybe_reread_subscription() because we used this parameter for\r\nconnection.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Fri, 7 Apr 2023 10:35:33 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 6:10 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Apr 7, 2023 at 1:12 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Fri, Apr 7, 2023 at 2:28 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Wed, Apr 5, 2023 at 5:58 AM Peter Smith <[email protected]> wrote:\n> > > >\n> > >\n> > > LGTM, so pushed. BTW, while looking at this, I noticed that newly\n> > > added options \"password_required\" and \"run_as_owner\" has incorrectly\n> > > mentioned their datatype as a string in the docs. It should be\n> > > boolean.\n> >\n> > +1\n> >\n> > > I think \"password_required\" belongs to first section of docs\n> > > which says: \"The following parameters control what happens during\n> > > subscription creation\".\n> >\n> > But the documentation of ALTER SUBSCRIPTION says:\n> >\n> > The parameters that can be altered are slot_name, synchronous_commit,\n> > binary, streaming, disable_on_error, password_required, run_as_owner,\n> > and origin. Only a superuser can set password_required = false.\n> >\n>\n> By the above, do you intend to say that all the parameters that can be\n> altered are in the second list? If so, slot_name belongs to the first\n> category.\n>\n> > ISTM that both password_required and run_as_owner are parameters to\n> > control the subscription's behavior, like disable_on_error and\n> > streaming. So it looks good to me that password_required belongs to\n> > the second section.\n> >\n>\n> Do you mean that because 'password_required' is used each time we make\n> a connection to a publisher during replication, it should be in the\n> second category? If so, slot_name is also used during the start\n> replication each time.\n\nI think that parameters used by the backend process when performing\nCREATE SUBSCRIPTION belong to the first category. And other parameters\nused by apply workers and tablesync workers belong to the second\ncategory. Since slot_name is used by both I'm not sure it should be in\nthe second category, but password_requried seems to be used by only\napply workers and tablesync workers, so it should be in the second\ncategory.\n\n>\n> BTW, do we need to check one or both of these parameters in\n> maybe_reread_subscription() where we \"Exit if any parameter that\n> affects the remote connection was changed.\"\n\nAs for run_as_owner, since we can dynamically switch the behavior I\nthink we don't need to reconnect. I'm not really sure about\npassword_required. From the implementation point of view, we don't\nneed to reconnect. Even if password_required is changed from false to\ntrue, the apply worker already has the established connection. If it's\nchanged from true to false, we might not want to reconnect. I think we\nneed to consider it from the security point of view while checking the\nmotivation that password_required was introduced. So probably it's\nbetter to discuss it on the original thread.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 7 Apr 2023 22:28:43 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 6:59 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Fri, Apr 7, 2023 at 6:10 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Apr 7, 2023 at 1:12 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> >\n> > Do you mean that because 'password_required' is used each time we make\n> > a connection to a publisher during replication, it should be in the\n> > second category? If so, slot_name is also used during the start\n> > replication each time.\n>\n> I think that parameters used by the backend process when performing\n> CREATE SUBSCRIPTION belong to the first category. And other parameters\n> used by apply workers and tablesync workers belong to the second\n> category. Since slot_name is used by both I'm not sure it should be in\n> the second category, but password_requried seems to be used by only\n> apply workers and tablesync workers, so it should be in the second\n> category.\n>\n\nHmm, don't we use the option \"password_requried\" during CREATE\nSUBSCRIPTION when we have to connect? The second category is more\nabout parameters that define the replication behavior so it is not\nclear to me how this falls in that category. See the initial\ndiscussion which led to the current situation [1]. Anyway, for now, I\nhave just committed the fix for the datatype as we have not reached a\nconsensus on this one.\n\n> >\n> > BTW, do we need to check one or both of these parameters in\n> > maybe_reread_subscription() where we \"Exit if any parameter that\n> > affects the remote connection was changed.\"\n>\n> As for run_as_owner, since we can dynamically switch the behavior I\n> think we don't need to reconnect. I'm not really sure about\n> password_required. From the implementation point of view, we don't\n> need to reconnect. Even if password_required is changed from false to\n> true, the apply worker already has the established connection. If it's\n> changed from true to false, we might not want to reconnect. I think we\n> need to consider it from the security point of view while checking the\n> motivation that password_required was introduced. So probably it's\n> better to discuss it on the original thread.\n>\n\nAgreed and responded to the original thread [2].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Kmu74xHk2jcHTmKq8HBj3xK6n%3DRfiJB6dfV5zVSqqiFg%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1%2Bz9UDFEynXLsWeMMuUZc1iQkRwj2HNDtxUHTPo-u1F4A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 8 Apr 2023 11:08:57 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Fri, 7 Apr 2023 at 01:28, Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Apr 5, 2023 at 5:58 AM Peter Smith <[email protected]> wrote:\n>\n> > PSA patches to add those tab completions.\n>\n> LGTM, so pushed.\n\nI moved this to the next CF but actually I just noticed the thread\nstarts with the original patch being pushed. Maybe we should just\nclose the CF entry? Is this further change relevant?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Sun, 9 Apr 2023 02:02:43 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 11:33 AM Gregory Stark (as CFM)\n<[email protected]> wrote:\n>\n> On Fri, 7 Apr 2023 at 01:28, Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Apr 5, 2023 at 5:58 AM Peter Smith <[email protected]> wrote:\n> >\n> > > PSA patches to add those tab completions.\n> >\n> > LGTM, so pushed.\n>\n> I moved this to the next CF but actually I just noticed the thread\n> starts with the original patch being pushed. Maybe we should just\n> close the CF entry? Is this further change relevant?\n>\n\nI have closed the CF entry as the patch for which this entry was\ncreated is already committed. If anything comes as a result of further\ndiscussion, I'll take care of it, or if required we can start a\nseparate thread.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Apr 2023 08:46:31 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 9:29 AM Masahiko Sawada <[email protected]> wrote:\n> I think that parameters used by the backend process when performing\n> CREATE SUBSCRIPTION belong to the first category. And other parameters\n> used by apply workers and tablesync workers belong to the second\n> category. Since slot_name is used by both I'm not sure it should be in\n> the second category, but password_requried seems to be used by only\n> apply workers and tablesync workers, so it should be in the second\n> category.\n\nI agree. I think actually the current division is quite odd. The only\nparameters that strictly affect the CREATE SUBSCRIPTION command are\n\"connect\" and \"create_slot\". \"enabled\" and \"slot_name\" clearly control\nlater behavior, because you can alter both of them later, with ALTER\nSUBSCRIPTION! The \"enabled\" parameter is changed using different\nsyntax, ALTER SUBSCRIPTION .. ENABLE | DISABLE instead of ALTER\nSUBSCRIPTION ... SET (enabled = true | false), which is possibly not\nthe best choice, but regardless of that, these parameters clearly\naffect behavior later, not just at CREATE SUBSCRIPTION time.\n\nProbably we ought to just collapse the sections together somehow, and\nuse the text to clarify the exact behavior as required. I definitely\ndisagree with the idea of moving the new parameters to the other\nsection -- that's clearly wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Apr 2023 12:24:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE SUBSCRIPTION -- add missing tab-completes"
}
] |
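The datatype discussion above is easier to follow with a concrete example. Both options take boolean values; the subscription, connection string and publication names below are placeholders, not anything from the thread.

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=src'
        PUBLICATION mypub
        WITH (password_required = true, run_as_owner = false);

    -- run_as_owner (and password_required, for superusers) can later be
    -- changed with ALTER SUBSCRIPTION ... SET, per the docs quoted above.
    ALTER SUBSCRIPTION mysub SET (run_as_owner = true);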
[
{
"msg_contents": "There seems to be a comment typo in the recent commit \"Perform logical\nreplication actions as the table owner\" [1].\n\n/*\n * Switch back to the original user ID.\n *\n * If we created a new GUC nest level, also role back any changes that were\n * made within it.\n */\n\n\n/role back/rollback/\n\n~~\n\nPSA a tiny patch to fix that.\n\n------\n[1] https://github.com/postgres/postgres/commit/1e10d49b65d6c26c61fee07999e4cd59eab2b765\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 5 Apr 2023 10:29:09 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Comment typo in recent push"
},
{
"msg_contents": "On Tue, Apr 4, 2023 at 8:29 PM Peter Smith <[email protected]> wrote:\n> There seems to be a comment typo in the recent commit \"Perform logical\n> replication actions as the table owner\" [1].\n>\n> /*\n> * Switch back to the original user ID.\n> *\n> * If we created a new GUC nest level, also role back any changes that were\n> * made within it.\n> */\n>\n>\n> /role back/rollback/\n>\n> ~~\n>\n> PSA a tiny patch to fix that.\n\nGood catch, but I think it should be roll back (a verb) not rollback (a noun).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Apr 2023 08:59:12 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in recent push"
},
{
"msg_contents": "Thanks for pushing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 6 Apr 2023 08:04:52 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Comment typo in recent push"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nThis Koshi Shibagaki.\nI found out that there is a mistake written in contrib/postgres_fdw/postgres_fdw.c.\n\nPatch file is attached.\n\nThe non-existent function name \" ExecCheckRTEPerms \" was written in \nthe comment in postgresBeginForeignScan.\nThis mistake is considered to have occurred at commit ID: a61b1f74.\nThe function name was changed to \"ExecCheckPermissions\" and comments \nrelated to this change was fixed, \nhowever only the comment in postgresBeginForeignScan was not fixed.\n\nBest\n\n-----------------------------------------------\nFujitsu Limited\nKoshi Shibagaki\[email protected]",
"msg_date": "Wed, 5 Apr 2023 05:27:48 +0000",
"msg_from": "\"Koshi Shibagaki (Fujitsu)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix code comment in postgres_fdw.c"
},
{
"msg_contents": "> On 5 Apr 2023, at 07:27, Koshi Shibagaki (Fujitsu) <[email protected]> wrote:\n\n> I found out that there is a mistake written in contrib/postgres_fdw/postgres_fdw.c.\n\nThanks for the report, fixed.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 09:11:47 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix code comment in postgres_fdw.c"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nParallelism has been in core since 9.6, it's a great feature that got \nseveral\nupgrades since then. However, it lacks metrics to determine if and how\nparallelism is used and help tune parameters related to it.\n\nCurrently, the only information available are pg_stat_activity.backend_type\nand pg_stat_activity.leader_pid. These could be sampled to get statistics\nabout the number of queries that are using parallel workers and the \nnumber of\nworkers spawned (globally or per statement), but this is not ideal because:\n\n* the sampling period would require a high frequency to get stats\n close enough from reality without missing lots of short duration\n queries;\n* with sampling we cannot get an accurate count of parallel queries;\n* we don't know how many queries can't get the workers they asked for.\n\nWe thought about several places where we could add some user facing \nmetrics, and would\nlike some input about the design before working on a larger patch. The \nvarious chosen\nnames are obviously not settled.\n\n# Traces\n\nWe could add a GUC \"log_parallel_draught\": it would add a message in the \nlogs when a\nquery or utility asks for parallel workers but can't get all of them.\n\nThe message could look like this. It could be issued several times per query\nsince workers can be requested for different parts of the plan.\n\n LOG: Parallel worker draught detected: worker launched: 0, requested: 2\n STATEMENT: explain analyze select * from pq_foo inner join pq_bar \nusing(id);\n\n LOG: Parallel worker draught detected: worker launched: 0, requested: 1\n CONTEXT: while scanning relation \"public.pv_tbl\"\n STATEMENT: VACUUM (PARALLEL 2, VERBOSE) pv_tbl;\n\n LOG: Parallel worker draught detected: worker launched: 0, requested: 1\n STATEMENT: CREATE INDEX ON pi_tbl(i);\n\nThis could be used in tools like pgBadger to aggregate stats\non statements that didn't get their workers, but we might need additionnal\ninformation to know why we are lacking workers.\n\nWe have a working PoC patch for this since it seems the most\nstraightforward to implement and use.\n\n# pg_stat_bgworker view\n\nI was initially thinking about metrics like:\n* number of parallel queries\n* number of parallel queries that didn't get their workers\nBut without a number of eligible queries, it's not very useful.\n\nInstead, some metrics could be useful:\n* how many workers were requested\n* how many workers were obtained.\nThe data would be updated as the workers are spawned\n(or aren't). It would be interesting to have this information per\nbackground worker type in order to identify which pool is the source of a\nparallel worker draught.\n\nThe view could look like this:\n\n* bgworker_type: possible values would be: logical replication worker / \nparallel\nworker / parallel maintenance worker / a name given by an extension;\n* datname: the database where the workers were connected if applicable, \nor null\n otherwise;\n* active: number of currently running workers;\n* requested: number of requested workers ;\n* obtained: number of obtained workers ;\n* duration: the aggregation of all durations; we could update this field \nwhen a\n background worker finishes and add the duration from the one still \nrunning to\n produce an more accurate number;\n* stats_reset: the reset would be handled the same way other pg_stat* views\n handle it.\n\nThe parallel maintenance worker type doesn't exist in pg_stat_activity. 
\nI think\nit would be worthwhile to add it since this kind of parallel worker has it's\nown pool.\n\nThis view could be used by monitoring or metrology tools to raise alerts or\ntrace graphs of the background worker usage, and determine if, when and \nwhere\nthere is a shortage of workers.\n\nTools like pg_activity, check_postgres/check_pgactivity or prometheus\nexporters could use these stats.\n\n# pg_stat_statements\n\nThis view is dedicated to per-query statistics. We could add a few metrics\nrelated to parallelism:\n\n* parallelized_calls: how many executions were planned with parallelism;\n* parallelized_draught_calls: how many executions were planned with \nparallelism but\n didn't get all their workers;\n* parallel_workers_requested: how many workers were requested for this \nparallel\n statement;\n* parallel_workers_total: how many workers were obtained for this \nparallel statement;\n\nThe information is useful to detect queries that didn't get their \nworkers on a\nregular basis. If it's sampled we could know when. It could be used by tools\nlike POWA to eg. visualize the query runtime depending on the number of\nworkers, the moment of the day it lacks the requested workers, etc.\n\nThe two last could help estimate if a query makes a heavy use of \nparallelism.\n\nNote: I have skimmed throught the thread \"Expose Parallelism counters \nplanned/execute\nin pg_stat_statements\" [1] and still need to take a closer look at it.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/6acbe570-068e-bd8e-95d5-00c737b865e8%40gmail.com\n\n# pg_stat_all_tables and pg_stat_all_indexes\n\nWe could add a parallel_seq_scan counter to pg_stat_all_tables. The column\nwould be incremented for each worker participating in a scan. The leader\nwould also increment the counter if it is participating.\n\nThe same thing could be done to pg_stat_all_indexes with a \nparallel_index_scan\ncolumn.\n\nThese metrics could be used in relation to system stats and other PostgreSQL\nmetrics such as pg_statio_* in tools like POWA.\n\n# Workflow\n\nAn overview of the backgroud worker usage could be viewed via the\npg_stat_bgworker view. It could help detect, and in some cases explain, \nparallel\nworkers draughts. 
It would also help adapt the size of the worker pools and\nprompt us to look into the logs or pg_stat_statements.\n\nThe statistics gathered in pg_stat_statements can be used the usual way:\n* have an idea of the parallel query usage on the server;\n* detect queries that starve from lack of parallel workers;\n* compare snapshots to see the impact of parameter modifications;\n* combine the statistics with other sources to know:\n * if the decrease in parallel workers had on impact on the average \nexecution duration\n * if the increase in parallel workers allocation had an impact on the \nsystem\n time;\n\nThe logs can be used to pin point specific queries with their parameters or\nto get global statistics when pg_stat_statements is not available or \ncan't be\nused.\n\nOnce a query is singled out, it can be analysed as usual with EXPLAIN to\ndetermine:\n* if the lack of workers is a problem;\n* how parallelism helps in this particular case.\n\nFinally, the per relation statitics could be combined with system and other\nPostgreSQL metrics to identify why the storage is stressed.\n\n\nIf you reach this point, thank you for reading me!\n\nMany thanks to Melanie Plageman for the pointers she shared with us \naround the\npgsessions in Paris and her time in general.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Wed, 5 Apr 2023 15:00:53 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Parallel Query Stats"
},
{
"msg_contents": "Hi Benoit,\n\nOn 4/5/23 15:00, Benoit Lobréau wrote:\n> Hi hackers,\n> \n> Parallelism has been in core since 9.6, it's a great feature that got\n> several\n> upgrades since then. However, it lacks metrics to determine if and how\n> parallelism is used and help tune parameters related to it.\n> \n\nTrue.\n\n> Currently, the only information available are pg_stat_activity.backend_type\n> and pg_stat_activity.leader_pid. These could be sampled to get statistics\n> about the number of queries that are using parallel workers and the\n> number of\n> workers spawned (globally or per statement), but this is not ideal because:\n> \n> * the sampling period would require a high frequency to get stats\n> close enough from reality without missing lots of short duration\n> queries;\n> * with sampling we cannot get an accurate count of parallel queries;\n> * we don't know how many queries can't get the workers they asked for.\n> \n> We thought about several places where we could add some user facing\n> metrics, and would\n> like some input about the design before working on a larger patch. The\n> various chosen\n> names are obviously not settled.\n> \n\nI agree just sampling pg_stat_activity is insufficient to get a good\noverview and decide whether an adjustment of the parallel workers (or\nother GUCs) is needed.\n\n> # Traces\n> \n> We could add a GUC \"log_parallel_draught\": it would add a message in the\n> logs when a\n> query or utility asks for parallel workers but can't get all of them.\n> \n> The message could look like this. It could be issued several times per\n> query\n> since workers can be requested for different parts of the plan.\n> \n> LOG: Parallel worker draught detected: worker launched: 0, requested: 2\n> STATEMENT: explain analyze select * from pq_foo inner join pq_bar\n> using(id);\n> \n> LOG: Parallel worker draught detected: worker launched: 0, requested: 1\n> CONTEXT: while scanning relation \"public.pv_tbl\"\n> STATEMENT: VACUUM (PARALLEL 2, VERBOSE) pv_tbl;\n> \n> LOG: Parallel worker draught detected: worker launched: 0, requested: 1\n> STATEMENT: CREATE INDEX ON pi_tbl(i);\n> \n> This could be used in tools like pgBadger to aggregate stats\n> on statements that didn't get their workers, but we might need additionnal\n> information to know why we are lacking workers.\n> \n> We have a working PoC patch for this since it seems the most\n> straightforward to implement and use.\n> \n\nI commented on this in the separate thread nearby.\n\n> # pg_stat_bgworker view\n> \n> I was initially thinking about metrics like:\n> * number of parallel queries\n> * number of parallel queries that didn't get their workers\n> But without a number of eligible queries, it's not very useful.\n> \n> Instead, some metrics could be useful:\n> * how many workers were requested\n> * how many workers were obtained.\n> The data would be updated as the workers are spawned\n> (or aren't). 
It would be interesting to have this information per\n> background worker type in order to identify which pool is the source of a\n> parallel worker draught.\n> \n> The view could look like this:\n> \n> * bgworker_type: possible values would be: logical replication worker /\n> parallel\n> worker / parallel maintenance worker / a name given by an extension;\n> * datname: the database where the workers were connected if applicable,\n> or null\n> otherwise;\n> * active: number of currently running workers;\n> * requested: number of requested workers ;\n> * obtained: number of obtained workers ;\n> * duration: the aggregation of all durations; we could update this field\n> when a\n> background worker finishes and add the duration from the one still\n> running to\n> produce an more accurate number;\n> * stats_reset: the reset would be handled the same way other pg_stat* views\n> handle it.\n> \n> The parallel maintenance worker type doesn't exist in pg_stat_activity.\n> I think\n> it would be worthwhile to add it since this kind of parallel worker has\n> it's\n> own pool.\n> \n> This view could be used by monitoring or metrology tools to raise alerts or\n> trace graphs of the background worker usage, and determine if, when and\n> where\n> there is a shortage of workers.\n> \n> Tools like pg_activity, check_postgres/check_pgactivity or prometheus\n> exporters could use these stats.\n> \n\nI'm not against adding a new statistics view like the one you describe,\nbut maybe it'd be better to start with just adding something basic to\npg_stat_database?\n\nI think a minimum improvement would be to extend pg_stat_database with\nthe number of requested and started parallel workers, and perhaps also\nthe number of running parallel workers (similar to numbackends).\n\nNot sure about the \"duration\" - it seems pretty different from the\nworker counters, and the aggregate for all queries does not seem\nparticularly useful (especially if not knowing the number of queries).\n\nAnd we already have this in pg_stat_statements ...\n\n> # pg_stat_statements\n> \n> This view is dedicated to per-query statistics. We could add a few metrics\n> related to parallelism:\n> \n> * parallelized_calls: how many executions were planned with parallelism;\n> * parallelized_draught_calls: how many executions were planned with\n> parallelism but\n> didn't get all their workers;\n> * parallel_workers_requested: how many workers were requested for this\n> parallel\n> statement;\n> * parallel_workers_total: how many workers were obtained for this\n> parallel statement;\n> \n> The information is useful to detect queries that didn't get their\n> workers on a\n> regular basis. If it's sampled we could know when. It could be used by\n> tools\n> like POWA to eg. visualize the query runtime depending on the number of\n> workers, the moment of the day it lacks the requested workers, etc.\n> \n> The two last could help estimate if a query makes a heavy use of\n> parallelism.\n> \n> Note: I have skimmed throught the thread \"Expose Parallelism counters\n> planned/execute\n> in pg_stat_statements\" [1] and still need to take a closer look at it.\n> \n> [1]\n> https://www.postgresql.org/message-id/flat/6acbe570-068e-bd8e-95d5-00c737b865e8%40gmail.com\n> \n\nI'm not sure the parallelized_calls counter would be very useful. If two\nqueries are parallelized, it doesn't say they ended up with the same\nnumber of gather nodes, and so on. 
If someone wants to track this kind\nof details, maybe something like pg_stat_plans would be better?\n\nI think I'd start with just adding the same counters requested/started\ncounters proposed for pg_stat_database already.\n\n> # pg_stat_all_tables and pg_stat_all_indexes\n> \n> We could add a parallel_seq_scan counter to pg_stat_all_tables. The column\n> would be incremented for each worker participating in a scan. The leader\n> would also increment the counter if it is participating.\n> \n> The same thing could be done to pg_stat_all_indexes with a\n> parallel_index_scan\n> column.\n> \n> These metrics could be used in relation to system stats and other\n> PostgreSQL\n> metrics such as pg_statio_* in tools like POWA.\n> \n\nI haven't thought too much about how I'd use these counters, but I agree\nit might be useful. I'm not sure we'd want to increment the \"parallel\"\ncounters for each worker, though - I think logically it's still just a\nsingle parallel scan. It seems natural to ask \"what fraction of index\nscans is parallel?\" but with counting every worker, that'd be impossible\nto calculate.\n\nI'm not sure if we should add \"parallel\" versions of the other counters\nin those views (e.g. idx_tup_read -> parallel_idx_tup_read).\n\n> # Workflow\n> \n> An overview of the backgroud worker usage could be viewed via the\n> pg_stat_bgworker view. It could help detect, and in some cases explain,\n> parallel\n> workers draughts. It would also help adapt the size of the worker pools and\n> prompt us to look into the logs or pg_stat_statements.\n> \n> The statistics gathered in pg_stat_statements can be used the usual way:\n> * have an idea of the parallel query usage on the server;\n> * detect queries that starve from lack of parallel workers;\n> * compare snapshots to see the impact of parameter modifications;\n> * combine the statistics with other sources to know:\n> * if the decrease in parallel workers had on impact on the average\n> execution duration\n> * if the increase in parallel workers allocation had an impact on the\n> system\n> time;\n> \n> The logs can be used to pin point specific queries with their parameters or\n> to get global statistics when pg_stat_statements is not available or\n> can't be\n> used.\n> \n> Once a query is singled out, it can be analysed as usual with EXPLAIN to\n> determine:\n> * if the lack of workers is a problem;\n> * how parallelism helps in this particular case.\n> \n> Finally, the per relation statitics could be combined with system and other\n> PostgreSQL metrics to identify why the storage is stressed.\n> \n\nI'm not sure the goal would be singling out a particular query - I think\nmost of the time we'd be dealing with hitting the limit of (parallel)\nworkers, and that's a global limit, not something query-specific. But it\ncould help with identifying that the query duration increase correlates\nwith the drop of number of started parallel workers. Or stuff like that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 25 Feb 2024 21:23:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Query Stats"
}
] |
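To make the sampling approach mentioned at the top of the thread concrete, this is roughly what monitoring tools can do today with the existing leader_pid column and the 'parallel worker' backend_type in pg_stat_activity. It only sees workers alive at the instant of sampling, which is exactly the limitation the proposed cumulative counters are meant to address.

    -- Snapshot of currently running parallel workers, grouped by leader.
    -- Short-lived parallel queries that finish between samples are missed.
    SELECT a.leader_pid,
           l.query,
           count(*) AS active_parallel_workers
    FROM pg_stat_activity AS a
    JOIN pg_stat_activity AS l ON l.pid = a.leader_pid
    WHERE a.backend_type = 'parallel worker'
    GROUP BY a.leader_pid, l.query;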
[
{
"msg_contents": "Not sure if this is the right place to ask but I've tried to build using\nthis source rpm\nhttps://ftp.postgresql.org/pub/repos/yum/srpms/13/redhat/rhel-9-x86_64/postgresql13-13.9-1PGDG.rhel9.src.rpm\non a RHEL 9 system and gotten the follow error:\nrpmbuild -bb SPECS/postgresql-13.spec\nwarning: line 80: Possible unexpanded macro in: Name:\npostgresql%{pgmajorversion}\nwarning: line 280: Possible unexpanded macro in: Provides: postgresql-libs\n= %{pgmajorversion} libpq5 >= 10.0\nwarning: line 483: Possible unexpanded macro in: Obsoletes:\npostgresql%{pgmajorversion}-pl <= 13.9-1PGDG.el9\nwarning: line 505: Possible unexpanded macro in: Obsoletes:\npostgresql%{pgmajorversion}-pl <= 13.9-1PGDG.el9\nwarning: line 534: Possible unexpanded macro in: Obsoletes:\npostgresql%{pgmajorversion}-pl <= 13.9-1PGDG.el9\nerror: Bad source:\n/root/rpmbuild/SOURCES/postgresql-%{pgmajorversion}-rpm-pgsql.patch: No\nsuch file or directory\n\nI don't see pgmajorversion in the spec file, what am I missing?\n\nTed\n\nNot sure if this is the right place to ask but I've tried to build using this source rpm https://ftp.postgresql.org/pub/repos/yum/srpms/13/redhat/rhel-9-x86_64/postgresql13-13.9-1PGDG.rhel9.src.rpm on a RHEL 9 system and gotten the follow error:rpmbuild -bb SPECS/postgresql-13.spec warning: line 80: Possible unexpanded macro in: Name:\t\tpostgresql%{pgmajorversion}warning: line 280: Possible unexpanded macro in: Provides:\tpostgresql-libs = %{pgmajorversion} libpq5 >= 10.0warning: line 483: Possible unexpanded macro in: Obsoletes:\tpostgresql%{pgmajorversion}-pl <= 13.9-1PGDG.el9warning: line 505: Possible unexpanded macro in: Obsoletes:\tpostgresql%{pgmajorversion}-pl <= 13.9-1PGDG.el9warning: line 534: Possible unexpanded macro in: Obsoletes:\tpostgresql%{pgmajorversion}-pl <= 13.9-1PGDG.el9error: Bad source: /root/rpmbuild/SOURCES/postgresql-%{pgmajorversion}-rpm-pgsql.patch: No such file or directoryI don't see pgmajorversion in the spec file, what am I missing?Ted",
"msg_date": "Wed, 5 Apr 2023 09:01:18 -0500",
"msg_from": "Ted Toth <[email protected]>",
"msg_from_op": true,
"msg_subject": "building from el9 src rpm"
}
] |
[
{
"msg_contents": "I wrote an optimizer talk that explains memoize, slides 24-25:\n\n\thttps://momjian.us/main/writings/pgsql/beyond.pdf#page=25\n\nDuring two presentations, I was asked if negative cache entries were\ncreated for cases where inner-side lookups returned no rows.\n\nIt seems we don't do that. Has this been considered or is it planned?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Wed, 5 Apr 2023 11:12:36 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Negative cache entries for memoize"
},
{
"msg_contents": "On Thu, 6 Apr 2023 at 03:12, Bruce Momjian <[email protected]> wrote:\n> During two presentations, I was asked if negative cache entries were\n> created for cases where inner-side lookups returned no rows.\n>\n> It seems we don't do that. Has this been considered or is it planned?\n\nIt does allow negative cache entries, so I'm curious about what you\ndid to test this.\n\nA cache entry is always marked as complete (i.e valid to use for\nlookups) when we execute the subnode to completion. In some plan\nshapes we might not execute the inner side until it returns NULL, for\nexample in Nested Loop Semi Joins we skip to the next outer row when\nmatching the first inner row. This could leave an incomplete cache\nentry which Memoize can't be certain if it contains all rows from the\nsubnode or not.\n\nFor the negative entry case, which really there is no special code\nfor, there are simply just no matching rows so the cache entry will be\nmarked as complete always as the inner node will return NULL on the\nfirst call. So negative entries will even work in the semi-join case.\n\nHere's a demo of the negative entries working with normal joins:\n\ncreate table t0 (a int);\ninsert into t0 select 0 from generate_Series(1,1000000);\ncreate table t1 (a int primary key);\ninsert into t1 select x from generate_series(1,1000000)x;\nvacuum analyze t0,t1;\nexplain (analyze, costs off, timing off, summary off)\nselect * from t0 inner join t1 on t0.a=t1.a;\n QUERY PLAN\n--------------------------------------------------------------------------------\n Nested Loop (actual rows=0 loops=1)\n -> Seq Scan on t0 (actual rows=1000000 loops=1)\n -> Memoize (actual rows=0 loops=1000000)\n Cache Key: t0.a\n Cache Mode: logical\n Hits: 999999 Misses: 1 Evictions: 0 Overflows: 0 Memory Usage: 1kB\n -> Index Only Scan using t1_pkey on t1 (actual rows=0 loops=1)\n Index Cond: (a = t0.a)\n Heap Fetches: 0\n\nDavid\n\n\n",
"msg_date": "Thu, 6 Apr 2023 09:23:31 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Negative cache entries for memoize"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 09:23:31AM +1200, David Rowley wrote:\n> On Thu, 6 Apr 2023 at 03:12, Bruce Momjian <[email protected]> wrote:\n> > During two presentations, I was asked if negative cache entries were\n> > created for cases where inner-side lookups returned no rows.\n> >\n> > It seems we don't do that. Has this been considered or is it planned?\n> \n> It does allow negative cache entries, so I'm curious about what you\n> did to test this.\n\nMy mistake. Someone asked in Los Angeles and Jan Wieck checked during\nthe talk and said he didn't see it, and when someone asked in Moscow, I\nrepeated that answer. My mistake. I have updated the slides with the\ncorrect information.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Wed, 5 Apr 2023 17:51:15 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Negative cache entries for memoize"
}
] |
[
{
"msg_contents": "Add smgrzeroextend(), FileZero(), FileFallocate()\n\nsmgrzeroextend() uses FileFallocate() to efficiently extend files by multiple\nblocks. When extending by a small number of blocks, use FileZero() instead, as\nusing posix_fallocate() for small numbers of blocks is inefficient for some\nfile systems / operating systems. FileZero() is also used as the fallback for\nFileFallocate() on platforms / filesystems that don't support fallocate.\n\nA big advantage of using posix_fallocate() is that it typically won't cause\ndirty buffers in the kernel pagecache. So far the most common pattern in our\ncode is that we smgrextend() a page full of zeroes and put the corresponding\npage into shared buffers, from where we later write out the actual contents of\nthe page. If the kernel, e.g. due to memory pressure or elapsed time, already\nwrote back the all-zeroes page, this can lead to doubling the amount of writes\nreaching storage.\n\nThere are no users of smgrzeroextend() as of this commit. That will follow in\nfuture commits.\n\nReviewed-by: Melanie Plageman <[email protected]>\nReviewed-by: Heikki Linnakangas <[email protected]>\nReviewed-by: Kyotaro Horiguchi <[email protected]>\nReviewed-by: David Rowley <[email protected]>\nReviewed-by: John Naylor <[email protected]>\nDiscussion: https://postgr.es/m/[email protected]\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/4d330a61bb1969df31f2cebfe1ba9d1d004346d8\n\nModified Files\n--------------\nsrc/backend/storage/file/fd.c | 88 ++++++++++++++++++++++++++++++++\nsrc/backend/storage/smgr/md.c | 108 ++++++++++++++++++++++++++++++++++++++++\nsrc/backend/storage/smgr/smgr.c | 28 +++++++++++\nsrc/include/storage/fd.h | 3 ++\nsrc/include/storage/md.h | 2 +\nsrc/include/storage/smgr.h | 2 +\n6 files changed, 231 insertions(+)",
"msg_date": "Wed, 05 Apr 2023 17:27:12 +0000",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Add smgrzeroextend(), FileZero(), FileFallocate()"
},
{
"msg_contents": "Re: Andres Freund\n> Add smgrzeroextend(), FileZero(), FileFallocate()\n\nHi,\n\nI'm often seeing PG16 builds erroring out in the pgbench tests:\n\n00:33:12 make[2]: Entering directory '/<<PKGBUILDDIR>>/build/src/bin/pgbench'\n00:33:12 echo \"# +++ tap check in src/bin/pgbench +++\" && rm -rf '/<<PKGBUILDDIR>>/build/src/bin/pgbench'/tmp_check && /bin/mkdir -p '/<<PKGBUILDDIR>>/build/src/bin/pgbench'/tmp_check && cd /<<PKGBUILDDIR>>/build/../src/bin/pgbench && TESTLOGDIR='/<<PKGBUILDDIR>>/build/src/bin/pgbench/tmp_check/log' TESTDATADIR='/<<PKGBUILDDIR>>/build/src/bin/pgbench/tmp_check' PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/postgresql/16/bin:/<<PKGBUILDDIR>>/build/src/bin/pgbench:$PATH\" LD_LIBRARY_PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/aarch64-linux-gnu\" PGPORT='65432' top_builddir='/<<PKGBUILDDIR>>/build/src/bin/pgbench/../../..' PG_REGRESS='/<<PKGBUILDDIR>>/build/src/bin/pgbench/../../../src/test/regress/pg_regress' /usr/bin/prove -I /<<PKGBUILDDIR>>/build/../src/test/perl/ -I /<<PKGBUILDDIR>>/build/../src/bin/pgbench --verbose t/*.pl\n00:33:12 # +++ tap check in src/bin/pgbench +++\n00:33:14 # Failed test 'concurrent OID generation status (got 2 vs expected 0)'\n00:33:14 # at t/001_pgbench_with_server.pl line 31.\n00:33:14 # Failed test 'concurrent OID generation stdout /(?^:processed: 125/125)/'\n00:33:14 # at t/001_pgbench_with_server.pl line 31.\n00:33:14 # 'pgbench (16devel (Debian 16~~devel-1.pgdg100+~20230423.1656.g8bbd0cc))\n00:33:14 # transaction type: /<<PKGBUILDDIR>>/build/src/bin/pgbench/tmp_check/t_001_pgbench_with_server_main_data/001_pgbench_concurrent_insert\n00:33:14 # scaling factor: 1\n00:33:14 # query mode: prepared\n00:33:14 # number of clients: 5\n00:33:14 # number of threads: 1\n00:33:14 # maximum number of tries: 1\n00:33:14 # number of transactions per client: 25\n00:33:14 # number of transactions actually processed: 118/125\n00:33:14 # number of failed transactions: 0 (0.000%)\n00:33:14 # latency average = 26.470 ms\n00:33:14 # initial connection time = 66.583 ms\n00:33:14 # tps = 188.889760 (without initial connection time)\n00:33:14 # '\n00:33:14 # doesn't match '(?^:processed: 125/125)'\n00:33:14 # Failed test 'concurrent OID generation stderr /(?^:^$)/'\n00:33:14 # at t/001_pgbench_with_server.pl line 31.\n00:33:14 # 'pgbench: error: client 2 script 0 aborted in command 0 query 0: ERROR: could not extend file \"base/5/3501\" with FileFallocate(): Interrupted system call\n00:33:14 # HINT: Check free disk space.\n00:33:14 # pgbench: error: Run was aborted; the above results are incomplete.\n00:33:14 # '\n00:33:14 # doesn't match '(?^:^$)'\n00:33:26 # Looks like you failed 3 tests of 428.\n00:33:26 t/001_pgbench_with_server.pl ..\n00:33:26 not ok 1 - concurrent OID generation status (got 2 vs expected 0)\n\nI don't think the disk is full since it's always hitting that same\nspot, on some of the builds:\n\nhttps://pgdgbuild.dus.dg-i.net/job/postgresql-16-binaries-snapshot/833/\n\nThis is overlayfs with tmpfs (upper)/ext4 (lower). 
Manually running\nthat test works though, and the FS seems to support posix_fallocate:\n\n#include <fcntl.h>\n#include <stdio.h>\n\nint main ()\n{\n int f;\n int err;\n\n if (!(f = open(\"moo\", O_CREAT | O_RDWR, 0666)))\n perror(\"open\");\n\n err = posix_fallocate(f, 0, 10);\n perror(\"posix_fallocate\");\n\n return 0;\n}\n\n$ ./a.out\nposix_fallocate: Success\n\nThe problem has been there for some weeks - I didn't report it earlier\nas I was on vacation, in the free time trying to bootstrap s390x\nsupport for apt.pg.o, and there was this other direct IO problem\nmaking all the builds fail for some time.\n\nChristoph\n\n\n",
"msg_date": "Mon, 24 Apr 2023 10:53:35 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 10:53:35AM +0200, Christoph Berg wrote:\n> Re: Andres Freund\n> > Add smgrzeroextend(), FileZero(), FileFallocate()\n> \n> Hi,\n> \n> I'm often seeing PG16 builds erroring out in the pgbench tests:\n> \n> 00:33:12 make[2]: Entering directory '/<<PKGBUILDDIR>>/build/src/bin/pgbench'\n> 00:33:12 echo \"# +++ tap check in src/bin/pgbench +++\" && rm -rf '/<<PKGBUILDDIR>>/build/src/bin/pgbench'/tmp_check && /bin/mkdir -p '/<<PKGBUILDDIR>>/build/src/bin/pgbench'/tmp_check && cd /<<PKGBUILDDIR>>/build/../src/bin/pgbench && TESTLOGDIR='/<<PKGBUILDDIR>>/build/src/bin/pgbench/tmp_check/log' TESTDATADIR='/<<PKGBUILDDIR>>/build/src/bin/pgbench/tmp_check' PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/postgresql/16/bin:/<<PKGBUILDDIR>>/build/src/bin/pgbench:$PATH\" LD_LIBRARY_PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/aarch64-linux-gnu\" PGPORT='65432' top_builddir='/<<PKGBUILDDIR>>/build/src/bin/pgbench/../../..' PG_REGRESS='/<<PKGBUILDDIR>>/build/src/bin/pgbench/../../../src/test/regress/pg_regress' /usr/bin/prove -I /<<PKGBUILDDIR>>/build/../src/test/perl/ -I /<<PKGBUILDDIR>>/build/../src/bin/pgbench --verbose t/*.pl\n> 00:33:12 # +++ tap check in src/bin/pgbench +++\n> 00:33:14 # Failed test 'concurrent OID generation status (got 2 vs expected 0)'\n> 00:33:14 # at t/001_pgbench_with_server.pl line 31.\n> 00:33:14 # Failed test 'concurrent OID generation stdout /(?^:processed: 125/125)/'\n> 00:33:14 # at t/001_pgbench_with_server.pl line 31.\n> 00:33:14 # 'pgbench (16devel (Debian 16~~devel-1.pgdg100+~20230423.1656.g8bbd0cc))\n> 00:33:14 # transaction type: /<<PKGBUILDDIR>>/build/src/bin/pgbench/tmp_check/t_001_pgbench_with_server_main_data/001_pgbench_concurrent_insert\n> 00:33:14 # scaling factor: 1\n> 00:33:14 # query mode: prepared\n> 00:33:14 # number of clients: 5\n> 00:33:14 # number of threads: 1\n> 00:33:14 # maximum number of tries: 1\n> 00:33:14 # number of transactions per client: 25\n> 00:33:14 # number of transactions actually processed: 118/125\n> 00:33:14 # number of failed transactions: 0 (0.000%)\n> 00:33:14 # latency average = 26.470 ms\n> 00:33:14 # initial connection time = 66.583 ms\n> 00:33:14 # tps = 188.889760 (without initial connection time)\n> 00:33:14 # '\n> 00:33:14 # doesn't match '(?^:processed: 125/125)'\n> 00:33:14 # Failed test 'concurrent OID generation stderr /(?^:^$)/'\n> 00:33:14 # at t/001_pgbench_with_server.pl line 31.\n> 00:33:14 # 'pgbench: error: client 2 script 0 aborted in command 0 query 0: ERROR: could not extend file \"base/5/3501\" with FileFallocate(): Interrupted system call\n> 00:33:14 # HINT: Check free disk space.\n> 00:33:14 # pgbench: error: Run was aborted; the above results are incomplete.\n> 00:33:14 # '\n> 00:33:14 # doesn't match '(?^:^$)'\n> 00:33:26 # Looks like you failed 3 tests of 428.\n> 00:33:26 t/001_pgbench_with_server.pl ..\n> 00:33:26 not ok 1 - concurrent OID generation status (got 2 vs expected 0)\n> \n> I don't think the disk is full since it's always hitting that same\n> spot, on some of the builds:\n> \n> https://pgdgbuild.dus.dg-i.net/job/postgresql-16-binaries-snapshot/833/\n> \n> This is overlayfs with tmpfs (upper)/ext4 (lower). 
Manually running\n> that test works though, and the FS seems to support posix_fallocate:\n> \n> #include <fcntl.h>\n> #include <stdio.h>\n> \n> int main ()\n> {\n> int f;\n> int err;\n> \n> if (!(f = open(\"moo\", O_CREAT | O_RDWR, 0666)))\n> perror(\"open\");\n> \n> err = posix_fallocate(f, 0, 10);\n> perror(\"posix_fallocate\");\n> \n> return 0;\n> }\n> \n> $ ./a.out\n> posix_fallocate: Success\n> \n> The problem has been there for some weeks - I didn't report it earlier\n> as I was on vacation, in the free time trying to bootstrap s390x\n> support for apt.pg.o, and there was this other direct IO problem\n> making all the builds fail for some time.\n\nI noticed that dsm_impl_posix_resize() does a do while rc==EINTR and\nFileFallocate() doesn't. From what the comment says in\ndsm_impl_posix_resize() and some cursory googling, posix_fallocate()\ndoesn't restart automatically on most systems, so a do while() rc==EINTR\nis often used. Is there a reason it isn't used in FileFallocate() I\nwonder?\n\n- Melanie\n\n\n",
"msg_date": "Mon, 24 Apr 2023 11:58:55 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-24 10:53:35 +0200, Christoph Berg wrote:\n> I'm often seeing PG16 builds erroring out in the pgbench tests:\n\nInteresting!\n\n\n> I don't think the disk is full since it's always hitting that same\n> spot, on some of the builds:\n\nYea, the EINTR pretty clearly indicates that it's not really out-of-space.\n\n\n> https://pgdgbuild.dus.dg-i.net/job/postgresql-16-binaries-snapshot/833/\n> \n> This is overlayfs with tmpfs (upper)/ext4 (lower). Manually running\n> that test works though, and the FS seems to support posix_fallocate:\n\nI guess it requires a bunch of memory (?) pressure for this to happen\n(triggering blocking during fallocate, opening the window for a signal to\narrive), which likely only happens when running things concurrently.\n\n\nWe obviously can add a retry loop to FileFallocate(), similar to what's\nalready present e.g. in FileRead(). But I wonder if we shouldn't go a bit\nfurther, and do it for all the fd.c routines where it's remotely plausible\nEINTR could be returned? It's a bit silly to add EINTR retries one-by-one to\nthe functions.\n\n\nThe following are documented to potentially return EINTR, without fd.c having\ncode to retry:\n\n- FileWriteback() / pg_flush_data()\n- FileSync() / pg_fsync()\n- FileFallocate()\n- FileTruncate()\n\nWith the first two there's the added complication that it's not entirely\nobvious whether it'd be better to handle this in File* or pg_*. I'd argue the\nlatter is a bit more sensible?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Apr 2023 15:32:25 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-24 15:32:25 -0700, Andres Freund wrote:\n> On 2023-04-24 10:53:35 +0200, Christoph Berg wrote:\n> > I'm often seeing PG16 builds erroring out in the pgbench tests:\n> > I don't think the disk is full since it's always hitting that same\n> > spot, on some of the builds:\n> \n> Yea, the EINTR pretty clearly indicates that it's not really out-of-space.\n\nFWIW, I tried to reproduce this, without success - not too surprising, I\nassume it's rather timing dependent.\n\n\n> We obviously can add a retry loop to FileFallocate(), similar to what's\n> already present e.g. in FileRead(). But I wonder if we shouldn't go a bit\n> further, and do it for all the fd.c routines where it's remotely plausible\n> EINTR could be returned? It's a bit silly to add EINTR retries one-by-one to\n> the functions.\n> \n> \n> The following are documented to potentially return EINTR, without fd.c having\n> code to retry:\n> \n> - FileWriteback() / pg_flush_data()\n> - FileSync() / pg_fsync()\n> - FileFallocate()\n> - FileTruncate()\n> \n> With the first two there's the added complication that it's not entirely\n> obvious whether it'd be better to handle this in File* or pg_*. I'd argue the\n> latter is a bit more sensible?\n\nA prototype of that approach is attached. I pushed the retry handling into the\npg_* routines where applicable. I guess we could add pg_* routines for\nFileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\n\nChristoph, could you verify this fixes your issue?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 24 Apr 2023 17:16:23 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Re: Andres Freund\n> A prototype of that approach is attached. I pushed the retry handling into the\n> pg_* routines where applicable. I guess we could add pg_* routines for\n> FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\n> \n> Christoph, could you verify this fixes your issue?\n\nEverything green with the patch applied. Thanks!\n\nhttps://pgdgbuild.dus.dg-i.net/job/postgresql-16-binaries-snapshot/839/\n\nChristoph\n\n\n",
"msg_date": "Tue, 25 Apr 2023 20:24:30 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "On Tue, Apr 25, 2023 at 12:16 PM Andres Freund <[email protected]> wrote:\n> On 2023-04-24 15:32:25 -0700, Andres Freund wrote:\n> > We obviously can add a retry loop to FileFallocate(), similar to what's\n> > already present e.g. in FileRead(). But I wonder if we shouldn't go a bit\n> > further, and do it for all the fd.c routines where it's remotely plausible\n> > EINTR could be returned? It's a bit silly to add EINTR retries one-by-one to\n> > the functions.\n> >\n> >\n> > The following are documented to potentially return EINTR, without fd.c having\n> > code to retry:\n> >\n> > - FileWriteback() / pg_flush_data()\n> > - FileSync() / pg_fsync()\n> > - FileFallocate()\n> > - FileTruncate()\n> >\n> > With the first two there's the added complication that it's not entirely\n> > obvious whether it'd be better to handle this in File* or pg_*. I'd argue the\n> > latter is a bit more sensible?\n>\n> A prototype of that approach is attached. I pushed the retry handling into the\n> pg_* routines where applicable. I guess we could add pg_* routines for\n> FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\n\nOne problem we ran into with the the shm_open() case (which is nearly\nidentical under the covers, since shm_open() just opens a file in a\ntmpfs on Linux) was that a simple retry loop like this could never\nterminate if the process was receiving a lot of signals from the\nrecovery process, which is why we went with the idea of masking\nsignals instead. Eventually we should probably grow the file in\nsmaller chunks with a CFI in between so that we both guarantee that we\nmake progress (by masking for smaller size increases) and service\ninterrupts in a timely fashion (by unmasking between loops). I don't\nthink that applies here because we're not trying to fallocate\nhumongous size increases in one go, but I just want to note that we're\nmaking a different choice. I think this looks reasonable for the use\ncases we actually have here.\n\n\n",
"msg_date": "Wed, 26 Apr 2023 11:37:55 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Re: Andres Freund\n> A prototype of that approach is attached. I pushed the retry handling into the\n> pg_* routines where applicable. I guess we could add pg_* routines for\n> FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\n> \n> Christoph, could you verify this fixes your issue?\n\nHi,\n\nI believe this issue is still open for PG16.\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 May 2023 16:25:59 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "On Tue, May 23, 2023 at 04:25:59PM +0200, Christoph Berg wrote:\n> I believe this issue is still open for PG16.\n\nRight. I've added an item to the list, to not forget.\n--\nMichael",
"msg_date": "Wed, 24 May 2023 10:42:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "At Wed, 26 Apr 2023 11:37:55 +1200, Thomas Munro <[email protected]> wrote in \r\n> On Tue, Apr 25, 2023 at 12:16 PM Andres Freund <[email protected]> wrote:\r\n> > On 2023-04-24 15:32:25 -0700, Andres Freund wrote:\r\n> > > We obviously can add a retry loop to FileFallocate(), similar to what's\r\n> > > already present e.g. in FileRead(). But I wonder if we shouldn't go a bit\r\n> > > further, and do it for all the fd.c routines where it's remotely plausible\r\n> > > EINTR could be returned? It's a bit silly to add EINTR retries one-by-one to\r\n> > > the functions.\r\n> > >\r\n> > >\r\n> > > The following are documented to potentially return EINTR, without fd.c having\r\n> > > code to retry:\r\n> > >\r\n> > > - FileWriteback() / pg_flush_data()\r\n> > > - FileSync() / pg_fsync()\r\n> > > - FileFallocate()\r\n> > > - FileTruncate()\r\n> > >\r\n> > > With the first two there's the added complication that it's not entirely\r\n> > > obvious whether it'd be better to handle this in File* or pg_*. I'd argue the\r\n> > > latter is a bit more sensible?\r\n> >\r\n> > A prototype of that approach is attached. I pushed the retry handling into the\r\n> > pg_* routines where applicable. I guess we could add pg_* routines for\r\n> > FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\r\n> \r\n> One problem we ran into with the the shm_open() case (which is nearly\r\n> identical under the covers, since shm_open() just opens a file in a\r\n> tmpfs on Linux) was that a simple retry loop like this could never\r\n> terminate if the process was receiving a lot of signals from the\r\n> recovery process, which is why we went with the idea of masking\r\n> signals instead. Eventually we should probably grow the file in\r\n> smaller chunks with a CFI in between so that we both guarantee that we\r\n> make progress (by masking for smaller size increases) and service\r\n> interrupts in a timely fashion (by unmasking between loops). I don't\r\n> think that applies here because we're not trying to fallocate\r\n> humongous size increases in one go, but I just want to note that we're\r\n> making a different choice. I think this looks reasonable for the use\r\n> cases we actually have here.\r\n\r\nFWIW I share the same feeling about looping by EINTR without signals\r\nbeing blocked. If we just retry the same operation without processing\r\nsignals after getting EINTR, I think blocking signals is better. We\r\ncould block signals more gracefully, but I'm not sure it's worth the\r\ncomplexity.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 24 May 2023 10:56:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-24 10:56:28 +0900, Kyotaro Horiguchi wrote:\n> At Wed, 26 Apr 2023 11:37:55 +1200, Thomas Munro <[email protected]> wrote in \n> > On Tue, Apr 25, 2023 at 12:16 PM Andres Freund <[email protected]> wrote:\n> > > On 2023-04-24 15:32:25 -0700, Andres Freund wrote:\n> > > > We obviously can add a retry loop to FileFallocate(), similar to what's\n> > > > already present e.g. in FileRead(). But I wonder if we shouldn't go a bit\n> > > > further, and do it for all the fd.c routines where it's remotely plausible\n> > > > EINTR could be returned? It's a bit silly to add EINTR retries one-by-one to\n> > > > the functions.\n> > > >\n> > > >\n> > > > The following are documented to potentially return EINTR, without fd.c having\n> > > > code to retry:\n> > > >\n> > > > - FileWriteback() / pg_flush_data()\n> > > > - FileSync() / pg_fsync()\n> > > > - FileFallocate()\n> > > > - FileTruncate()\n> > > >\n> > > > With the first two there's the added complication that it's not entirely\n> > > > obvious whether it'd be better to handle this in File* or pg_*. I'd argue the\n> > > > latter is a bit more sensible?\n> > >\n> > > A prototype of that approach is attached. I pushed the retry handling into the\n> > > pg_* routines where applicable. I guess we could add pg_* routines for\n> > > FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\n> > \n> > One problem we ran into with the the shm_open() case (which is nearly\n> > identical under the covers, since shm_open() just opens a file in a\n> > tmpfs on Linux) was that a simple retry loop like this could never\n> > terminate if the process was receiving a lot of signals from the\n> > recovery process, which is why we went with the idea of masking\n> > signals instead. Eventually we should probably grow the file in\n> > smaller chunks with a CFI in between so that we both guarantee that we\n> > make progress (by masking for smaller size increases) and service\n> > interrupts in a timely fashion (by unmasking between loops). I don't\n> > think that applies here because we're not trying to fallocate\n> > humongous size increases in one go, but I just want to note that we're\n> > making a different choice. I think this looks reasonable for the use\n> > cases we actually have here.\n> \n> FWIW I share the same feeling about looping by EINTR without signals\n> being blocked. If we just retry the same operation without processing\n> signals after getting EINTR, I think blocking signals is better. We\n> could block signals more gracefully, but I'm not sure it's worth the\n> complexity.\n\nI seriously doubt it's a good path to go down in this case. As Thomas\nmentioned, this case isn't really comparable to the shm_open() one, due to the\nbounded vs unbounded amount of memory we're dealing with.\n\nWhat would be the benefit?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 May 2023 19:28:45 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "At Tue, 23 May 2023 19:28:45 -0700, Andres Freund <[email protected]> wrote in \r\n> Hi,\r\n> \r\n> On 2023-05-24 10:56:28 +0900, Kyotaro Horiguchi wrote:\r\n> > At Wed, 26 Apr 2023 11:37:55 +1200, Thomas Munro <[email protected]> wrote in \r\n> > > On Tue, Apr 25, 2023 at 12:16 PM Andres Freund <[email protected]> wrote:\r\n> > > > On 2023-04-24 15:32:25 -0700, Andres Freund wrote:\r\n> > > > > We obviously can add a retry loop to FileFallocate(), similar to what's\r\n> > > > > already present e.g. in FileRead(). But I wonder if we shouldn't go a bit\r\n> > > > > further, and do it for all the fd.c routines where it's remotely plausible\r\n> > > > > EINTR could be returned? It's a bit silly to add EINTR retries one-by-one to\r\n> > > > > the functions.\r\n> > > > >\r\n> > > > >\r\n> > > > > The following are documented to potentially return EINTR, without fd.c having\r\n> > > > > code to retry:\r\n> > > > >\r\n> > > > > - FileWriteback() / pg_flush_data()\r\n> > > > > - FileSync() / pg_fsync()\r\n> > > > > - FileFallocate()\r\n> > > > > - FileTruncate()\r\n> > > > >\r\n> > > > > With the first two there's the added complication that it's not entirely\r\n> > > > > obvious whether it'd be better to handle this in File* or pg_*. I'd argue the\r\n> > > > > latter is a bit more sensible?\r\n> > > >\r\n> > > > A prototype of that approach is attached. I pushed the retry handling into the\r\n> > > > pg_* routines where applicable. I guess we could add pg_* routines for\r\n> > > > FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\r\n> > > \r\n> > > One problem we ran into with the the shm_open() case (which is nearly\r\n> > > identical under the covers, since shm_open() just opens a file in a\r\n> > > tmpfs on Linux) was that a simple retry loop like this could never\r\n> > > terminate if the process was receiving a lot of signals from the\r\n> > > recovery process, which is why we went with the idea of masking\r\n> > > signals instead. Eventually we should probably grow the file in\r\n> > > smaller chunks with a CFI in between so that we both guarantee that we\r\n> > > make progress (by masking for smaller size increases) and service\r\n> > > interrupts in a timely fashion (by unmasking between loops). I don't\r\n> > > think that applies here because we're not trying to fallocate\r\n> > > humongous size increases in one go, but I just want to note that we're\r\n> > > making a different choice. I think this looks reasonable for the use\r\n> > > cases we actually have here.\r\n> > \r\n> > FWIW I share the same feeling about looping by EINTR without signals\r\n> > being blocked. If we just retry the same operation without processing\r\n> > signals after getting EINTR, I think blocking signals is better. We\r\n> > could block signals more gracefully, but I'm not sure it's worth the\r\n> > complexity.\r\n> \r\n> I seriously doubt it's a good path to go down in this case. As Thomas\r\n> mentioned, this case isn't really comparable to the shm_open() one, due to the\r\n> bounded vs unbounded amount of memory we're dealing with.\r\n> \r\n> What would be the benefit?\r\n\r\nI'm not certain what you mean by \"it\" here. Regarding signal blocking,\r\nthe benefit would be a lower chance of getting constantly interrupted\r\nby a string of frequent interrupts, which can't be prevented just by\r\nlooping over. 
From what I gathered, Thomas meant that we don't need to\r\nuse chunking to prevent long periods of ignoring interrupts because\r\nwe're extending a file by a few blocks. However, I might have\r\nmisunderstood.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 24 May 2023 13:13:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "On 2023-Apr-24, Andres Freund wrote:\n\n> A prototype of that approach is attached. I pushed the retry handling into the\n> pg_* routines where applicable. I guess we could add pg_* routines for\n> FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\n> \n> Christoph, could you verify this fixes your issue?\n\nSo, is anyone making progress on this? I don't see anything in the\nthread.\n\nOn adding the missing pg_* wrappers: I think if we don't (and we leave\nthe retry loops at the File* layer), then the risk is that some external\ncode would add calls to the underlying File* routines trusting them to\ndo the retrying, which would then become broken when we move the retry\nloops to the pg_* wrappers when we add them. That doesn't seem terribly\nserious to me.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n",
"msg_date": "Tue, 6 Jun 2023 21:53:00 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Re: Alvaro Herrera\n> > Christoph, could you verify this fixes your issue?\n> \n> So, is anyone making progress on this? I don't see anything in the\n> thread.\n\nWell, I had reported that I haven't been seeing any problems since I\napplied the patch to the postgresql-16.deb package. So for me, the\npatch looks like it solves the problem.\n\nChristoph\n\n\n",
"msg_date": "Wed, 7 Jun 2023 15:18:42 +0200",
"msg_from": "Christoph Berg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-06 21:53:00 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-24, Andres Freund wrote:\n> \n> > A prototype of that approach is attached. I pushed the retry handling into the\n> > pg_* routines where applicable. I guess we could add pg_* routines for\n> > FileFallocate(), FilePrewarm() etc as well, but I didn't do that here.\n> > \n> > Christoph, could you verify this fixes your issue?\n> \n> So, is anyone making progress on this? I don't see anything in the\n> thread.\n\nThanks for bringing it up again, I had lost track. I now added an open items\nentry.\n\nMy gut feeling is that we should go with something quite minimal at this\nstage.\n\n\n> On adding the missing pg_* wrappers: I think if we don't (and we leave\n> the retry loops at the File* layer), then the risk is that some external\n> code would add calls to the underlying File* routines trusting them to\n> do the retrying, which would then become broken when we move the retry\n> loops to the pg_* wrappers when we add them. That doesn't seem terribly\n> serious to me.\n\nI'm not too worried about that either.\n\n\nUnless somebody strongly advocates a different path, I plan to push something\nalong the lines of the prototype I had posted. After reading over it a bunch\nmore times, some of the code is a bit finnicky.\n\n\nI wish we had some hack that made syscalls EINTR at a random intervals, just\nto make it realistic to test these paths...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Jun 2023 21:35:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Hi Tom,\n\nUnfortunately, due to some personal life business, it took until for me to\nfeel comfortable pushing the fix for\nhttps://www.postgresql.org/message-id/[email protected]\n(FileFallocate() erroring out with EINTR due to running on tmpfs).\n\nDo you want me to hold off before beta2 is wrapped? I did a bunch of CI runs\nwith the patch and patch + test infra, and they all passed.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 19 Jun 2023 10:27:06 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Unfortunately, due to some personal life business, it took until for me to\n> feel comfortable pushing the fix for\n> https://www.postgresql.org/message-id/[email protected]\n> (FileFallocate() erroring out with EINTR due to running on tmpfs).\n> Do you want me to hold off before beta2 is wrapped? I did a bunch of CI runs\n> with the patch and patch + test infra, and they all passed.\n\nWe still have a week till beta2 wrap, so I'd say push. If the buildfarm\ngets unhappy you can revert.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jun 2023 13:37:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Hi,\n\nOn June 19, 2023 10:37:45 AM PDT, Tom Lane <[email protected]> wrote:\n>Andres Freund <[email protected]> writes:\n>> Unfortunately, due to some personal life business, it took until for me to\n>> feel comfortable pushing the fix for\n>> https://www.postgresql.org/message-id/[email protected]\n>> (FileFallocate() erroring out with EINTR due to running on tmpfs).\n>> Do you want me to hold off before beta2 is wrapped? I did a bunch of CI runs\n>> with the patch and patch + test infra, and they all passed.\n>\n>We still have a week till beta2 wrap, so I'd say push. If the buildfarm\n>gets unhappy you can revert.\n\nHah. Somehow I confused myself into thinking you're wrapping later today. Calendar math vs Andres: 6753:3\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 19 Jun 2023 11:16:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?US-ASCII?Q?Re=3A_could_not_extend_file_=22base/5/3501=22_wi?=\n =?US-ASCII?Q?th_FileFallocate=28=29=3A_Interrupted_system_call?="
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 11:47 PM Andres Freund <[email protected]> wrote:\n>\n> On June 19, 2023 10:37:45 AM PDT, Tom Lane <[email protected]> wrote:\n> >Andres Freund <[email protected]> writes:\n> >> Unfortunately, due to some personal life business, it took until for me to\n> >> feel comfortable pushing the fix for\n> >> https://www.postgresql.org/message-id/[email protected]\n> >> (FileFallocate() erroring out with EINTR due to running on tmpfs).\n> >> Do you want me to hold off before beta2 is wrapped? I did a bunch of CI runs\n> >> with the patch and patch + test infra, and they all passed.\n> >\n> >We still have a week till beta2 wrap, so I'd say push. If the buildfarm\n> >gets unhappy you can revert.\n>\n> Hah. Somehow I confused myself into thinking you're wrapping later today. Calendar math vs Andres: 6753:3\n>\n\nCan we close the open item corresponding to this after your commit 0d369ac650?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 08:54:48 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-21 08:54:48 +0530, Amit Kapila wrote:\n> On Mon, Jun 19, 2023 at 11:47 PM Andres Freund <[email protected]> wrote:\n> >\n> > On June 19, 2023 10:37:45 AM PDT, Tom Lane <[email protected]> wrote:\n> > >Andres Freund <[email protected]> writes:\n> > >> Unfortunately, due to some personal life business, it took until for me to\n> > >> feel comfortable pushing the fix for\n> > >> https://www.postgresql.org/message-id/[email protected]\n> > >> (FileFallocate() erroring out with EINTR due to running on tmpfs).\n> > >> Do you want me to hold off before beta2 is wrapped? I did a bunch of CI runs\n> > >> with the patch and patch + test infra, and they all passed.\n> > >\n> > >We still have a week till beta2 wrap, so I'd say push. If the buildfarm\n> > >gets unhappy you can revert.\n> >\n> > Hah. Somehow I confused myself into thinking you're wrapping later today. Calendar math vs Andres: 6753:3\n> >\n> \n> Can we close the open item corresponding to this after your commit 0d369ac650?\n\nYes, sorry for forgetting that. Done now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Jun 2023 20:49:20 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: could not extend file \"base/5/3501\" with FileFallocate():\n Interrupted system call"
}
] |
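As a companion to the thread above, here is a minimal sketch of the EINTR retry pattern it discusses. The helper name is hypothetical and this is not the fd.c change that was actually committed (see commit 0d369ac650 referenced upthread); it only illustrates the 'do while rc == EINTR' idea, keeping in mind that posix_fallocate() reports failure by returning an error number directly rather than by setting errno:

#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>

/*
 * Hypothetical helper, illustrative only: keep calling posix_fallocate()
 * while it reports EINTR, as discussed in the thread above.  Returns 0 on
 * success or an errno-style error number on failure.
 */
static int
fallocate_retry_eintr(int fd, off_t offset, off_t len)
{
    int rc;

    do
    {
        rc = posix_fallocate(fd, offset, len);
    } while (rc == EINTR);

    return rc;
}

As Thomas notes upthread, a bare retry loop like this cannot make progress if signals arrive faster than the call can complete, which is why the shm_open() code masks signals instead; that concern was judged not to apply here because the file extensions done through FileFallocate() are bounded in size.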
[
{
"msg_contents": "Hi,\n\nI just saw the following failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2023-04-05%2017%3A47%3A03\nafter a commit of mine. The symptoms look unrelated though.\n\n[17:54:42.188](258.345s) # poll_query_until timed out executing this query:\n# SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n# expecting this output:\n# lost\n# last actual query output:\n# unreserved\n# with stderr:\ntimed out waiting for slot to be lost at /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/recovery/t/019_replslot_limit.pl line 400.\n\nWe're expecting \"lost\" but are getting \"unreserved\".\n\n\nAt first I though this was just a race - it's not guaranteed that a checkpoint\nto remove the WAL files occurs anytime soon.\n\nBut there might be something else going on - in this case a checkpoint\nstarted, but never finished:\n\n2023-04-05 17:50:23.786 UTC [345177] 019_replslot_limit.pl LOG: statement: SELECT pg_switch_wal();\n2023-04-05 17:50:23.787 UTC [342404] LOG: checkpoints are occurring too frequently (2 seconds apart)\n2023-04-05 17:50:23.787 UTC [342404] HINT: Consider increasing the configuration parameter \"max_wal_size\".\n2023-04-05 17:50:23.787 UTC [342404] LOG: checkpoint starting: wal\n2023-04-05 17:50:23.837 UTC [345264] 019_replslot_limit.pl LOG: statement: CREATE TABLE t ();\n2023-04-05 17:50:23.839 UTC [345264] 019_replslot_limit.pl LOG: statement: DROP TABLE t;\n2023-04-05 17:50:23.840 UTC [345264] 019_replslot_limit.pl LOG: statement: SELECT pg_switch_wal();\n2023-04-05 17:50:23.841 UTC [342404] LOG: terminating process 344783 to release replication slot \"rep3\"\n2023-04-05 17:50:23.841 UTC [342404] DETAIL: The slot's restart_lsn 0/7000D8 exceeds the limit by 1048360 bytes.\n2023-04-05 17:50:23.841 UTC [342404] HINT: You might need to increase max_slot_wal_keep_size.\n2023-04-05 17:50:23.862 UTC [344783] standby_3 FATAL: terminating connection due to administrator command\n2023-04-05 17:50:23.862 UTC [344783] standby_3 STATEMENT: START_REPLICATION SLOT \"rep3\" 0/700000 TIMELINE 1\n2023-04-05 17:50:23.893 UTC [345314] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n[many repetitions of the above, just differing in time and pid]\n2023-04-05 17:54:42.084 UTC [491062] 019_replslot_limit.pl LOG: statement: SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n2023-04-05 17:54:42.200 UTC [342365] LOG: received immediate shutdown request\n2023-04-05 17:54:42.229 UTC [342365] LOG: database system is shut down\n\nNote that a checkpoint started at \"17:50:23.787\", but didn't finish before the\ndatabase was shut down. As far as I can tell, this can not be caused by\ncheckpoint_timeout, because by the time we get to invalidating replication\nslots, we already did CheckPointBuffers(), and that's the only thing that\ndelays based on checkpoint_timeout.\n\nISTM that this indicates that checkpointer got stuck after signalling\n344783.\n\nDo you see any other explanation?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Apr 2023 11:48:53 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "failure in 019_replslot_limit"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-05 11:48:53 -0700, Andres Freund wrote:\n> Note that a checkpoint started at \"17:50:23.787\", but didn't finish before the\n> database was shut down. As far as I can tell, this can not be caused by\n> checkpoint_timeout, because by the time we get to invalidating replication\n> slots, we already did CheckPointBuffers(), and that's the only thing that\n> delays based on checkpoint_timeout.\n> \n> ISTM that this indicates that checkpointer got stuck after signalling\n> 344783.\n> \n> Do you see any other explanation?\n\nThis all sounded vaguely familiar. After a bit bit of digging I found this:\n\nhttps://postgr.es/m/20220223014855.4lsddr464i7mymk2%40alap3.anarazel.de\n\nWhich seems like it plausibly explains the failed test?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Apr 2023 11:55:14 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: failure in 019_replslot_limit"
},
{
"msg_contents": "At Wed, 5 Apr 2023 11:55:14 -0700, Andres Freund <[email protected]> wrote in \n> Hi,\n> \n> On 2023-04-05 11:48:53 -0700, Andres Freund wrote:\n> > Note that a checkpoint started at \"17:50:23.787\", but didn't finish before the\n> > database was shut down. As far as I can tell, this can not be caused by\n> > checkpoint_timeout, because by the time we get to invalidating replication\n> > slots, we already did CheckPointBuffers(), and that's the only thing that\n> > delays based on checkpoint_timeout.\n> > \n> > ISTM that this indicates that checkpointer got stuck after signalling\n> > 344783.\n> > \n> > Do you see any other explanation?\n> \n> This all sounded vaguely familiar. After a bit bit of digging I found this:\n> \n> https://postgr.es/m/20220223014855.4lsddr464i7mymk2%40alap3.anarazel.de\n> \n> Which seems like it plausibly explains the failed test?\n\nAs my understanding, ConditionVariableSleep() can experience random\nwake-ups and ReplicationSlotControlLock doesn't prevent slot\nrelease. So, I can imagine a situation where that blocking might\nhappen. If the call ConditionVariableSleep(&s->active_cv) wakes up\nunexpectedly due to a latch set for reasons other than the CV\nbroadcast, and the target process releases the slot between fetching\nactive_pid in the loop and the following call to\nConditionVariablePrepareToSleep(), the CV broadcast triggered by the\nslot release might be missed. If that's the case, we'll need to check\nactive_pid again after the calling ConditionVariablePrepareToSleep().\n\nDoes this make sense?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 06 Apr 2023 12:09:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: failure in 019_replslot_limit"
},
{
"msg_contents": "Hello Andres,\n\n05.04.2023 21:48, Andres Freund wrote:\n> I just saw the following failure:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2023-04-05%2017%3A47%3A03\n> after a commit of mine. The symptoms look unrelated though.\n>\n> [17:54:42.188](258.345s) # poll_query_until timed out executing this query:\n> # SELECT wal_status FROM pg_replication_slots WHERE slot_name = 'rep3'\n> # expecting this output:\n> # lost\n> # last actual query output:\n> # unreserved\n> # with stderr:\n> timed out waiting for slot to be lost at /home/bf/bf-build/mylodon/HEAD/pgsql/src/test/recovery/t/019_replslot_limit.pl line 400.\n>\n> We're expecting \"lost\" but are getting \"unreserved\".\n>\n> ...\n>\n> ISTM that this indicates that checkpointer got stuck after signalling\n> 344783.\n>\n> Do you see any other explanation?\n>\n\nI've managed to reproduce this issue (which still persists:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-02-04%2001%3A53%3A44\n) and saw that it's not checkpointer, but walsender is hanging:\n[12:15:03.753](34.771s) ok 17 - have walsender pid 317885\n[12:15:03.875](0.122s) ok 18 - have walreceiver pid 317884\n[12:15:04.808](0.933s) ok 19 - walsender termination logged\n...\n\nLast essential messages in _primary3.log are:\n2024-02-09 12:15:04.823 UTC [318036][not initialized][:0] LOG: connection received: host=[local]\n2024-02-09 12:15:04.823 UTC [317885][walsender][3/0:0] FATAL: terminating connection due to administrator command\n2024-02-09 12:15:04.823 UTC [317885][walsender][3/0:0] STATEMENT: START_REPLICATION SLOT \"rep3\" 0/700000 TIMELINE 1\n(then the test just queries the slot state, there are no other messages\nrelated to walsender)\n\nAnd I see the walsender process still running (I've increased the timeout\nto keep the test running and to connect to the process in question), with\nthe following stack trace:\n#0 0x00007fe4feac3d16 in epoll_wait (epfd=5, events=0x55b279b70f38, maxevents=1, timeout=timeout@entry=-1) at \n../sysdeps/unix/sysv/linux/epoll_wait.c:30\n#1 0x000055b278b9ab32 in WaitEventSetWaitBlock (set=set@entry=0x55b279b70eb8, cur_timeout=cur_timeout@entry=-1, \noccurred_events=occurred_events@entry=0x7ffda5ffac90, nevents=nevents@entry=1) at latch.c:1571\n#2 0x000055b278b9b6b6 in WaitEventSetWait (set=0x55b279b70eb8, timeout=timeout@entry=-1, \noccurred_events=occurred_events@entry=0x7ffda5ffac90, nevents=nevents@entry=1, \nwait_event_info=wait_event_info@entry=100663297) at latch.c:1517\n#3 0x000055b278a3f11f in secure_write (port=0x55b279b65aa0, ptr=ptr@entry=0x55b279bfbd08, len=len@entry=21470) at \nbe-secure.c:296\n#4 0x000055b278a460dc in internal_flush () at pqcomm.c:1356\n#5 0x000055b278a461d4 in internal_putbytes (s=s@entry=0x7ffda5ffad3c \"E\\177\", len=len@entry=1) at pqcomm.c:1302\n#6 0x000055b278a46299 in socket_putmessage (msgtype=<optimized out>, s=0x55b279b363c0 \"SFATAL\", len=112) at pqcomm.c:1483\n#7 0x000055b278a48670 in pq_endmessage (buf=buf@entry=0x7ffda5ffada0) at pqformat.c:302\n#8 0x000055b278d0c82a in send_message_to_frontend (edata=edata@entry=0x55b27908e500 <errordata>) at elog.c:3590\n#9 0x000055b278d0cfe2 in EmitErrorReport () at elog.c:1716\n#10 0x000055b278d0d17d in errfinish (filename=filename@entry=0x55b278eaa480 \"postgres.c\", lineno=lineno@entry=3295, \nfuncname=funcname@entry=0x55b278eaaef0 <__func__.16> \"ProcessInterrupts\") at elog.c:551\n#11 0x000055b278bc41c9 in ProcessInterrupts () at postgres.c:3295\n#12 0x000055b278b6c9af in WalSndLoop 
(send_data=send_data@entry=0x55b278b6c346 <XLogSendPhysical>) at walsender.c:2680\n#13 0x000055b278b6cef1 in StartReplication (cmd=cmd@entry=0x55b279b733e0) at walsender.c:987\n#14 0x000055b278b6d865 in exec_replication_command (cmd_string=cmd_string@entry=0x55b279b39c60 \"START_REPLICATION SLOT \n\\\"rep3\\\" 0/700000 TIMELINE 1\") at walsender.c:2039\n#15 0x000055b278bc7d71 in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4649\n#16 0x000055b278b2329d in BackendRun (port=port@entry=0x55b279b65aa0) at postmaster.c:4464\n#17 0x000055b278b263ae in BackendStartup (port=port@entry=0x55b279b65aa0) at postmaster.c:4140\n#18 0x000055b278b26539 in ServerLoop () at postmaster.c:1776\n#19 0x000055b278b27ac5 in PostmasterMain (argc=argc@entry=4, argv=argv@entry=0x55b279b34180) at postmaster.c:1475\n#20 0x000055b278a49ab0 in main (argc=4, argv=0x55b279b34180) at main.c:198\n\n(gdb) frame 9\n(gdb) print edata->message\n$3 = 0x55b279b367d0 \"terminating connection due to administrator command\"\n\nSo it looks like walsender tries to send the message to walreceiver, which\nis paused, and thus walsender gets stuck on it.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 9 Feb 2024 18:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: failure in 019_replslot_limit"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-09 18:00:01 +0300, Alexander Lakhin wrote:\n> I've managed to reproduce this issue (which still persists:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-02-04%2001%3A53%3A44\n> ) and saw that it's not checkpointer, but walsender is hanging:\n\nHow did you reproduce this?\n\n\n\n> And I see the walsender process still running (I've increased the timeout\n> to keep the test running and to connect to the process in question), with\n> the following stack trace:\n> #0� 0x00007fe4feac3d16 in epoll_wait (epfd=5, events=0x55b279b70f38,\n> maxevents=1, timeout=timeout@entry=-1) at\n> ../sysdeps/unix/sysv/linux/epoll_wait.c:30\n> #1� 0x000055b278b9ab32 in WaitEventSetWaitBlock\n> (set=set@entry=0x55b279b70eb8, cur_timeout=cur_timeout@entry=-1,\n> occurred_events=occurred_events@entry=0x7ffda5ffac90,\n> nevents=nevents@entry=1) at latch.c:1571\n> #2� 0x000055b278b9b6b6 in WaitEventSetWait (set=0x55b279b70eb8,\n> timeout=timeout@entry=-1,\n> occurred_events=occurred_events@entry=0x7ffda5ffac90,\n> nevents=nevents@entry=1, wait_event_info=wait_event_info@entry=100663297) at\n> latch.c:1517\n> #3� 0x000055b278a3f11f in secure_write (port=0x55b279b65aa0,\n> ptr=ptr@entry=0x55b279bfbd08, len=len@entry=21470) at be-secure.c:296\n> #4� 0x000055b278a460dc in internal_flush () at pqcomm.c:1356\n> #5� 0x000055b278a461d4 in internal_putbytes (s=s@entry=0x7ffda5ffad3c \"E\\177\", len=len@entry=1) at pqcomm.c:1302\n\nSo it's the issue that we wait effectively forever to to send a FATAL. I've\npreviously proposed that we should not block sending out fatal errors, given\nthat allows clients to do prevent graceful restarts and a lot of other things.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Feb 2024 10:59:15 -0800",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: failure in 019_replslot_limit"
},
{
"msg_contents": "09.02.2024 21:59, Andres Freund wrote:\n>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-02-04%2001%3A53%3A44\n>> ) and saw that it's not checkpointer, but walsender is hanging:\n> How did you reproduce this?\n\nAs kestrel didn't produce this failure until recently, I supposed that the\ncause is the same as with subscription/031_column_list — longer test\nduration, so I ran this test in parallel (with 20-30 jobs) in a slowed\ndown VM, so that one successful test duration increased to 100-120 seconds.\nAnd I was lucky enough to catch it within 100 iterations. But now, that we\nknow what's happening there, I think I could reproduce it much easily,\nwith some sleep(s) added, if it would be of any interest.\n\n> So it's the issue that we wait effectively forever to to send a FATAL. I've\n> previously proposed that we should not block sending out fatal errors, given\n> that allows clients to do prevent graceful restarts and a lot of other things.\n>\n\nYes, I had demonstrated one of those unpleasant things previously too:\nhttps://www.postgresql.org/message-id/91c8860a-a866-71a7-a060-3f07af531295%40gmail.com\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sat, 10 Feb 2024 06:00:01 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: failure in 019_replslot_limit"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently the standby reaches ResolveRecoveryConflictWithVirtualXIDs()\nwith a waitlist of VXIDs that are preventing it from making progress.\nFor each one it enters a sleep/poll loop that repeatedly tries to\nacquire (conditionally) the VXID lock with VirtualXactLock(vxid,\nwait=false), as a reliable wait to know that that VXID is gone. If it\ncan't get it, it sleeps initially for 1ms, doubling it each time\nthrough the loop with a cap at 1s, until GetStandbyLimitTime() is\nexceeded. Then it switches to a machine-gun signal mode where it\ncalls CancelVirtualTransaction() repeatedly with a 5ms sleep, until\neventually it succeeds in acquiring the VXID lock or the universe\nends.\n\nA boring observation: the explanation in comments for the 1s cap is\nnow extra wrong because pg_usleep() *is* interruptible by signals due\nto a recent change, but it was already wrong to assume that that was a\nreliable property and has been forever. We should probably change\nthat pg_usleep() to a WaitLatch() so we can process incidental\ninterrupts faster, perhaps with something like the attached. It still\ndoesn't help with the condition that the loop is actually waiting for,\nthough.\n\nI don't really like WaitExceedsMaxStandbyDelay() at all, and would\nprobably rather refactor this a bit more, though. It has a misleading\nname, an egregious global variable, and generally feels like it wants\nto be moved back inside the loop that calls it.\n\nContemplating that made me brave enough to start wondering what this\ncode *really* wants, with a view to improving it for v17. What if,\ninstead of VirtualXactLock(vxid, wait) we could do\nVirtualXactLock(vxid, timeout_ms)? Then the pseudo-code for each VXID\nmight be as simple as:\n\nif (!VirtualXactLock(vxid, polite_wait_time))\n{\n CancelVirtualTransaction(vxid);\n VirtualXactLock(vxid, -1); // wait forever\n}\n\n... with some extra details because after some delay we want to log a\nmessage. You could ignore the current code that sets PS display with\n\" ... waiting\" after a short time, because that's emulating the lock\nmanager's PS display now the lock manager would do that.\n\nWhich leads to the question: why does the current code believe that it\nis necessary to cancel the VXID more than once? Dare I ask... could\nit be because the code on the receiving side of the proc signal is/was\nso buggy?\n\nInitially I was suspicious that there may be tricky races to deal with\naround that wakeup logic, and the poll/sleep loop was due to an\ninability to come up with something reliable.\n\nMaybe someone's going to balk at the notion of pushing a timeout down\nthrough the lock manager. I'm not sure how far into the code that'd\nneed to go, because haven't tried and I don't know off the top of my\nhead whether that'd best be done internally with timers or through\ntimeout arguments (probably better but requires more threading through\nmore layers), or if there is some technical reason to object to both\nof these.",
"msg_date": "Thu, 6 Apr 2023 07:46:36 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "How should we wait for recovery conflict resolution?"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 7:46 AM Thomas Munro <[email protected]> wrote:\n> Initially I was suspicious that there may be tricky races to deal with\n> around that wakeup logic, and the poll/sleep loop was due to an\n> inability to come up with something reliable.\n\n(Oops lost a sentence) ... but then I realised that we're just using\nthe lock manager here (as opposed to a special purpose rarely used\nsignaling system), which had better work.\n\n\n",
"msg_date": "Thu, 6 Apr 2023 07:49:34 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How should we wait for recovery conflict resolution?"
}
] |
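To make the behaviour described in the thread above more concrete, here is a condensed sketch of the poll-with-backoff wait on a single VXID: conditionally acquiring the VXID lock, sleeping 1ms and doubling up to the 1s cap until the standby's limit time passes, at which point the caller would switch to cancelling the conflicting transaction. This is illustrative only, not the actual standby.c code; VirtualXactLock(), pg_usleep(), GetCurrentTimestamp() and GetStandbyLimitTime() are real routines, but the loop shape and the assumption that GetStandbyLimitTime() returns 0 for 'no limit' belong to this sketch, not to the source:

/*
 * Illustrative sketch only -- not the real ResolveRecoveryConflictWithVirtualXIDs()
 * code.  Poll for the VXID to go away, backing off from 1ms to a 1s cap,
 * until the limit time (if any) is reached; after that the caller cancels
 * the conflicting virtual transaction.
 */
long        wait_us = 1000;                 /* start at 1ms */

while (!VirtualXactLock(vxid, false))       /* conditional acquire */
{
    TimestampTz limit_time = GetStandbyLimitTime();

    if (limit_time != 0 && GetCurrentTimestamp() >= limit_time)
        break;                              /* time to cancel the VXID */

    pg_usleep(wait_us);
    wait_us = Min(wait_us * 2, 1000000L);   /* double, capped at 1s */
}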
[
{
"msg_contents": "psql: add an optional execution-count limit to \\watch.\n\n\\watch can now be told to stop after N executions of the query.\n\nWith the idea that we might want to add more options to \\watch\nin future, this patch generalizes the command's syntax to a list\nof name=value options, with the interval allowed to omit the name\nfor backwards compatibility.\n\nAndrey Borodin, reviewed by Kyotaro Horiguchi, Nathan Bossart,\nMichael Paquier, Yugo Nagata, and myself\n\nDiscussion: https://postgr.es/m/CAAhFRxiZ2-n_L1ErMm9AZjgmUK=qS6VHb+0SaMn8sqqbhF7How@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/00beecfe839c878abb366b68272426ed5296bc2b\n\nModified Files\n--------------\ndoc/src/sgml/ref/psql-ref.sgml | 10 +++-\nsrc/bin/psql/command.c | 118 +++++++++++++++++++++++++++++++------\nsrc/bin/psql/help.c | 2 +-\nsrc/bin/psql/t/001_basic.pl | 33 ++++++++---\nsrc/test/regress/expected/psql.out | 2 +-\nsrc/test/regress/sql/psql.sql | 2 +-\n6 files changed, 135 insertions(+), 32 deletions(-)",
"msg_date": "Thu, 06 Apr 2023 17:18:30 +0000",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Hi!\n\nOn Thu, Apr 6, 2023 at 8:18 PM Tom Lane <[email protected]> wrote:\n> psql: add an optional execution-count limit to \\watch.\n>\n> \\watch can now be told to stop after N executions of the query.\n\nThis commit makes tests fail for me. psql parses 'i' option of\n'\\watch' using locale-aware strtod(), but 001_basic.pl uses hard-coded\ndecimal separator. The proposed fix is attached.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 7 Apr 2023 15:04:00 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Alexander Korotkov <[email protected]> writes:\n> On Thu, Apr 6, 2023 at 8:18 PM Tom Lane <[email protected]> wrote:\n>> psql: add an optional execution-count limit to \\watch.\n\n> This commit makes tests fail for me. psql parses 'i' option of\n> '\\watch' using locale-aware strtod(), but 001_basic.pl uses hard-coded\n> decimal separator.\n\nHuh, yeah, I see it too if I set LANG=ru_RU.utf8 before running psql's\nTAP tests. It seems unfortunate that none of the buildfarm has noticed\nthis. I guess all the TAP tests are run under C locale?\n\n> The proposed fix is attached.\n\nLGTM, will push in a bit (unless you want to?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 10:00:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 5:00 PM Tom Lane <[email protected]> wrote:\n> Alexander Korotkov <[email protected]> writes:\n> > On Thu, Apr 6, 2023 at 8:18 PM Tom Lane <[email protected]> wrote:\n> >> psql: add an optional execution-count limit to \\watch.\n>\n> > This commit makes tests fail for me. psql parses 'i' option of\n> > '\\watch' using locale-aware strtod(), but 001_basic.pl uses hard-coded\n> > decimal separator.\n>\n> Huh, yeah, I see it too if I set LANG=ru_RU.utf8 before running psql's\n> TAP tests. It seems unfortunate that none of the buildfarm has noticed\n> this. I guess all the TAP tests are run under C locale?\n\nI wonder if we can setup as least some buildfarm members to exercise\nTAP tests on non-C locales.\n\n> > The proposed fix is attached.\n>\n> LGTM, will push in a bit (unless you want to?)\n\nPlease push.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 7 Apr 2023 17:06:34 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Hi,\n\n> I wonder if we can setup as least some buildfarm members to exercise\n> TAP tests on non-C locales.\n>\n> > > The proposed fix is attached.\n> >\n> > LGTM, will push in a bit (unless you want to?)\n>\n> Please push.\n\nThe test still fails under the following conditions:\n\n```\n$ env | grep UTF-8\nLC_ADDRESS=ru_RU.UTF-8\nLC_NAME=ru_RU.UTF-8\nLC_MONETARY=ru_RU.UTF-8\nLC_PAPER=ru_RU.UTF-8\nLANG=en_US.UTF-8\nLC_IDENTIFICATION=ru_RU.UTF-8\nLC_TELEPHONE=ru_RU.UTF-8\nLC_MEASUREMENT=ru_RU.UTF-8\nLC_CTYPE=en_US.UTF-8\nLC_TIME=ru_RU.UTF-8\nLC_ALL=en_US.UTF-8\nLC_NUMERIC=ru_RU.UTF-8\n```\n\nThis is up-to-dated Ubuntu 22.04 with pretty much default settings\nexcept for the timezone changed to MSK and enabled Russian keyboard\nlayout.\n\nHere is a proposed fix. I realize this is a somewhat suboptimal\nsolution, but it makes the test pass regardless of the locale\nsettings.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 10 Apr 2023 15:48:57 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> The test still fails under the following conditions:\n\n> $ env | grep UTF-8\n> LANG=en_US.UTF-8\n> LC_ALL=en_US.UTF-8\n> LC_NUMERIC=ru_RU.UTF-8\n\nHmm, so psql is honoring the LC_NUMERIC setting in that environment,\nbut perl isn't. For me, it appears that adding 'use locale;' to\nthe test script will fix it ... can you confirm if it's OK for you?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Apr 2023 09:54:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Hi Tom,\n\n> Aleksander Alekseev <[email protected]> writes:\n> > The test still fails under the following conditions:\n>\n> > $ env | grep UTF-8\n> > LANG=en_US.UTF-8\n> > LC_ALL=en_US.UTF-8\n> > LC_NUMERIC=ru_RU.UTF-8\n>\n> Hmm, so psql is honoring the LC_NUMERIC setting in that environment,\n> but perl isn't. For me, it appears that adding 'use locale;' to\n> the test script will fix it ... can you confirm if it's OK for you?\n\nRight, src/bin/psql/t/001_basic.pl has \"use locale;\" since cd82e5c7\nand it fails nevertheless.\n\nIf I set LC_NUMERIC manually:\n\n```\nLC_NUMERIC=en_US.UTF-8 meson test -C build --suite postgresql:psql\n```\n\n... the test passes. I can confirm that Perl doesn't seem to be\nhonoring LC_NUMERIC:\n\n```\n$ LC_ALL=en_US.UTF-8 LC_NUMERIC=en_US.UTF-8 perl -e 'use locale;\nprintf(\"%g\\n\", 0.01)'\n0.01\n$ LC_ALL=en_US.UTF-8 LC_NUMERIC=ru_RU.UTF-8 perl -e 'use locale;\nprintf(\"%g\\n\", 0.01)'\n0.01\n$ LC_ALL=ru_RU.UTF-8 LC_NUMERIC=en_US.UTF-8 perl -e 'use locale;\nprintf(\"%g\\n\", 0.01)'\n0,01\n$ LC_ALL=ru_RU.UTF-8 LC_NUMERIC=ru_RU.UTF-8 perl -e 'use locale;\nprintf(\"%g\\n\", 0.01)'\n0,01\n```\n\nThe Perl version is 5.34.0.\n\nIt is consistent with `perdoc perllocale`:\n\n```\n The initial program is started up using the locale specified from the\n environment, as currently, described in \"ENVIRONMENT\". [...]\n\nENVIRONMENT\n[...]\n \"LC_ALL\" \"LC_ALL\" is the \"override-all\" locale environment variable.\n If set, it overrides all the rest of the locale environment\n variables.\n```\n\nSo it looks like what happens is LC_ALL overwrites LC_NUMERIC for perl\nbut not for psql.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 10 Apr 2023 17:30:00 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> Hmm, so psql is honoring the LC_NUMERIC setting in that environment,\n>> but perl isn't. For me, it appears that adding 'use locale;' to\n>> the test script will fix it ... can you confirm if it's OK for you?\n\n> Right, src/bin/psql/t/001_basic.pl has \"use locale;\" since cd82e5c7\n> and it fails nevertheless.\n> ...\n> So it looks like what happens is LC_ALL overwrites LC_NUMERIC for perl\n> but not for psql.\n\nOh, right, there already is one :-(. After some more research,\nI believe I see the problem: Utils.pm does\n\nBEGIN\n{\n\t# Set to untranslated messages, to be able to compare program output\n\t# with expected strings.\n\tdelete $ENV{LANGUAGE};\n\tdelete $ENV{LC_ALL};\n\t$ENV{LC_MESSAGES} = 'C';\n\nNormally, with your settings, LC_ALL=en_US.UTF-8 would dominate\neverything. After removing that from the environment, the child\npsql process will honor LC_NUMERIC=ru_RU.UTF-8 and expect \\watch's\nargument to be \"0,01\". However, I bet that perl has already made\nits decisions about what its internal locale is, so it still thinks\nit should print \"0.01\".\n\nI am betting that we need to make Utils.pm do\n\n\tsetlocale(LC_ALL, \"\");\n\nafter the above-quoted bit, else it isn't doing what it is supposed to\nif the calling script has already done \"use locale;\", as indeed\npsql/t/001_basic.pl (and a small number of other places) do.\n\nThe attached makes check-world pass for me under these conflicting\nenvironment settings, but it's kind of a scary change. Thoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 10 Apr 2023 11:09:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Hi,\n\n> The attached makes check-world pass for me under these conflicting\n> environment settings, but it's kind of a scary change. Thoughts?\n\nFWIW my MacOS and Linux laptops have no complaints about the patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 10 Apr 2023 19:44:54 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> The attached makes check-world pass for me under these conflicting\n>> environment settings, but it's kind of a scary change. Thoughts?\n\n> FWIW my MacOS and Linux laptops have no complaints about the patch.\n\nI realized that we don't actually need to \"use locale\" in Utils.pm\nitself for this to work, which greatly assuages my fears of unexpected\nside-effects. Pushed that way; I shall now retire to a safe distance\nand watch the buildfarm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 13:34:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Hello, hackers.\n\nOn 18/04/2023 20:34, Tom Lane wrote (on pgsql-committers):\n > I shall now retire to a safe distance and watch the buildfarm.\n\nUnfortunately, on fresh perl (5.38.2 verified) and on ru_RU.UTF-8 \nlocale, it breaks basic float comparison: 0 < 0.5 is no longer true.\n\nThis is the reproduction on REL_16_STABLE (but it affects master\nas well), using fresh Ubuntu 24.04 container.\n\n0. I've used lxc to get a fresh container:\n$ lxc launch ubuntu-daily:noble u2404\nBut I don't think lxc or containerization in general matters in this \ncase. Also, I think any environment with fresh enough Perl would work, \nUbuntu 24.04 is just an easy example.\n\n(obviously, install necessary dev packages)\n\n1. Generate ru_RU.UTF-8 locale:\na. In /etc/locale.gen, uncomment the line:\n# ru_RU.UTF-8 UTF-8\n\nb. Run locale-gen as root. For me, it says:\n$ sudo locale-gen\nGenerating locales (this might take a while)...\n en_US.UTF-8... done\n ru_RU.UTF-8... done\nGeneration complete.\n\n2. Apply 0001-demo-of-weird-Perl-setlocale-effect-on-float-numbers.patch\n(adding src/test/authentication/t/999_broken.pl)\n\n3. Run the test\nLANG=ru_RU.UTF-8 make check -C src/test/authentication \nPROVE_TESTS=t/999_broken.pl PROVE_FLAGS=--verbose\n\nThe test is, basically:\nuse PostgreSQL::Test::Utils;\nuse Test::More tests => 1;\nok(0 < 0.5, \"0 < 0.5\");\n\nIf I comment-out the \"use PostgreSQL::Test::Utils\" line, the test works. \nOtherwise it fails to notice that 0 is less than 0.5.\n\nAlternatively, the test fails if I replace that \"use\" line with\nBEGIN {\n\tuse POSIX qw(locale_h);\n\tsetlocale(LC_NUMERIC, \"\");\n}\n\n\"BEGIN\" part is essential: mere use/setlocale is fine.\n\nAlso, adding\nuse locale;\nor even\nuse locale ':numeric';\nfixes the test, but I doubt whether it's a good idea to add that to \nUtils.pm.\n\nObviously, one of the reasons is that according to ru_RU.UTF-8 locale \nfor LC_NUMERIC, fractional part separator is \",\", not \".\". So one could, \ntechnically, parse \"0.5\" as \"0\" and then unparsed \".5\" tail. I think it \nmight even be a Perl bug, because, according to my quick browsing of man \nperlfunc (setlocale) and man perllocale, this should not affect the code \noutside \"use locale\", not in such a fundamental way. After all, we're \ntalking not about strtod etc, but about floating-point numbers in the \nsource code.\n\nP.S. $ perl --version\n\nThis is perl 5, version 38, subversion 2 (v5.38.2) built for \nx86_64-linux-gnu-thread-multi\n(with 44 registered patches, see perl -V for more detail)\n\nP.P.S. I'm replying to pgsql-hackers, even though part of previous \ndiscussion have been on pgsql-committers. Hopefully, it's OK.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru",
"msg_date": "Thu, 25 Apr 2024 21:22:49 +0300",
"msg_from": "Anton Voloshin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Anton Voloshin <[email protected]> writes:\n> On 18/04/2023 20:34, Tom Lane wrote (on pgsql-committers):\n>>> I shall now retire to a safe distance and watch the buildfarm.\n\n> Unfortunately, on fresh perl (5.38.2 verified) and on ru_RU.UTF-8 \n> locale, it breaks basic float comparison: 0 < 0.5 is no longer true.\n\nHaven't we worked around that everywhere it matters, in commits such\nas 8421f6bce and 605062227? For me, check-world passes under\nLANG=ru_RU, even with perl 5.38.2 (where I do confirm that your\ntest script fails). The buildfarm isn't unhappy either.\n\n> Obviously, one of the reasons is that according to ru_RU.UTF-8 locale \n> for LC_NUMERIC, fractional part separator is \",\", not \".\". So one could, \n> technically, parse \"0.5\" as \"0\" and then unparsed \".5\" tail. I think it \n> might even be a Perl bug, because, according to my quick browsing of man \n> perlfunc (setlocale) and man perllocale, this should not affect the code \n> outside \"use locale\", not in such a fundamental way. After all, we're \n> talking not about strtod etc, but about floating-point numbers in the \n> source code.\n\nI agree that it's a Perl bug, mainly because your test case doesn't\nfail in Perls as recent as v5.32.1 (released about 3 years ago).\nIt's impossible to believe that they intentionally broke basic\nPerl constant syntax now, after so many years. Particularly in\nthis way --- what are we supposed to do, write \"if (0 < 0,5)\"?\nThat means something else.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Apr 2024 22:20:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "On 26/04/2024 05:20, Tom Lane wrote:\n> Haven't we worked around that everywhere it matters, in commits such\n> as 8421f6bce and 605062227?\n\nYes, needing 8421f6bce and 605062227 was, perhaps, surprising, but \nreasonable. Unlike breaking floating point constants in the source code. \nBut, I guess, you're right and, since it does look like a Perl bug, \nwe'll have to work around that in all places where we use floating-point \nconstants in Perl code, which are surprisingly few.\n\n > For me, check-world passes under\n > LANG=ru_RU, even with perl 5.38.2 (where I do confirm that your\n > test script fails). The buildfarm isn't unhappy either.\n\nIndeed, check-world seems to run fine on my machine and on the bf as well.\n\nGrepping and browsing through, I've only found three spots with \\d\\.\\d \ndirectly in Perl code as a float, only one of them needs correction.\n\n1. src/test/perl/PostgreSQL/Test/Kerberos.pm in master\nsrc/test/kerberos/t/001_auth.pl in REL_16_STABLE\n > if ($krb5_version >= 1.15)\n\nI guess adding use locale ':numeric' would be easiest workaround here.\nAlternatively, we could also split version into krb5_major_version and \nkrb5_minor_version while parsing krb5-config --version's output above, \nbut I don't think that's warranted. So I suggest something along the \nlines of 0001-use-numeric-locale-in-kerberos-test-rel16.patch and \n*-master.patch (attached, REL_16 and master need this change in \ndifferent places).\n\nI did verify by providing fake 'krb5-config' that before the fix, with \nLANG=ru_RU.UTF-8 and Perl 5.38.2 and with, say, krb5 \"version\" 1.13 it \nwould still add the \"listen\" lines to kdc.conf by mistake (presumably, \nconfusing some versions of kerberos).\n\n2 and 3. contrib/intarray/bench/create_test.pl\n > if (rand() < 0.7)\nand\n > if ($#sect < 0 || rand() < 0.1)\n\nPostgreSQL::Test::Utils is not used there, so it's OK, no change needed.\n\nI did not find any other float constants in .pl/.pm files in master (I \ncould have missed something).\n\n > Particularly in\n > this way --- what are we supposed to do, write \"if (0 < 0,5)\"?\n > That means something else.\n\nYep. I will try to report this to Perl community later.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru",
"msg_date": "Fri, 26 Apr 2024 17:38:53 +0300",
"msg_from": "Anton Voloshin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "On 26/04/2024 17:38, Anton Voloshin wrote:\n> I will try to report this to Perl community later.\n\nReported under https://github.com/Perl/perl5/issues/22176\n\nPerl 5.36.3 seems to be fine (latest stable release before 5.38.x).\n5.38.0 and 5.38.2 are broken.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n",
"msg_date": "Fri, 26 Apr 2024 20:04:23 +0300",
"msg_from": "Anton Voloshin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "Anton Voloshin <[email protected]> writes:\n> On 26/04/2024 17:38, Anton Voloshin wrote:\n>> I will try to report this to Perl community later.\n\n> Reported under https://github.com/Perl/perl5/issues/22176\n\nThanks for doing that.\n\n> Perl 5.36.3 seems to be fine (latest stable release before 5.38.x).\n> 5.38.0 and 5.38.2 are broken.\n\nIf the misbehavior is that new, I'm inclined to do nothing about it,\nfiguring that they'll fix it sooner not later. If we were seeing\nfailures in main-line check-world tests then maybe it'd be worth\nband-aiding those, but AFAICS we're not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Apr 2024 13:20:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
},
{
"msg_contents": "\nOn 2023-04-07 Fr 10:00, Tom Lane wrote:\n> Alexander Korotkov<[email protected]> writes:\n>> On Thu, Apr 6, 2023 at 8:18 PM Tom Lane<[email protected]> wrote:\n>>> psql: add an optional execution-count limit to \\watch.\n>> This commit makes tests fail for me. psql parses 'i' option of\n>> '\\watch' using locale-aware strtod(), but 001_basic.pl uses hard-coded\n>> decimal separator.\n> Huh, yeah, I see it too if I set LANG=ru_RU.utf8 before running psql's\n> TAP tests. It seems unfortunate that none of the buildfarm has noticed\n> this. I guess all the TAP tests are run under C locale?\n\n\n[just noticed this, redirecting to -hackers]\n\n\nWhen run under meson, yes unless the LANG/LC_* settings are explicitly \nin the build_env. I'm fixing that so we will allow them to pass through. \nWhen run with configure/make they run with whatever is in the calling \nenvironment unless overridden in the build_env.\n\nWe do have support for running installchecks with multiple locales.This \nis done by passing --locale=foo to initdb.\n\nWe could locale-enable the non-install checks (for meson builds, that's \nthe 'misc-check' step, for configure/make builds it's more or less \neverything between the install stages and the (first) initdb step. We'd \nhave to do that via appropriate environment settings, I guess. Would it \nbe enough to set LANG, or do we need to set the LC_foo settings \nindividually? Not sure how we manage it on Windows. Maybe just not \nenable it for the first go-round.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 27 Apr 2024 08:15:36 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: psql: add an optional execution-count limit to \\watch."
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI opened https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree and\nclone the repo. The README file says that I can read the INSTALL file to\nunderstand how to build from source. But there is no such file in Git\nsources. Is it expected? If so, why?\n\nBest,\ntison.\n\nHi Hackers,I opened https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree and clone the repo. The README file says that I can read the INSTALL file to understand how to build from source. But there is no such file in Git sources. Is it expected? If so, why?Best,tison.",
"msg_date": "Fri, 7 Apr 2023 04:13:51 +0800",
"msg_from": "tison <[email protected]>",
"msg_from_op": true,
"msg_subject": "Git sources doesn't contain the INSATLL file?"
},
{
"msg_contents": "> On 6 Apr 2023, at 22:13, tison <[email protected]> wrote:\n> \n> Hi Hackers,\n> \n> I opened https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree and clone the repo. The README file says that I can read the INSTALL file to understand how to build from source. But there is no such file in Git sources. Is it expected? If so, why?\n\nit's very expected, the README.git file contains:\n\n\t\"In a release or snapshot tarball of PostgreSQL, a documentation file\n\tnamed INSTALL will appear in this directory. However, this file is not\n\tstored in git and so will not be present if you are using a git\n\tcheckout.\"\n\nThe INSTALL file was removed in 54d314c93c0baf5d3bd303d206d8ab9f58be1c37 a long\ntime ago.\n\nThat being said, maybe README should have wording along the lines of the above\nsince it's now referring to a file which might not exist?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 22:21:31 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Git sources doesn't contain the INSATLL file?"
},
{
"msg_contents": "Hi Daniel,\n\nAs a first-time developer I don't know how the released README / INSTALL\nare generated. So it's quite unintuitive to find the README.git file.\n\nI may expect an entry in README or a BUILD file when I obtain the sources\nby git clone, while a conventional ./configure && make works so it may be\nfine.\n\nI don't know why we need a different README.git file and cannot include\nINSTALL in the source cloned via git, so I cannot make suggestions here\nbecause the obvious solution is 'mv README.git INSTALL'.\n\nBest,\ntison.\n\n\nDaniel Gustafsson <[email protected]> 于2023年4月7日周五 04:21写道:\n\n> > On 6 Apr 2023, at 22:13, tison <[email protected]> wrote:\n> >\n> > Hi Hackers,\n> >\n> > I opened https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree and\n> clone the repo. The README file says that I can read the INSTALL file to\n> understand how to build from source. But there is no such file in Git\n> sources. Is it expected? If so, why?\n>\n> it's very expected, the README.git file contains:\n>\n> \"In a release or snapshot tarball of PostgreSQL, a documentation\n> file\n> named INSTALL will appear in this directory. However, this file\n> is not\n> stored in git and so will not be present if you are using a git\n> checkout.\"\n>\n> The INSTALL file was removed in 54d314c93c0baf5d3bd303d206d8ab9f58be1c37 a\n> long\n> time ago.\n>\n> That being said, maybe README should have wording along the lines of the\n> above\n> since it's now referring to a file which might not exist?\n>\n> --\n> Daniel Gustafsson\n>\n>\n\nHi Daniel,As a first-time developer I don't know how the released README / INSTALL are generated. So it's quite unintuitive to find the README.git file.I may expect an entry in README or a BUILD file when I obtain the sources by git clone, while a conventional ./configure && make works so it may be fine.I don't know why we need a different README.git file and cannot include INSTALL in the source cloned via git, so I cannot make suggestions here because the obvious solution is 'mv README.git INSTALL'.Best,tison.Daniel Gustafsson <[email protected]> 于2023年4月7日周五 04:21写道:> On 6 Apr 2023, at 22:13, tison <[email protected]> wrote:\n> \n> Hi Hackers,\n> \n> I opened https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree and clone the repo. The README file says that I can read the INSTALL file to understand how to build from source. But there is no such file in Git sources. Is it expected? If so, why?\n\nit's very expected, the README.git file contains:\n\n \"In a release or snapshot tarball of PostgreSQL, a documentation file\n named INSTALL will appear in this directory. However, this file is not\n stored in git and so will not be present if you are using a git\n checkout.\"\n\nThe INSTALL file was removed in 54d314c93c0baf5d3bd303d206d8ab9f58be1c37 a long\ntime ago.\n\nThat being said, maybe README should have wording along the lines of the above\nsince it's now referring to a file which might not exist?\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 8 Jun 2023 19:14:18 +0800",
"msg_from": "tison <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Git sources doesn't contain the INSATLL file?"
}
] |
[
{
"msg_contents": "I was reading the logical replication code and found a little\nunnecessary work we are doing.\n\nThe confirmed_flushed_lsn cannot reasonably be ahead of the\ncurrent_lsn, so there is no point of calling\nLogicalConfirmReceivedLocation() every time we update the candidate\nxmin or restart_lsn.\n\nPatch is attached.",
"msg_date": "Fri, 7 Apr 2023 10:35:18 -0700",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unnecessary confirm work on logical replication"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 11:06 PM Emre Hasegeli <[email protected]> wrote:\n>\n> I was reading the logical replication code and found a little\n> unnecessary work we are doing.\n>\n> The confirmed_flushed_lsn cannot reasonably be ahead of the\n> current_lsn, so there is no point of calling\n> LogicalConfirmReceivedLocation() every time we update the candidate\n> xmin or restart_lsn.\n\nIn fact, the WAL sender always starts reading WAL from restart_lsn,\nwhich in turn is always <= confirmed_flush_lsn. While reading WAL, WAL\nsender may read XLOG_RUNNING_XACTS WAL record with lsn <=\nconfirmed_flush_lsn. While processing XLOG_RUNNING_XACTS record it may\nupdate its restart_lsn and catalog_xmin with current_lsn = lsn fo\nXLOG_RUNNING_XACTS record. In this situation current_lsn <=\nconfirmed_flush_lsn.\n\nDoes that make sense?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 10 Apr 2023 16:55:58 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary confirm work on logical replication"
},
{
"msg_contents": "> In fact, the WAL sender always starts reading WAL from restart_lsn,\n> which in turn is always <= confirmed_flush_lsn. While reading WAL, WAL\n> sender may read XLOG_RUNNING_XACTS WAL record with lsn <=\n> confirmed_flush_lsn. While processing XLOG_RUNNING_XACTS record it may\n> update its restart_lsn and catalog_xmin with current_lsn = lsn fo\n> XLOG_RUNNING_XACTS record. In this situation current_lsn <=\n> confirmed_flush_lsn.\n\nThis can only happen when the WAL sender is restarted. However in\nthis case, the restart_lsn and catalog_xmin should have already been\npersisted by the previous run of the WAL sender.\n\nI still doubt these calls are necessary. I think there is a\ncomplicated chicken and egg problem here. Here is my logic:\n\n1) LogicalConfirmReceivedLocation() is called explicitly when\nconfirmed_flush is sent by the replication client.\n\n2) LogicalConfirmReceivedLocation() is the only place that updates\nconfirmed_flush.\n\n3) The replication client can only send a confirmed_flush for a\ncurrent_lsn it has already received.\n\n4) These two functions have already run for any current_lsn the\nreplication client has received.\n\n5) These two functions call LogicalConfirmReceivedLocation() only if\ncurrent_lsn <= confirmed_flush.\n\nThank you for your patience.\n\n\n",
"msg_date": "Tue, 11 Apr 2023 14:58:05 +0200",
"msg_from": "Emre Hasegeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unnecessary confirm work on logical replication"
},
{
"msg_contents": "On 4/11/23 14:58, Emre Hasegeli wrote:\n>> In fact, the WAL sender always starts reading WAL from restart_lsn,\n>> which in turn is always <= confirmed_flush_lsn. While reading WAL, WAL\n>> sender may read XLOG_RUNNING_XACTS WAL record with lsn <=\n>> confirmed_flush_lsn. While processing XLOG_RUNNING_XACTS record it may\n>> update its restart_lsn and catalog_xmin with current_lsn = lsn fo\n>> XLOG_RUNNING_XACTS record. In this situation current_lsn <=\n>> confirmed_flush_lsn.\n> \n> This can only happen when the WAL sender is restarted. However in\n> this case, the restart_lsn and catalog_xmin should have already been\n> persisted by the previous run of the WAL sender.\n> \n> I still doubt these calls are necessary. I think there is a\n> complicated chicken and egg problem here. Here is my logic:\n> \n> 1) LogicalConfirmReceivedLocation() is called explicitly when\n> confirmed_flush is sent by the replication client.\n> \n> 2) LogicalConfirmReceivedLocation() is the only place that updates\n> confirmed_flush.\n> \n> 3) The replication client can only send a confirmed_flush for a\n> current_lsn it has already received.\n> \n> 4) These two functions have already run for any current_lsn the\n> replication client has received.\n> \n> 5) These two functions call LogicalConfirmReceivedLocation() only if\n> current_lsn <= confirmed_flush.\n> \n> Thank you for your patience.\n> \n\nHi Emre,\n\nI was going through my TODO list of messages, and I stumbled on this\nthread from a couple months back. Do you still think this is something\nwe should do?\n\nI see there was some discussion about whether this update is needed, or\nwhether current_lsn can ever be <= confirmed_flush_lsn. I think you may\nbe right this can't happen, but I wonder if we could verify that by an\nassert in a convenient place (instead of just removing the update).\n\nAlso, did you try to quantify how expensive this is? The update seems\nvery cheap, but I guess you just noticed by eye-balling the code, which\nis fine ofc. Even if it's cheap/not noticeable, it still may be worth\nremoving so as not to confuse people improving the code in the future.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Nov 2023 22:34:43 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unnecessary confirm work on logical replication"
}
] |
[
{
"msg_contents": "Hi,\n\nIf you build with --with-wal-blocksize=/-Dwal_blocksize= anything but\n8, this breaks:\n\nrunning bootstrap script ... LOG: GUC (PGC_INT)\nwal_writer_flush_after, boot_val=256, C-var=128\nTRAP: failed Assert(\"check_GUC_init(hentry->gucvar)\"), File: \"guc.c\",\nLine: 1519, PID: 84605",
"msg_date": "Sat, 8 Apr 2023 13:18:55 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "check_GUC_init(wal_writer_flush_after) fails with non-default block\n size"
},
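The mismatch in the log line above is between a boot value computed from the WAL block size (the documented default for wal_writer_flush_after is 1 MB of WAL) and a C-variable initializer that was hard-coded for the default 8 kB blocks. A toy sketch of the arithmetic, assuming a 4 kB WAL block size, which reproduces the boot_val=256 / C-var=128 pair from the report; the macro and variable names here are illustrative, not the actual PostgreSQL sources:

```c
#include <stdio.h>

#define XLOG_BLCKSZ (4 * 1024)	/* e.g. built with --with-wal-blocksize=4 */

/* Boot value: 1 MB of WAL expressed in blocks, so it tracks XLOG_BLCKSZ. */
#define WAL_WRITER_FLUSH_AFTER_DEFAULT ((1024 * 1024) / XLOG_BLCKSZ)

/* Hard-coded initializer: only matches the boot value for 8 kB blocks. */
static int	wal_writer_flush_after = 128;

int
main(void)
{
	printf("boot_val=%d, C-var=%d\n",
		   WAL_WRITER_FLUSH_AFTER_DEFAULT, wal_writer_flush_after);
	/* Prints "boot_val=256, C-var=128": the inconsistency check_GUC_init() trips on. */
	return 0;
}
```

The attached fix presumably makes the initializer use the same block-size-relative expression, so the two stay equal for any --with-wal-blocksize value.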
{
"msg_contents": "Hi,\n\nOn 2023-04-08 13:18:55 +1200, Thomas Munro wrote:\n> If you build with --with-wal-blocksize=/-Dwal_blocksize= anything but\n> 8, this breaks:\n> \n> running bootstrap script ... LOG: GUC (PGC_INT)\n> wal_writer_flush_after, boot_val=256, C-var=128\n> TRAP: failed Assert(\"check_GUC_init(hentry->gucvar)\"), File: \"guc.c\",\n> Line: 1519, PID: 84605\n\n> From 48d971e0b19f770991e334b8dc38422462b4485e Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <[email protected]>\n> Date: Sat, 8 Apr 2023 13:12:48 +1200\n> Subject: [PATCH] Fix default wal_writer_flush_after value.\n\nLGTM.\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Fri, 7 Apr 2023 22:21:03 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: check_GUC_init(wal_writer_flush_after) fails with non-default\n block size"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 5:21 PM Andres Freund <[email protected]> wrote:\n> On 2023-04-08 13:18:55 +1200, Thomas Munro wrote:\n> > Subject: [PATCH] Fix default wal_writer_flush_after value.\n> LGTM.\n\nThanks, pushed.\n\n\n",
"msg_date": "Mon, 15 May 2023 11:26:31 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: check_GUC_init(wal_writer_flush_after) fails with non-default\n block size"
},
{
"msg_contents": "On Mon, May 15, 2023 at 11:26:31AM +1200, Thomas Munro wrote:\n> On Sat, Apr 8, 2023 at 5:21 PM Andres Freund <[email protected]> wrote:\n> > On 2023-04-08 13:18:55 +1200, Thomas Munro wrote:\n> > > Subject: [PATCH] Fix default wal_writer_flush_after value.\n> > LGTM.\n> \n> Thanks, pushed.\n\nSomewhat forgot about this thread. Thanks for the fix!\n--\nMichael",
"msg_date": "Mon, 15 May 2023 08:53:32 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: check_GUC_init(wal_writer_flush_after) fails with non-default\n block size"
}
] |
[
{
"msg_contents": "Greetings,\n\nLooks like longfin has a particularly old Kerberos/GSSAPI installation\non it which pre-dates MIT release 1.11 from circa 2012 and is missing\ngssapi_ext.h, causing the recently committed patch to add Kerberos\ncredential delegation to fail to build.\n\nI'm inclined to update our configure check to explicitly check for the\nneeded function (gss_store_cred_into) as no one should really be running\nwith such an out-dated (over a decade old...) version of MIT Kerberos.\n\nThoughts?\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 22:18:18 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Looks like longfin has a particularly old Kerberos/GSSAPI installation\n> on it\n\nIt's whatever Apple is shipping, or was shipping last year or so.\n\n> I'm inclined to update our configure check to explicitly check for the\n> needed function (gss_store_cred_into)\n\nSounds like a possible fix, although I wonder whether you shouldn't\nexplicitly check for the presence of this header.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 22:24:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Looks like longfin has a particularly old Kerberos/GSSAPI installation\n> > on it\n> \n> It's whatever Apple is shipping, or was shipping last year or so.\n\nSadly they've not been maintaining the Kerberos libraries at all on\ntheir systems.\n\n> > I'm inclined to update our configure check to explicitly check for the\n> > needed function (gss_store_cred_into)\n> \n> Sounds like a possible fix, although I wonder whether you shouldn't\n> explicitly check for the presence of this header.\n\nI'm good with either.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 22:28:05 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> * Tom Lane ([email protected]) wrote:\n>> It's whatever Apple is shipping, or was shipping last year or so.\n\n> Sadly they've not been maintaining the Kerberos libraries at all on\n> their systems.\n\nIndeed :-(. I wouldn't be surprised if there are security issues in\ntheir version. Perhaps what we really ought to do is refuse to build\nwith their version --- but if so, we need some clearer error message\nabout it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 22:31:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > * Tom Lane ([email protected]) wrote:\n> >> It's whatever Apple is shipping, or was shipping last year or so.\n> \n> > Sadly they've not been maintaining the Kerberos libraries at all on\n> > their systems.\n> \n> Indeed :-(. I wouldn't be surprised if there are security issues in\n> their version. Perhaps what we really ought to do is refuse to build\n> with their version --- but if so, we need some clearer error message\n> about it.\n\nThe attached should (I believe?) at least add the needed check for\ngssapi_ext.h which will cause builds to fail and complain about the\nheader being missing from their installation.\n\nI'm certainly open to ideas about how to provide a better error message,\nparticularly on OSX systems which have an ancient version, to make it\nclear that people need to install an updated version. I don't have an\nOSX system at hand though.\n\nShould I push this to at least address the header check ... ?\n\nThanks,\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 22:34:22 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost ([email protected]) wrote:\n> * Tom Lane ([email protected]) wrote:\n> > Stephen Frost <[email protected]> writes:\n> > > * Tom Lane ([email protected]) wrote:\n> > >> It's whatever Apple is shipping, or was shipping last year or so.\n> > \n> > > Sadly they've not been maintaining the Kerberos libraries at all on\n> > > their systems.\n> > \n> > Indeed :-(. I wouldn't be surprised if there are security issues in\n> > their version. Perhaps what we really ought to do is refuse to build\n> > with their version --- but if so, we need some clearer error message\n> > about it.\n> \n> The attached should (I believe?) at least add the needed check for\n> gssapi_ext.h which will cause builds to fail and complain about the\n> header being missing from their installation.\n> \n> I'm certainly open to ideas about how to provide a better error message,\n> particularly on OSX systems which have an ancient version, to make it\n> clear that people need to install an updated version. I don't have an\n> OSX system at hand though.\n> \n> Should I push this to at least address the header check ... ?\n\nLooks like buildfarm animal hake, at least, has a version recent enough\nto have gssapi_ext.h ... but still older than 1.11 and therefore\ndoesn't have the type gss_key_value_element_desc defined, so maybe the\ncheck for gss_store_cred_into would be better?\n\nCertainly interesting how many old kerberos library installations there\nare, even in our buildfarm..\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 22:38:33 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Looks like buildfarm animal hake, at least, has a version recent enough\n> to have gssapi_ext.h ... but still older than 1.11 and therefore\n> doesn't have the type gss_key_value_element_desc defined, so maybe the\n> check for gss_store_cred_into would be better?\n\nWell, now we're getting into value judgements about which gssapi\nversions are still worth supporting. Are you really willing to toss\noverboard all versions that don't support gss_store_cred_into? Or\nshould credential delegation be viewed as an incremental feature that\nwe can support or not?\n\nTBH, committing things with significant portability hazards ten hours\nbefore feature freeze is not high on my list of good development\npractices.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 22:50:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Looks like buildfarm animal hake, at least, has a version recent enough\n> > to have gssapi_ext.h ... but still older than 1.11 and therefore\n> > doesn't have the type gss_key_value_element_desc defined, so maybe the\n> > check for gss_store_cred_into would be better?\n> \n> Well, now we're getting into value judgements about which gssapi\n> versions are still worth supporting. Are you really willing to toss\n> overboard all versions that don't support gss_store_cred_into? Or\n> should credential delegation be viewed as an incremental feature that\n> we can support or not?\n\nI'm open to considering support for older versions, however ...\n\n> TBH, committing things with significant portability hazards ten hours\n> before feature freeze is not high on my list of good development\n> practices.\n\nbut as pointed out, these APIs are all over a decade old and systems\nwhich don't support them have a pretty high risk of having security\nissues due to shipping these out-dated libraries.\n\nI agree it's a value judgement and something to consider but I don't see\nApple changing their mind any time soon on actually updating the\nKerberos version they ship and no one should really be using what they\ndo ship. The same is true for any other system that's shipping a\nversion of a core security library that's not been updated in over a\ndecade.\n\nWe are currently requiring at least OpenSSL 1.0.1 which was released in\n2012. Having a similar requirement for MIT Kerberos, for our release of\nPG in 2023, doesn't strike me as unreasonable.\n\nAttached is a more fully-formed patch with a regenerated configure that\nadds in a check for gssapi_ext.h and updates the function check to look\nfor gss_store_cred_into().\n\nThanks!\n\nStephen",
"msg_date": "Fri, 7 Apr 2023 23:06:00 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
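As a rough illustration of what the updated configure probe described above is testing for, here is a stand-alone C program (not the actual Autoconf check in the attached patch) that only compiles and links against a GSSAPI implementation shipping gssapi_ext.h and exporting gss_store_cred_into(), which the thread pegs at MIT Kerberos 1.11 or later:

```c
/*
 * Illustrative probe only; the real check lives in configure/configure.ac.
 * Build with something like: cc probe.c $(krb5-config --libs gssapi)
 */
#include <gssapi/gssapi.h>
#include <gssapi/gssapi_ext.h>	/* MIT extension header */

int
main(void)
{
	/* Reference the symbol so compiling/linking fails where it is missing. */
	OM_uint32	(*fn) (void) = (OM_uint32 (*)(void)) gss_store_cred_into;

	return fn != 0 ? 0 : 1;
}
```

Heimdal and the OS-bundled libraries discussed later in the thread would fail this probe at the #include step, which is the point of checking the header as well as the function.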
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> I'm open to considering support for older versions, however ...\n\nNetBSD 9.3, which is their *latest production release*, doesn't have\ngssapi_ext.h [1]. For that matter, it doesn't look like their\nnot-yet-released 10.0BETA does either (my NetBSD 10 animals would\nbe failing if they had --with-gssapi). I do not think it's going\nto be acceptable to require this feature.\n\nI'm now going to reiterate my opinion that this patch should not\nhave been pushed in at this point of the dev cycle. A month ago,\nthere was time to deal with these sorts of issues.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2023-04-08%2002%3A38%3A31\n\n\n",
"msg_date": "Sat, 08 Apr 2023 00:20:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 22:50:18 -0400, Tom Lane wrote:\n> Or should credential delegation be viewed as an incremental feature that we\n> can support or not?\n\nThat seems like the best way forward here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Apr 2023 02:41:27 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > I'm open to considering support for older versions, however ...\n> \n> NetBSD 9.3, which is their *latest production release*, doesn't have\n> gssapi_ext.h [1]. For that matter, it doesn't look like their\n> not-yet-released 10.0BETA does either (my NetBSD 10 animals would\n> be failing if they had --with-gssapi). I do not think it's going\n> to be acceptable to require this feature.\n\nI'm certainly curious to understand what Kerberos library they're using\nand how they're maintaining it. At least some of the documentation I've\nfound seems to indicate that it might be Heimdal, which does have\ngss_store_cred_into in gssapi.h.\n\n> I'm now going to reiterate my opinion that this patch should not\n> have been pushed in at this point of the dev cycle. A month ago,\n> there was time to deal with these sorts of issues.\n\nI suspected there would be an issue with OSX but hadn't expected an\nissue with NetBSD. I had tested this across a few Linux platforms and\ncfbot showed it wasn't causing issues on Windows or the platforms that\nare run there. Would be really great to have a way to test these things\nout on these other platforms other than just committing them and seeing\nwhat happens on the buildfarm.\n\nIn any case, I've reverted it and we can pick this up for the next\ncycle. I'll play around with Heimdal locally as it appears to be\navailable on Ubuntu (I had actually thought Heimdal to be mostly gone at\nthis point since it only ever existed really due to silly export\nrestrictions...) and see if there's actually anything other than making\ngssapi_ext.h itself be optional to be pulled in that's needed to make it\nwork.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 8 Apr 2023 07:43:38 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund ([email protected]) wrote:\n> On 2023-04-07 22:50:18 -0400, Tom Lane wrote:\n> > Or should credential delegation be viewed as an incremental feature that we\n> > can support or not?\n> \n> That seems like the best way forward here.\n\nYeah, that's certainly doable too, though I'm really not sure we should\nbe accepting OSX's GSSAPI library and that might really be the only case\nat issue here. Either way, I've reverted it and will see about picking\nit up for the next cycle (again) and hopefully be able to work through\nthese issues either by having it be optional or, if NetBSD and Heimdal\nactually support all the APIs just with different headers, perhaps\ndeciding we're willing to require them.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 8 Apr 2023 07:47:45 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> I suspected there would be an issue with OSX but hadn't expected an\n> issue with NetBSD. I had tested this across a few Linux platforms and\n> cfbot showed it wasn't causing issues on Windows or the platforms that\n> are run there. Would be really great to have a way to test these things\n> out on these other platforms other than just committing them and seeing\n> what happens on the buildfarm.\n\nI poked around a bit more and found that:\n\n* NetBSD's package collection[1] includes both Heimdal and MIT Kerberos\n(mit-krb5). Apparently what's installed on at least some of the buildfarm\nanimals is the former.\n\n* FreeBSD seems to offer *only* Heimdal [2]; OpenBSD ditto [3].\n\n* I cannot find any sign of either gss_store_cred_into or gssapi_ext.h\nin FreeBSD's Heimdal (7.8.0_6).\n\nSo it does not look like supporting Heimdal is going to be optional,\nand that means the credential delegation feature is going to have\nto be optional, or else we need to find some equivalent Heimdal APIs.\n\nI share your feeling that we could probably blow off Apple's built-in\nGSSAPI. MacPorts offers both Heimdal and kerberos5, and I imagine\nHomebrew has at least one of them, so Mac people could easily get\nhold of newer implementations. But the BSDen are going to be a\nproblem.\n\n\t\t\tregards, tom lane\n\n[1] https://cdn.netbsd.org/pub/pkgsrc/current/pkgsrc/security/index.html\n[2] https://ports.freebsd.org/cgi/ports.cgi?query=kerberos&stype=all&sektion=all\n[3] https://cdn.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/\n\n\n",
"msg_date": "Sat, 08 Apr 2023 13:47:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > I suspected there would be an issue with OSX but hadn't expected an\n> > issue with NetBSD. I had tested this across a few Linux platforms and\n> > cfbot showed it wasn't causing issues on Windows or the platforms that\n> > are run there. Would be really great to have a way to test these things\n> > out on these other platforms other than just committing them and seeing\n> > what happens on the buildfarm.\n> \n> I poked around a bit more and found that:\n> \n> * NetBSD's package collection[1] includes both Heimdal and MIT Kerberos\n> (mit-krb5). Apparently what's installed on at least some of the buildfarm\n> animals is the former.\n> \n> * FreeBSD seems to offer *only* Heimdal [2]; OpenBSD ditto [3].\n> \n> * I cannot find any sign of either gss_store_cred_into or gssapi_ext.h\n> in FreeBSD's Heimdal (7.8.0_6).\n> \n> So it does not look like supporting Heimdal is going to be optional,\n> and that means the credential delegation feature is going to have\n> to be optional, or else we need to find some equivalent Heimdal APIs.\n\nThanks for doing that digging!\n\nI've been looking too and while Heimdal added gss_store_cred_into in\ntheir development branch 5 years ago[1] (!), it's not made it into an\nactual release. Good that they seem to at least be maintaining it\nenough to deal with CVEs, but unfortunately I'm fairly confident that\nthere won't be a way to support constrained delegation (which is the\nnext goal, once unconstrained delegation is in and working) on the\nHeimdal platforms. I suspected that would have to be optional anyway,\nbut I hadn't expected it to hit all the BSD platforms.\n\nIn any case, for this I'm working switching over to gss_store_cred()\nwhich does seem to be available in the Heimdal Debian packages that I\nwas able to install locally (looks to be 7.7.0) and should work just\nfine for these purposes, though it requires a bit more work on the\nlibpq side as we need to tell libpq explicitly the name which was on\nthe delegated credential when we call gss_acquire_cred().\n\nOnce that's done, should be able to drop the gssapi_ext.h include\nentirely and still have the test suite able to run with MIT Kerberos.\n\nOne thing I'm on the fence about is trying to make the test suite\nactually work with Heimdal.. I'm planning to install the Heimdal KDC,\net al, and see what happens but if it ends up looking like it's a lot of\nwork then I might forgo that effort. I'm not sure it's really necessary\nbut I could be argued out of that position without too much effort. The\nstated Heimdal goal is to be a re-implementation of MIT Kerberos and\nthese are all documented APIs with RFCs, after all.\n\n> I share your feeling that we could probably blow off Apple's built-in\n> GSSAPI. MacPorts offers both Heimdal and kerberos5, and I imagine\n> Homebrew has at least one of them, so Mac people could easily get\n> hold of newer implementations. But the BSDen are going to be a\n> problem.\n\nYeah. Unfortunate that Heimdal doesn't seem to really be moving forward\nin terms of new development.\n\nThanks,\n\nStephen\n\n[1] https://github.com/heimdal/heimdal/commit/e0bb9c10cad0fd98245caecf8af8fca855b2df49",
"msg_date": "Sat, 8 Apr 2023 14:04:41 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "I wrote:\n> * NetBSD's package collection[1] includes both Heimdal and MIT Kerberos\n> (mit-krb5). Apparently what's installed on at least some of the buildfarm\n> animals is the former.\n\nOh! New data: the core NetBSD OS includes a copy of Heimdal (looks\nto be 7.7.0 in the 10.0_BETA sources). The installable package is\na slightly newer version, 7.8.0, but I think it's a very solid bet\nthat the relevant buildfarm animals are just using the core copy\nand haven't installed the add-on package. Even if they had, it\nwould take some fooling around with include and link paths to pull\nin the packaged version rather than the built-in one.\n\nThe exact same thing applies to FreeBSD, except that their in-core\nHeimdal is ancient (1.5.2). Also, they do have MIT Kerberos\navailable as a package [1]. I'd been misled by the lack of a hit\non \"kerberos\", but \"krb5\" finds it. Our code does compile against\nthat version of Heimdal, but src/test/kerberos/ refuses to try to\nrun.\n\nI've not dug into OpenBSD any further.\n\n\t\t\tregards, tom lane\n\n[1] https://ports.freebsd.org/cgi/ports.cgi?query=krb5&stype=all&sektion=all\n\n\n",
"msg_date": "Sat, 08 Apr 2023 14:40:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "On Sun, Apr 9, 2023 at 6:40 AM Tom Lane <[email protected]> wrote:\n> The exact same thing applies to FreeBSD, except that their in-core\n> Heimdal is ancient (1.5.2). Also, they do have MIT Kerberos\n> available as a package [1]. I'd been misled by the lack of a hit\n> on \"kerberos\", but \"krb5\" finds it. Our code does compile against\n> that version of Heimdal, but src/test/kerberos/ refuses to try to\n> run.\n\nFWIW my FBSD animal elver has krb5 installed. Sorry it wasn't running\nwhen the relevant commit landed. Stupid network cable wriggled out.\n\n\n",
"msg_date": "Sun, 9 Apr 2023 10:32:25 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro ([email protected]) wrote:\n> On Sun, Apr 9, 2023 at 6:40 AM Tom Lane <[email protected]> wrote:\n> > The exact same thing applies to FreeBSD, except that their in-core\n> > Heimdal is ancient (1.5.2). Also, they do have MIT Kerberos\n> > available as a package [1]. I'd been misled by the lack of a hit\n> > on \"kerberos\", but \"krb5\" finds it. Our code does compile against\n> > that version of Heimdal, but src/test/kerberos/ refuses to try to\n> > run.\n> \n> FWIW my FBSD animal elver has krb5 installed. Sorry it wasn't running\n> when the relevant commit landed. Stupid network cable wriggled out.\n\nYeah, I wouldn't be the least bit surprised if many folks running\nFreeBSD with any interest in Kerberos have MIT Kerberos installed given\nthat Heimdal doesn't seem to be under any kind of ongoing active\ndevelopment and is just in this maintenance mode.\n\nHave you tried running the tests in src/test/kerberos with elver? Or is\nit configured to run them? Would be awesome if it could be, or if\nthere's issues with running the tests on FBSD w/ MIT Kerberos, I'd be\nhappy to try and help work through them.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 10 Apr 2023 10:31:54 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Yeah, I wouldn't be the least bit surprised if many folks running\n> FreeBSD with any interest in Kerberos have MIT Kerberos installed given\n> that Heimdal doesn't seem to be under any kind of ongoing active\n> development and is just in this maintenance mode.\n\nYeah, that's a pretty scary situation for security-critical software.\nMaybe we should just desupport Heimdal, rather than investing effort\nto the contrary?\n\nAlso, the core-code versions of Heimdal in these BSDen are even scarier\nthan the upstream releases, so I'm thinking that the fact that we\ncurrently compile against them is more a net negative than a positive.\n(Same logic as for macOS, really.)\n\nIOW, maybe it'd be okay to de-revert 3d4fa227b and add documentation\nsaying that --with-gssapi requires MIT Kerberos not Heimdal.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Apr 2023 11:16:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Yeah, I wouldn't be the least bit surprised if many folks running\n> > FreeBSD with any interest in Kerberos have MIT Kerberos installed given\n> > that Heimdal doesn't seem to be under any kind of ongoing active\n> > development and is just in this maintenance mode.\n> \n> Yeah, that's a pretty scary situation for security-critical software.\n\nAgreed.\n\n> Maybe we should just desupport Heimdal, rather than investing effort\n> to the contrary?\n\nAs this is for a new major PG release, I'd be in support of that. I\nwould like to get the kerberos tests working on a FreeBSD buildfarm\nanimal with MIT Kerberos installed, if possible.\n\n> Also, the core-code versions of Heimdal in these BSDen are even scarier\n> than the upstream releases, so I'm thinking that the fact that we\n> currently compile against them is more a net negative than a positive.\n> (Same logic as for macOS, really.)\n\nAgreed. Still, I wouldn't go and break it for minor releases, but for a\nnew major version saying we no longer support Heimdal seems reasonable.\nThen folks have the usual 5-ish years (if they want to delay as long as\npossible) to move to MIT Kerberos.\n\n> IOW, maybe it'd be okay to de-revert 3d4fa227b and add documentation\n> saying that --with-gssapi requires MIT Kerberos not Heimdal.\n\nI'd be happy with that and can add the appropriate documentation noting\nthat we require MIT Kerberos. Presumably the appropriate step at this\npoint would be to check with the RMT?\n\nThanks!\n\nStephen",
"msg_date": "Mon, 10 Apr 2023 11:30:14 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> * Tom Lane ([email protected]) wrote:\n>> IOW, maybe it'd be okay to de-revert 3d4fa227b and add documentation\n>> saying that --with-gssapi requires MIT Kerberos not Heimdal.\n\n> I'd be happy with that and can add the appropriate documentation noting\n> that we require MIT Kerberos. Presumably the appropriate step at this\n> point would be to check with the RMT?\n\nYeah, RMT would have final say at this stage.\n\nIf you pull the trigger, a note to buildfarm-members would be\nappropriate too, so people will know if they need to remove\n--with-gssapi from animal configurations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Apr 2023 11:37:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "On 4/10/23 11:37 AM, Tom Lane wrote:\r\n> Stephen Frost <[email protected]> writes:\r\n>> * Tom Lane ([email protected]) wrote:\r\n>>> IOW, maybe it'd be okay to de-revert 3d4fa227b and add documentation\r\n>>> saying that --with-gssapi requires MIT Kerberos not Heimdal.\r\n> \r\n>> I'd be happy with that and can add the appropriate documentation noting\r\n>> that we require MIT Kerberos. Presumably the appropriate step at this\r\n>> point would be to check with the RMT?\r\n> \r\n> Yeah, RMT would have final say at this stage.\r\n> \r\n> If you pull the trigger, a note to buildfarm-members would be\r\n> appropriate too, so people will know if they need to remove\r\n> --with-gssapi from animal configurations.\r\n\r\nThe RMT discussed this thread and agrees to a \"de-revert\" of the \"Add \r\nsupport for Kerberos delegation\" patch, provided that:\r\n\r\n1. The appropriate documentation is added AND\r\n2. The de-revert occurs no later than 2023-04-15 (upper bound 2023-04-16 \r\n0:00 AoE).\r\n\r\nThere's an open item[1] for this task.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items#Open_Issues",
"msg_date": "Mon, 10 Apr 2023 22:40:31 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 2:31 AM Stephen Frost <[email protected]> wrote:\n> Have you tried running the tests in src/test/kerberos with elver? Or is\n> it configured to run them? Would be awesome if it could be, or if\n> there's issues with running the tests on FBSD w/ MIT Kerberos, I'd be\n> happy to try and help work through them.\n\nI'm also happy to test/help/improve the animal/teach CI to do\nit/whatever. I've made a note to test out the reverted commit later\ntoday when I'll be in front of the right computers.\n\n\n",
"msg_date": "Tue, 11 Apr 2023 14:53:51 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 2:53 PM Thomas Munro <[email protected]> wrote:\n> On Tue, Apr 11, 2023 at 2:31 AM Stephen Frost <[email protected]> wrote:\n> > Have you tried running the tests in src/test/kerberos with elver? Or is\n> > it configured to run them? Would be awesome if it could be, or if\n> > there's issues with running the tests on FBSD w/ MIT Kerberos, I'd be\n> > happy to try and help work through them.\n\nOh, the FreeBSD CI already runs the kerberos test and it uses the krb5\npackage, because the image it uses installs that[1] and at least the\nMeson build automatically prefers that over Heimdal. So the CI in the\npostgres/postgres account tested it[2] as soon as you committed, and\ncfbot was testing all along in the commitfest. It's not skipped and\nthat test would clearly BAIL_OUT if it detected Heimdal. Is that good\nenough?\n\nI have OpenBSD and NetBSD vagrant images around, do you want me to\ntest those too?\n\nAs for elver, I remembered an unfortunate detail: it doesn't currently\nhave kerberos enabled in PG_TEXT_EXTRA, because it the test depends on\nlocalhost being 127.0.0.1 which isn't quite true on this box\n(container tech with an unusual network stack, long boring story) and\nI hadn't got around to figuring out what to do about that. I can look\ninto it if you want, or perhaps you are satisfied with CI proving that\nFreeBSD likes your patch.\n\n[1] https://github.com/anarazel/pg-vm-images/blob/main/packer/freebsd.pkr.hcl\n[2] https://cirrus-ci.com/task/6378179762323456?logs=test_world#L43\n\n\n",
"msg_date": "Tue, 11 Apr 2023 16:48:29 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Jonathan S. Katz ([email protected]) wrote:\n> On 4/10/23 11:37 AM, Tom Lane wrote:\n> > Stephen Frost <[email protected]> writes:\n> > > * Tom Lane ([email protected]) wrote:\n> > > > IOW, maybe it'd be okay to de-revert 3d4fa227b and add documentation\n> > > > saying that --with-gssapi requires MIT Kerberos not Heimdal.\n> > \n> > > I'd be happy with that and can add the appropriate documentation noting\n> > > that we require MIT Kerberos. Presumably the appropriate step at this\n> > > point would be to check with the RMT?\n> > \n> > Yeah, RMT would have final say at this stage.\n> > \n> > If you pull the trigger, a note to buildfarm-members would be\n> > appropriate too, so people will know if they need to remove\n> > --with-gssapi from animal configurations.\n> \n> The RMT discussed this thread and agrees to a \"de-revert\" of the \"Add\n> support for Kerberos delegation\" patch, provided that:\n> \n> 1. The appropriate documentation is added AND\n> 2. The de-revert occurs no later than 2023-04-15 (upper bound 2023-04-16\n> 0:00 AoE).\n> \n> There's an open item[1] for this task.\n\nUnderstood. Please find attached the updated patch with changes to the\ncommit message to indicate that we now require MIT Kerberos, an\nadditional explicit check for gssapi_ext.h in configure.ac/configure,\nalong with updated documentation explicitly saying we require MIT\nKerberos for GSSAPI support.\n\nI'll plan to push this tomorrow.\n\nOf course, suggestions/improvements on documentation or anything else\nalways welcome.\n\nThanks all!\n\nStephen",
"msg_date": "Tue, 11 Apr 2023 21:18:08 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "> configure | 27 ++\n> configure.ac | 2 +\n\nDoes meson.build need the corresponding change ?\n\n\n",
"msg_date": "Tue, 11 Apr 2023 20:26:13 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Justin Pryzby ([email protected]) wrote:\n> > configure | 27 ++\n> > configure.ac | 2 +\n> \n> Does meson.build need the corresponding change ?\n\nAh, yes, presumably.\n\nSomething like the attached?\n\nThanks,\n\nStephen",
"msg_date": "Tue, 11 Apr 2023 21:30:16 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost ([email protected]) wrote:\n> Greetings,\n> \n> * Justin Pryzby ([email protected]) wrote:\n> > > configure | 27 ++\n> > > configure.ac | 2 +\n> > \n> > Does meson.build need the corresponding change ?\n> \n> Ah, yes, presumably.\n\nNo, more like attached actually. Picks up on the dependency properly\nwith this when I ran meson/ninja, at least.\n\nI'll include this then (along with any other suggestions, of course).\n\nThanks!\n\nStephen",
"msg_date": "Tue, 11 Apr 2023 21:45:13 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Understood. Please find attached the updated patch with changes to the\n> commit message to indicate that we now require MIT Kerberos, an\n> additional explicit check for gssapi_ext.h in configure.ac/configure,\n> along with updated documentation explicitly saying we require MIT\n> Kerberos for GSSAPI support.\n\nUm ... could you package this as a straight un-revert of the\nprevious commit, then a delta patch? Would be easier to review.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 00:00:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Understood. Please find attached the updated patch with changes to the\n> > commit message to indicate that we now require MIT Kerberos, an\n> > additional explicit check for gssapi_ext.h in configure.ac/configure,\n> > along with updated documentation explicitly saying we require MIT\n> > Kerberos for GSSAPI support.\n> \n> Um ... could you package this as a straight un-revert of the\n> previous commit, then a delta patch? Would be easier to review.\n\nSure, reworked that way and attached.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 12 Apr 2023 10:33:22 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "On 4/12/23 10:33 AM, Stephen Frost wrote:\r\n> Greetings,\r\n> \r\n> * Tom Lane ([email protected]) wrote:\r\n>> Stephen Frost <[email protected]> writes:\r\n>>> Understood. Please find attached the updated patch with changes to the\r\n>>> commit message to indicate that we now require MIT Kerberos, an\r\n>>> additional explicit check for gssapi_ext.h in configure.ac/configure,\r\n>>> along with updated documentation explicitly saying we require MIT\r\n>>> Kerberos for GSSAPI support.\r\n>>\r\n>> Um ... could you package this as a straight un-revert of the\r\n>> previous commit, then a delta patch? Would be easier to review.\r\n> \r\n> Sure, reworked that way and attached.\r\n\r\nDocs read well. A few questions/commenets:\r\n\r\n* On [1] -- do we want to add a note that it's not just Kerberos, but \r\nMIT Kerberos?\r\n\r\n* On [2] -- we mention \"kadmin tool of MIT-compatible Kerberos 5\" which \r\nis AIUI is still technically correct, but do we want to drop the \r\n\"-compatible?\" (precedent in [3])\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/devel/install-requirements.html\r\n[2] https://www.postgresql.org/docs/devel/gssapi-auth.html\r\n[3] https://www.postgresql.org/docs/devel/regress-run.html",
"msg_date": "Wed, 12 Apr 2023 10:40:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "> On 12 Apr 2023, at 16:33, Stephen Frost <[email protected]> wrote:\n\n> Sure, reworked that way and attached.\n\nWhile not changed in this hunk, does the comment regarding Heimdal still apply?\n\n@@ -918,6 +919,7 @@ pg_GSS_recvauth(Port *port)\n \tint\t\t\tmtype;\n \tStringInfoData buf;\n \tgss_buffer_desc gbuf;\n+\tgss_cred_id_t delegated_creds;\n \n \t/*\n \t * Use the configured keytab, if there is one. Unfortunately, Heimdal\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 12 Apr 2023 16:45:33 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Jonathan S. Katz ([email protected]) wrote:\n> On 4/12/23 10:33 AM, Stephen Frost wrote:\n> > * Tom Lane ([email protected]) wrote:\n> > > Stephen Frost <[email protected]> writes:\n> > > > Understood. Please find attached the updated patch with changes to the\n> > > > commit message to indicate that we now require MIT Kerberos, an\n> > > > additional explicit check for gssapi_ext.h in configure.ac/configure,\n> > > > along with updated documentation explicitly saying we require MIT\n> > > > Kerberos for GSSAPI support.\n> > > \n> > > Um ... could you package this as a straight un-revert of the\n> > > previous commit, then a delta patch? Would be easier to review.\n> > \n> > Sure, reworked that way and attached.\n> \n> Docs read well. A few questions/commenets:\n> \n> * On [1] -- do we want to add a note that it's not just Kerberos, but MIT\n> Kerberos?\n\nYes, makes sense, updated.\n\n> * On [2] -- we mention \"kadmin tool of MIT-compatible Kerberos 5\" which is\n> AIUI is still technically correct, but do we want to drop the \"-compatible?\"\n> (precedent in [3])\n\nYup, cleaned that up also.\n\nUpdated patch set attached.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 12 Apr 2023 10:47:58 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "On 4/12/23 10:47 AM, Stephen Frost wrote:\r\n> Greetings,\r\n> \r\n> * Jonathan S. Katz ([email protected]) wrote:\r\n>> On 4/12/23 10:33 AM, Stephen Frost wrote:\r\n>>> * Tom Lane ([email protected]) wrote:\r\n>>>> Stephen Frost <[email protected]> writes:\r\n>>>>> Understood. Please find attached the updated patch with changes to the\r\n>>>>> commit message to indicate that we now require MIT Kerberos, an\r\n>>>>> additional explicit check for gssapi_ext.h in configure.ac/configure,\r\n>>>>> along with updated documentation explicitly saying we require MIT\r\n>>>>> Kerberos for GSSAPI support.\r\n>>>>\r\n>>>> Um ... could you package this as a straight un-revert of the\r\n>>>> previous commit, then a delta patch? Would be easier to review.\r\n>>>\r\n>>> Sure, reworked that way and attached.\r\n>>\r\n>> Docs read well. A few questions/commenets:\r\n>>\r\n>> * On [1] -- do we want to add a note that it's not just Kerberos, but MIT\r\n>> Kerberos?\r\n> \r\n> Yes, makes sense, updated.\r\n> \r\n>> * On [2] -- we mention \"kadmin tool of MIT-compatible Kerberos 5\" which is\r\n>> AIUI is still technically correct, but do we want to drop the \"-compatible?\"\r\n>> (precedent in [3])\r\n> \r\n> Yup, cleaned that up also.\r\n> \r\n> Updated patch set attached.\r\n\r\nThanks! I'll sign off on the docs portion.\r\n\r\nThe meson build code looks good to me (I just compared it to what \r\nalready exists). Similar comment to the autoconf code.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 12 Apr 2023 10:54:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Daniel Gustafsson ([email protected]) wrote:\n> > On 12 Apr 2023, at 16:33, Stephen Frost <[email protected]> wrote:\n> > Sure, reworked that way and attached.\n> \n> While not changed in this hunk, does the comment regarding Heimdal still apply?\n> \n> @@ -918,6 +919,7 @@ pg_GSS_recvauth(Port *port)\n> \tint\t\t\tmtype;\n> \tStringInfoData buf;\n> \tgss_buffer_desc gbuf;\n> +\tgss_cred_id_t delegated_creds;\n> \n> \t/*\n> \t * Use the configured keytab, if there is one. Unfortunately, Heimdal\n\nGood catch. No, it doesn't. I'm not anxious to actually change that\ncode at this point but we could certainly consider changing it in the\nfuture. I'll update this comment (and the identical one in\nsecure_open_gssapi) accordingly.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 12 Apr 2023 10:55:21 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Updated patch set attached.\n\nLGTM\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 11:55:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Updated patch set attached.\n> \n> LGTM\n\nGreat, thanks.\n\nI cleaned up the commit messages a bit more and added links to the\ndiscussion. If there isn't anything more then I'll plan to push these\nlater today or tomorrow.\n\nThanks again!\n\nStephen",
"msg_date": "Wed, 12 Apr 2023 12:22:57 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "> On 12 Apr 2023, at 16:55, Stephen Frost <[email protected]> wrote:\n> \n> Greetings,\n> \n> * Daniel Gustafsson ([email protected]) wrote:\n>>> On 12 Apr 2023, at 16:33, Stephen Frost <[email protected]> wrote:\n>>> Sure, reworked that way and attached.\n>> \n>> While not changed in this hunk, does the comment regarding Heimdal still apply?\n>> \n>> @@ -918,6 +919,7 @@ pg_GSS_recvauth(Port *port)\n>> \tint\t\t\tmtype;\n>> \tStringInfoData buf;\n>> \tgss_buffer_desc gbuf;\n>> +\tgss_cred_id_t delegated_creds;\n>> \n>> \t/*\n>> \t * Use the configured keytab, if there is one. Unfortunately, Heimdal\n> \n> Good catch. No, it doesn't. I'm not anxious to actually change that\n> code at this point but we could certainly consider changing it in the\n> future. I'll update this comment (and the identical one in\n> secure_open_gssapi) accordingly.\n\nSounds like a good plan.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 12 Apr 2023 18:37:26 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "On 4/12/23 12:22 PM, Stephen Frost wrote:\r\n> Greetings,\r\n> \r\n> * Tom Lane ([email protected]) wrote:\r\n>> Stephen Frost <[email protected]> writes:\r\n>>> Updated patch set attached.\r\n>>\r\n>> LGTM\r\n> \r\n> Great, thanks.\r\n> \r\n> I cleaned up the commit messages a bit more and added links to the\r\n> discussion. If there isn't anything more then I'll plan to push these\r\n> later today or tomorrow.\r\n\r\nGreat -- thanks for your attention to this. I'm glad we have an \r\nopportunity to de-revert (devert?).\r\n\r\nJonathan",
"msg_date": "Wed, 12 Apr 2023 12:39:12 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Jonathan S. Katz ([email protected]) wrote:\n> On 4/12/23 12:22 PM, Stephen Frost wrote:\n> > * Tom Lane ([email protected]) wrote:\n> > > Stephen Frost <[email protected]> writes:\n> > > > Updated patch set attached.\n> > > \n> > > LGTM\n> > \n> > Great, thanks.\n> > \n> > I cleaned up the commit messages a bit more and added links to the\n> > discussion. If there isn't anything more then I'll plan to push these\n> > later today or tomorrow.\n> \n> Great -- thanks for your attention to this. I'm glad we have an opportunity\n> to de-revert (devert?).\n\nPushed, thanks again to everyone.\n\nI'll monitor the buildfarm and assuming there isn't anything unexpected\nthen I'll mark the open item as resolved now.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 13 Apr 2023 08:58:59 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Pushed, thanks again to everyone.\n> I'll monitor the buildfarm and assuming there isn't anything unexpected\n> then I'll mark the open item as resolved now.\n\nThe Debian 7 (Wheezy) members of the buildfarm (lapwing, skate, snapper)\nare all getting past the gssapi_ext.h check you added and then failing\nlike this:\n\nccache gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -Werror -I../../../src/include -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/et -c -o be-gssapi-common.o be-gssapi-common.c\nbe-gssapi-common.c: In function 'pg_store_delegated_credential':\nbe-gssapi-common.c:110:2: error: unknown type name 'gss_key_value_element_desc'\nbe-gssapi-common.c:111:2: error: unknown type name 'gss_key_value_set_desc'\nbe-gssapi-common.c:113:4: error: request for member 'key' in something not a structure or union\nbe-gssapi-common.c:114:4: error: request for member 'value' in something not a structure or union\nbe-gssapi-common.c:115:7: error: request for member 'count' in something not a structure or union\nbe-gssapi-common.c:116:7: error: request for member 'elements' in something not a structure or union\nbe-gssapi-common.c:119:2: error: implicit declaration of function 'gss_store_cred_into' [-Werror=implicit-function-declaration]\n\nDebian 7 has been EOL five years or so, so I don't mind saying \"get a\nnewer OS or disable gssapi\". However, is it worth adding another\nconfigure check to fail a little faster with whatever Kerberos\nversion this is? Checking that gss_store_cred_into() exists\nseems like the most obvious one of these things to test for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Apr 2023 21:33:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Pushed, thanks again to everyone.\n> > I'll monitor the buildfarm and assuming there isn't anything unexpected\n> > then I'll mark the open item as resolved now.\n> \n> The Debian 7 (Wheezy) members of the buildfarm (lapwing, skate, snapper)\n> are all getting past the gssapi_ext.h check you added and then failing\n> like this:\n> \n> ccache gcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -Werror -I../../../src/include -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/et -c -o be-gssapi-common.o be-gssapi-common.c\n> be-gssapi-common.c: In function 'pg_store_delegated_credential':\n> be-gssapi-common.c:110:2: error: unknown type name 'gss_key_value_element_desc'\n> be-gssapi-common.c:111:2: error: unknown type name 'gss_key_value_set_desc'\n> be-gssapi-common.c:113:4: error: request for member 'key' in something not a structure or union\n> be-gssapi-common.c:114:4: error: request for member 'value' in something not a structure or union\n> be-gssapi-common.c:115:7: error: request for member 'count' in something not a structure or union\n> be-gssapi-common.c:116:7: error: request for member 'elements' in something not a structure or union\n> be-gssapi-common.c:119:2: error: implicit declaration of function 'gss_store_cred_into' [-Werror=implicit-function-declaration]\n> \n> Debian 7 has been EOL five years or so, so I don't mind saying \"get a\n> newer OS or disable gssapi\". However, is it worth adding another\n> configure check to fail a little faster with whatever Kerberos\n> version this is? Checking that gss_store_cred_into() exists\n> seems like the most obvious one of these things to test for.\n\nSure, I can certainly do that and agreed that it makes sense to check\nfor gss_store_cred_into().\n\nHow about the attached which just switches from testing for\ngss_init_sec_context to testing for gss_store_cred_into?\n\nThanks!\n\nStephen",
"msg_date": "Mon, 17 Apr 2023 08:57:07 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> How about the attached which just switches from testing for\n> gss_init_sec_context to testing for gss_store_cred_into?\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 09:37:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > How about the attached which just switches from testing for\n> > gss_init_sec_context to testing for gss_store_cred_into?\n> \n> WFM.\n\nDone that way.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 17 Apr 2023 09:51:54 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> Done that way.\n\nLooks like you neglected to update the configure script proper?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 10:01:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Done that way.\n> \n> Looks like you neglected to update the configure script proper?\n\nPah, indeed. Will fix. Sorry about that.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 17 Apr 2023 10:01:58 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost ([email protected]) wrote:\n> * Tom Lane ([email protected]) wrote:\n> > Stephen Frost <[email protected]> writes:\n> > > Done that way.\n> > \n> > Looks like you neglected to update the configure script proper?\n> \n> Pah, indeed. Will fix. Sorry about that.\n\nFixed.\n\nI'm guessing it's not really an issue but it does make changing\nconfigure a bit annoying on my Ubuntu 22.04, when I run autoconf2.69, I\nend up with this additional hunk as changed from what our configure\ncurrently has.\n\nThoughts?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 17 Apr 2023 10:17:23 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> I'm guessing it's not really an issue but it does make changing\n> configure a bit annoying on my Ubuntu 22.04, when I run autoconf2.69, I\n> end up with this additional hunk as changed from what our configure\n> currently has.\n\nNot surprising. Thanks to autoconf's long release cycles, individual\ndistros often are carrying local patches that affect its output.\nTo ensure consistent results across committers, our policy is that you\nshould use built-from-upstream-source autoconf not a vendor's version.\n(In principle that could bite us sometime, but it hasn't yet.)\n\nAlso, you should generally run autoheader after autoconf.\nChecking things here, I notice that pg_config.h.in hasn't been\nupdated for the last few gssapi-related commits:\n\n$ git diff\ndiff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in\nindex 3665e79..6d572c3 100644\n*** a/src/include/pg_config.h.in\n--- b/src/include/pg_config.h.in\n***************\n*** 196,201 ****\n--- 196,207 ----\n /* Define to 1 if you have the `getpeerucred' function. */\n #undef HAVE_GETPEERUCRED\n \n+ /* Define to 1 if you have the <gssapi_ext.h> header file. */\n+ #undef HAVE_GSSAPI_EXT_H\n+ \n+ /* Define to 1 if you have the <gssapi/gssapi_ext.h> header file. */\n+ #undef HAVE_GSSAPI_GSSAPI_EXT_H\n+ \n /* Define to 1 if you have the <gssapi/gssapi.h> header file. */\n #undef HAVE_GSSAPI_GSSAPI_H\n \nShall I push that, or do you want to?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 10:30:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: longfin missing gssapi_ext.h"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > I'm guessing it's not really an issue but it does make changing\n> > configure a bit annoying on my Ubuntu 22.04, when I run autoconf2.69, I\n> > end up with this additional hunk as changed from what our configure\n> > currently has.\n> \n> Not surprising. Thanks to autoconf's long release cycles, individual\n> distros often are carrying local patches that affect its output.\n> To ensure consistent results across committers, our policy is that you\n> should use built-from-upstream-source autoconf not a vendor's version.\n> (In principle that could bite us sometime, but it hasn't yet.)\n\n... making me more excited about the idea of getting over to meson.\n\n> Also, you should generally run autoheader after autoconf.\n> Checking things here, I notice that pg_config.h.in hasn't been\n> updated for the last few gssapi-related commits:\n> \n> $ git diff\n> diff --git a/src/include/pg_config.h.in b/src/include/pg_config.h.in\n> index 3665e79..6d572c3 100644\n> *** a/src/include/pg_config.h.in\n> --- b/src/include/pg_config.h.in\n> ***************\n> *** 196,201 ****\n> --- 196,207 ----\n> /* Define to 1 if you have the `getpeerucred' function. */\n> #undef HAVE_GETPEERUCRED\n> \n> + /* Define to 1 if you have the <gssapi_ext.h> header file. */\n> + #undef HAVE_GSSAPI_EXT_H\n> + \n> + /* Define to 1 if you have the <gssapi/gssapi_ext.h> header file. */\n> + #undef HAVE_GSSAPI_GSSAPI_EXT_H\n> + \n> /* Define to 1 if you have the <gssapi/gssapi.h> header file. */\n> #undef HAVE_GSSAPI_GSSAPI_H\n> \n> Shall I push that, or do you want to?\n\nHrmpf. Sorry about that. Please feel free and thanks for pointing it\nout.\n\nThanks again,\n\nStephen",
"msg_date": "Mon, 17 Apr 2023 11:07:37 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: longfin missing gssapi_ext.h"
}
] |
[
{
"msg_contents": "Hi\n\non fresh Fedora 38, I cannot to run regress tests\n\n+ERROR: could not load library\n\"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib/llvmjit.so\":\n/home/pavel/src/p\nostgresql.master/tmp_install/usr/local/pgsql/master/lib/llvmjit.so:\nundefined symbol: LLVMBuildGEP\n SELECT BOOLTBL1.*, BOOLTBL2.*\n FROM BOOLTBL1, BOOLTBL2\n WHERE boolne(BOOLTBL2.f1,BOOLTBL1.f1);\n\nThere is lot of compile warnings\n\nIn file included from llvmjit_expr.c:31:\n../../../../src/include/jit/llvmjit_emit.h: In function ‘l_load_struct_gep’:\n../../../../src/include/jit/llvmjit_emit.h:112:30: warning: implicit\ndeclaration of function ‘LLVMBuildStructGEP’; did you mean\n‘LLVMBuildStructGEP2’? [-Wimplicit-function-declaration]\n 112 | LLVMValueRef v_ptr = LLVMBuildStructGEP(b, v, idx, \"\");\n | ^~~~~~~~~~~~~~~~~~\n | LLVMBuildStructGEP2\n../../../../src/include/jit/llvmjit_emit.h:112:30: warning: initialization\nof ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer\nfrom integer without a cast [-Wint-conversion]\n../../../../src/include/jit/llvmjit_emit.h:114:16: warning: implicit\ndeclaration of function ‘LLVMBuildLoad’; did you mean ‘LLVMBuildLoad2’?\n[-Wimplicit-function-declaration]\n 114 | return LLVMBuildLoad(b, v_ptr, name);\n | ^~~~~~~~~~~~~\n | LLVMBuildLoad2\n../../../../src/include/jit/llvmjit_emit.h:114:16: warning: returning ‘int’\nfrom a function with return type ‘LLVMValueRef’ {aka ‘struct\nLLVMOpaqueValue *’} makes pointer from integer without a cast\n[-Wint-conversion]\n 114 | return LLVMBuildLoad(b, v_ptr, name);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../src/include/jit/llvmjit_emit.h: In function ‘l_load_gep1’:\n../../../../src/include/jit/llvmjit_emit.h:123:30: warning: implicit\ndeclaration of function ‘LLVMBuildGEP’; did you mean ‘LLVMBuildGEP2’?\n[-Wimplicit-function-declaration]\n 123 | LLVMValueRef v_ptr = LLVMBuildGEP(b, v, &idx, 1, \"\");\n | ^~~~~~~~~~~~\n | LLVMBuildGEP2\n../../../../src/include/jit/llvmjit_emit.h:123:30: warning: initialization\nof ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer\nfrom integer without a cast [-Wint-conversion]\n../../../../src/include/jit/llvmjit_emit.h:125:16: warning: returning ‘int’\nfrom a function with return type ‘LLVMValueRef’ {aka ‘struct\nLLVMOpaqueValue *’} makes pointer from integer without a cast\n[-Wint-conversion]\n\n\nRegards\n\nPavel\n\nHion fresh Fedora 38, I cannot to run regress tests+ERROR: could not load library \"/home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib/llvmjit.so\": /home/pavel/src/postgresql.master/tmp_install/usr/local/pgsql/master/lib/llvmjit.so: undefined symbol: LLVMBuildGEP SELECT BOOLTBL1.*, BOOLTBL2.* FROM BOOLTBL1, BOOLTBL2 WHERE boolne(BOOLTBL2.f1,BOOLTBL1.f1);There is lot of compile warningsIn file included from llvmjit_expr.c:31:../../../../src/include/jit/llvmjit_emit.h: In function ‘l_load_struct_gep’:../../../../src/include/jit/llvmjit_emit.h:112:30: warning: implicit declaration of function ‘LLVMBuildStructGEP’; did you mean ‘LLVMBuildStructGEP2’? 
[-Wimplicit-function-declaration] 112 | LLVMValueRef v_ptr = LLVMBuildStructGEP(b, v, idx, \"\"); | ^~~~~~~~~~~~~~~~~~ | LLVMBuildStructGEP2../../../../src/include/jit/llvmjit_emit.h:112:30: warning: initialization of ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]../../../../src/include/jit/llvmjit_emit.h:114:16: warning: implicit declaration of function ‘LLVMBuildLoad’; did you mean ‘LLVMBuildLoad2’? [-Wimplicit-function-declaration] 114 | return LLVMBuildLoad(b, v_ptr, name); | ^~~~~~~~~~~~~ | LLVMBuildLoad2../../../../src/include/jit/llvmjit_emit.h:114:16: warning: returning ‘int’ from a function with return type ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} makes pointer from integer without a cast [-Wint-conversion] 114 | return LLVMBuildLoad(b, v_ptr, name); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~../../../../src/include/jit/llvmjit_emit.h: In function ‘l_load_gep1’:../../../../src/include/jit/llvmjit_emit.h:123:30: warning: implicit declaration of function ‘LLVMBuildGEP’; did you mean ‘LLVMBuildGEP2’? [-Wimplicit-function-declaration] 123 | LLVMValueRef v_ptr = LLVMBuildGEP(b, v, &idx, 1, \"\"); | ^~~~~~~~~~~~ | LLVMBuildGEP2../../../../src/include/jit/llvmjit_emit.h:123:30: warning: initialization of ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} from ‘int’ makes pointer from integer without a cast [-Wint-conversion]../../../../src/include/jit/llvmjit_emit.h:125:16: warning: returning ‘int’ from a function with return type ‘LLVMValueRef’ {aka ‘struct LLVMOpaqueValue *’} makes pointer from integer without a cast [-Wint-conversion]RegardsPavel",
"msg_date": "Sat, 8 Apr 2023 10:03:49 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "broken master branch"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 8:04 PM Pavel Stehule <[email protected]> wrote:\n> on fresh Fedora 38, I cannot to run regress tests\n\nLooks like the new LLVM 16. I'll try to look at this again next week.\nIn the meantime you could try using 15.\n\n\n",
"msg_date": "Sat, 8 Apr 2023 20:38:00 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master branch"
},
{
"msg_contents": "so 8. 4. 2023 v 10:38 odesílatel Thomas Munro <[email protected]>\nnapsal:\n\n> On Sat, Apr 8, 2023 at 8:04 PM Pavel Stehule <[email protected]>\n> wrote:\n> > on fresh Fedora 38, I cannot to run regress tests\n>\n> Looks like the new LLVM 16. I'll try to look at this again next week.\n> In the meantime you could try using 15.\n>\n\nok\n\nThank you for info\n\nPavel\n\nso 8. 4. 2023 v 10:38 odesílatel Thomas Munro <[email protected]> napsal:On Sat, Apr 8, 2023 at 8:04 PM Pavel Stehule <[email protected]> wrote:\n> on fresh Fedora 38, I cannot to run regress tests\n\nLooks like the new LLVM 16. I'll try to look at this again next week.\nIn the meantime you could try using 15.okThank you for infoPavel",
"msg_date": "Sat, 8 Apr 2023 11:13:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: broken master branch"
},
{
"msg_contents": "Thomas Munro <[email protected]> writes:\n> On Sat, Apr 8, 2023 at 8:04 PM Pavel Stehule <[email protected]> wrote:\n>> on fresh Fedora 38, I cannot to run regress tests\n\n> Looks like the new LLVM 16. I'll try to look at this again next week.\n> In the meantime you could try using 15.\n\nI've become entirely desensitized to seawasp failing, which is probably\na bad thing, but today I happened to look at it and discovered that\nits compiler has been dumping core for some time now:\n\nclang: /home/fabien/llvm-src/llvm/lib/Transforms/Scalar/SROA.cpp:1745: llvm::Value* getAdjustedPtr({anonymous}::IRBuilderTy&, const llvm::DataLayout&, llvm::Value*, llvm::APInt, llvm::Type*, const llvm::Twine&): Assertion `Ptr->getType()->isOpaquePointerTy() && \"Only opaque pointers supported\"' failed.\nPLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace, preprocessed source, and associated run script.\nStack dump:\n0.\tProgram arguments: /home/fabien/clgtk/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Xclang -no-opaque-pointers -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -O2 -I../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -flto=thin -emit-llvm -c -o strftime.bc strftime.c\n1.\t<eof> parser at end of file\n2.\tOptimizer\n\nSeems like we ought to look into that, and report it as requested.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 11:04:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: broken master branch"
}
] |
[
{
"msg_contents": "TAP test for logical decoding on standby\n\nAuthor: \"Drouvot, Bertrand\" <[email protected]>\nAuthor: Amit Khandekar <[email protected]>\nAuthor: Craig Ringer <[email protected]> (in an older version)\nAuthor: Andres Freund <[email protected]>\nReviewed-by: \"Drouvot, Bertrand\" <[email protected]>\nReviewed-by: Andres Freund <[email protected]>\nReviewed-by: Robert Haas <[email protected]>\nReviewed-by: Amit Kapila <[email protected]>\nReviewed-by: Fabrízio de Royes Mello <[email protected]>\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/fcd77d53217b4c4049d176072a1763d6e11ca478\n\nModified Files\n--------------\nsrc/test/perl/PostgreSQL/Test/Cluster.pm | 37 ++\nsrc/test/recovery/meson.build | 1 +\n.../recovery/t/035_standby_logical_decoding.pl | 734 +++++++++++++++++++++\n3 files changed, 772 insertions(+)",
"msg_date": "Sat, 08 Apr 2023 09:26:51 +0000",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: TAP test for logical decoding on standby"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 5:26 AM Andres Freund <[email protected]> wrote:\n> TAP test for logical decoding on standby\n\nSmall nitpicks:\n\n1. The test names generated by check_slots_conflicting_status() start\nwith a capital letter, while most other test names start with a\nlower-case letter.\n\n2. The function is called 7 times, 6 with a true argument and 1 with a\nfalse argument, but the test name only depends on whether the argument\nis true or false, so we get the same test name 6 times. Maybe there's\nnot a reasonable way to do better, I'm not sure, but it's not ideal.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 May 2023 11:15:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: TAP test for logical decoding on standby"
},
{
"msg_contents": "Hi,\n\nOn 5/23/23 5:15 PM, Robert Haas wrote:\n> On Sat, Apr 8, 2023 at 5:26 AM Andres Freund <[email protected]> wrote:\n>> TAP test for logical decoding on standby\n> \n> Small nitpicks:\n> \n> 1. The test names generated by check_slots_conflicting_status() start\n> with a capital letter, while most other test names start with a\n> lower-case letter.\n> \n\nYeah, not sure that would deserve a fix for its own but if we address 2.\nthen let's do 1. too.\n\n> 2. The function is called 7 times, 6 with a true argument and 1 with a\n> false argument, but the test name only depends on whether the argument\n> is true or false, so we get the same test name 6 times. Maybe there's\n> not a reasonable way to do better, I'm not sure, but it's not ideal.\n> \n\nI agree that's not ideal (but one could still figure out which one is\nfailing if any by looking at the perl script).\n\nIf we want to \"improve\" this, what about passing a second argument that\nwould provide more context in the test name?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 24 May 2023 13:58:54 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: TAP test for logical decoding on standby"
}
] |
[
{
"msg_contents": "Hi,\n\nI'd planned to write this soon anyway, but it was just brought up in [1].\n\nOriginally we had planned to drop src/tools/msvc support shortly after meson\nwent in. Unfortunately, it took a bit longer than originally hoped for to\nmerge meson support and then longer than hoped to add buildfarm support. I\ndon't think there's been any buildfarm client release with meson support yet -\nbut we do have windows buildfarm animals using it.\n\nDo we want to drop src/tools/msvc support in 16 (i.e. now), or do it early in\n17?\n\nI do have a set of patches removing src/tools/msvc. There are a few loose ends\nthat I know of (my eyes glaze over every time I try to reconcile the\nsrc/tools/perl.m4 comments with the referenced comments in Mkvcbuild.pm), and\nprobably more small references that grep terms didn't find.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/3598664.1680976414%40sss.pgh.pa.us\n\n\n",
"msg_date": "Sat, 8 Apr 2023 12:10:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "When to drop src/tools/msvc support"
},
{
"msg_contents": "On 4/8/23 3:10 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> I'd planned to write this soon anyway, but it was just brought up in [1].\r\n> \r\n> Originally we had planned to drop src/tools/msvc support shortly after meson\r\n> went in. Unfortunately, it took a bit longer than originally hoped for to\r\n> merge meson support and then longer than hoped to add buildfarm support. I\r\n> don't think there's been any buildfarm client release with meson support yet -\r\n> but we do have windows buildfarm animals using it.\r\n> \r\n> Do we want to drop src/tools/msvc support in 16 (i.e. now), or do it early in\r\n> 17?\r\n> \r\n> I do have a set of patches removing src/tools/msvc. There are a few loose ends\r\n> that I know of (my eyes glaze over every time I try to reconcile the\r\n> src/tools/perl.m4 comments with the referenced comments in Mkvcbuild.pm), and\r\n> probably more small references that grep terms didn't find.\r\n\r\n(reads [1])\r\n\r\nCan we treat this as an \"open item\" for completing the transition to \r\nmeson for builds as part of v16?\r\n\r\nWith my personal hat on, it seems silly to wait until v17 to do this, \r\nand I don't see why we would want to wait. If there's limited risk in \r\ndoing this and it'll make our builds both more stable + faster, it seems \r\nlike we should just do it.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 8 Apr 2023 15:19:20 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Do we want to drop src/tools/msvc support in 16 (i.e. now), or do it early in\n> 17?\n\nOn the one hand, it feels like something we shouldn't do after\nfeature freeze. On the other hand, continuing to maintain three\nbuild systems is a real drag (although you could argue that there\nshouldn't be much churn there until the tree opens for 17).\n\nWe clearly can't consider it in any case until the buildfarm\nis prepared, with all the Windows animals updated to a compatible\nclient script. I don't know what timeline Andrew has in mind\nfor that.\n\nI guess I'd vote for pulling the trigger in v16 if we can get that\ndone by the end of April. Once we're close to beta I think it\nmust wait for v17 to open.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 15:30:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 3:30 PM Tom Lane <[email protected]> wrote:\n> I guess I'd vote for pulling the trigger in v16 if we can get that\n> done by the end of April. Once we're close to beta I think it\n> must wait for v17 to open.\n\nI think that sounds reasonable. It would be to the project's advantage\nnot to have to maintain three build systems for an extra year, but we\ncan't still be whacking things around right up until the moment we\nexpect to ship a beta.\n\nHowever, if this is the direction we're going, we probably need to\ngive pgsql-packagers a heads up ASAP, because anybody who is still\nrelying on the MSVC system to build Windows binaries is presumably\ngoing to need some time to adjust. If we rip out the build system\nsomebody is using a couple of weeks before beta, that might make it\ndifficult for that person to get the beta out promptly. And I think\nthere's probably more than just EDB who would be in that situation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Apr 2023 12:34:47 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> However, if this is the direction we're going, we probably need to\n> give pgsql-packagers a heads up ASAP, because anybody who is still\n> relying on the MSVC system to build Windows binaries is presumably\n> going to need some time to adjust. If we rip out the build system\n> somebody is using a couple of weeks before beta, that might make it\n> difficult for that person to get the beta out promptly. And I think\n> there's probably more than just EDB who would be in that situation.\n\nOh ... that's a good point. Is there anyone besides EDB shipping\nMSVC-built executables? Would it even be practical to switch to\nmeson with a month-or-so notice? Seems kind of tight, and it's\nnot like the packagers volunteered to make this switch.\n\nMaybe we have to bite the bullet and keep MSVC for v16.\nIf we drop it as soon as v17 opens, there's probably not that\nmuch incremental work involved compared to dropping for v16.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Apr 2023 12:56:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 12:56 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > However, if this is the direction we're going, we probably need to\n> > give pgsql-packagers a heads up ASAP, because anybody who is still\n> > relying on the MSVC system to build Windows binaries is presumably\n> > going to need some time to adjust. If we rip out the build system\n> > somebody is using a couple of weeks before beta, that might make it\n> > difficult for that person to get the beta out promptly. And I think\n> > there's probably more than just EDB who would be in that situation.\n>\n> Oh ... that's a good point. Is there anyone besides EDB shipping\n> MSVC-built executables? Would it even be practical to switch to\n> meson with a month-or-so notice? Seems kind of tight, and it's\n> not like the packagers volunteered to make this switch.\n\nI can't really speak to those questions with confidence.\n\nPerhaps instead of telling pgsql-packagers what we're doing, we could\ninstead ask them if it would work for them if we did XYZ. Then we\ncould use that information to inform our decision-making.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Apr 2023 13:34:35 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Mon, 10 Apr 2023 at 18:34, Robert Haas <[email protected]> wrote:\n\n> On Mon, Apr 10, 2023 at 12:56 PM Tom Lane <[email protected]> wrote:\n> > Robert Haas <[email protected]> writes:\n> > > However, if this is the direction we're going, we probably need to\n> > > give pgsql-packagers a heads up ASAP, because anybody who is still\n> > > relying on the MSVC system to build Windows binaries is presumably\n> > > going to need some time to adjust. If we rip out the build system\n> > > somebody is using a couple of weeks before beta, that might make it\n> > > difficult for that person to get the beta out promptly. And I think\n> > > there's probably more than just EDB who would be in that situation.\n> >\n> > Oh ... that's a good point. Is there anyone besides EDB shipping\n> > MSVC-built executables? Would it even be practical to switch to\n> > meson with a month-or-so notice? Seems kind of tight, and it's\n> > not like the packagers volunteered to make this switch.\n>\n> I can't really speak to those questions with confidence.\n>\n> Perhaps instead of telling pgsql-packagers what we're doing, we could\n> instead ask them if it would work for them if we did XYZ. Then we\n> could use that information to inform our decision-making.\n\n\nProjects other than the EDB installers use the MSVC build system - e.g.\npgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump etc)\nthat are pretty heavily baked into a fully automated build system (even the\nbuild servers and all their requirements are baked into Ansible).\n\nChanging that lot would be non-trivial, though certainly possible, and I\nsuspect we’re not the only ones doing that sort of thing.\n\n-- \n-- \nDave Page\nhttps://pgsnake.blogspot.com\n\nEDB Postgres\nhttps://www.enterprisedb.com\n\nOn Mon, 10 Apr 2023 at 18:34, Robert Haas <[email protected]> wrote:On Mon, Apr 10, 2023 at 12:56 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > However, if this is the direction we're going, we probably need to\n> > give pgsql-packagers a heads up ASAP, because anybody who is still\n> > relying on the MSVC system to build Windows binaries is presumably\n> > going to need some time to adjust. If we rip out the build system\n> > somebody is using a couple of weeks before beta, that might make it\n> > difficult for that person to get the beta out promptly. And I think\n> > there's probably more than just EDB who would be in that situation.\n>\n> Oh ... that's a good point. Is there anyone besides EDB shipping\n> MSVC-built executables? Would it even be practical to switch to\n> meson with a month-or-so notice? Seems kind of tight, and it's\n> not like the packagers volunteered to make this switch.\n\nI can't really speak to those questions with confidence.\n\nPerhaps instead of telling pgsql-packagers what we're doing, we could\ninstead ask them if it would work for them if we did XYZ. Then we\ncould use that information to inform our decision-making.Projects other than the EDB installers use the MSVC build system - e.g. pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump etc) that are pretty heavily baked into a fully automated build system (even the build servers and all their requirements are baked into Ansible). Changing that lot would be non-trivial, though certainly possible, and I suspect we’re not the only ones doing that sort of thing.-- -- Dave Pagehttps://pgsnake.blogspot.comEDB Postgreshttps://www.enterprisedb.com",
"msg_date": "Mon, 10 Apr 2023 19:55:35 +0100",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 6:56 PM Tom Lane <[email protected]> wrote:\n>\n> Robert Haas <[email protected]> writes:\n> > However, if this is the direction we're going, we probably need to\n> > give pgsql-packagers a heads up ASAP, because anybody who is still\n> > relying on the MSVC system to build Windows binaries is presumably\n> > going to need some time to adjust. If we rip out the build system\n> > somebody is using a couple of weeks before beta, that might make it\n> > difficult for that person to get the beta out promptly. And I think\n> > there's probably more than just EDB who would be in that situation.\n>\n> Oh ... that's a good point. Is there anyone besides EDB shipping\n> MSVC-built executables? Would it even be practical to switch to\n> meson with a month-or-so notice? Seems kind of tight, and it's\n> not like the packagers volunteered to make this switch.\n>\n> Maybe we have to bite the bullet and keep MSVC for v16.\n> If we drop it as soon as v17 opens, there's probably not that\n> much incremental work involved compared to dropping for v16.\n\nNot involved with any such build tasks anymore, but I think we can\ndefinitely assume there are others than EDB who do that. It's also\nused by people who build add-on modules to be loaded in the\nEDB-installer-installed systems, I'm sure.\n\nIt seems a bit aggressive to those to drop the entire build system out\njust before beta.\n\nThus, +1 on actually keeping it up and dropping it immediately as v17\nopens, giving them a year of advantage. And probably updating the docs\n(if anybody were to read them.. but at least then we tried) stating\nthat it's deprecated and will be removed in v17.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 10 Apr 2023 22:39:28 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n> Thus, +1 on actually keeping it up and dropping it immediately as v17\n> opens, giving them a year of advantage. And probably updating the docs\n> (if anybody were to read them.. but at least then we tried) stating\n> that it's deprecated and will be removed in v17.\n\nYeah, I think that's the only feasible answer at this point.\nMaybe a month or two back we could have done differently,\nbut there's not a lot of runway now.\n\nOnce we do drop src/tools/msvc from HEAD, we should make a point\nof reminding -packagers about it, in hopes that they'll work on\nthe transition sooner than next May.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Apr 2023 16:50:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-10 19:55:35 +0100, Dave Page wrote:\n> Projects other than the EDB installers use the MSVC build system - e.g.\n> pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump etc)\n> that are pretty heavily baked into a fully automated build system (even the\n> build servers and all their requirements are baked into Ansible).\n> \n> Changing that lot would be non-trivial, though certainly possible, and I\n> suspect we’re not the only ones doing that sort of thing.\n\nDo you have a link to the code for that, if it's open? Just to get an\nimpression for how hard it'd be to switch over?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Apr 2023 15:27:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-10 16:50:20 -0400, Tom Lane wrote:\n> Yeah, I think that's the only feasible answer at this point.\n> Maybe a month or two back we could have done differently,\n> but there's not a lot of runway now.\n> \n> Once we do drop src/tools/msvc from HEAD, we should make a point\n> of reminding -packagers about it, in hopes that they'll work on\n> the transition sooner than next May.\n\nSo the plan is:\n\n- add note to docs in HEAD that the src/tools/msvc style of build is\n deprecated\n- give -packagers a HEADS up, once the deprecation notice has been added to\n the docs\n- have a patch ready to drop src/tools/msvc from HEAD once 16 has branched off\n (i.e. a polished version of what I posted upthread)\n\nOn IM Thomas made some point about CI - I wonder if we should add building 16\nwith src/tools/msvc as an optional CI task? We can't enable it by default\n(yet), because we'd not have enough resources to also run that for cfbot. Once\n16 forked, we then could set to run automatically in the 16 branch, as cfbot\nwon't run those.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Apr 2023 15:32:19 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 03:32:19PM -0700, Andres Freund wrote:\n> On IM Thomas made some point about CI - I wonder if we should add building 16\n> with src/tools/msvc as an optional CI task? We can't enable it by default\n> (yet), because we'd not have enough resources to also run that for cfbot. Once\n> 16 forked, we then could set to run automatically in the 16 branch, as cfbot\n> won't run those.\n\nGetting a CI job able to do some validation for MSVC would be indeed\nnice. What's the plan in the buildfarm with this coverage? Would all\nthe animals switch to meson (Chocolatey + StrawberryPerl, I assume)\nfor the job or will there still be some coverage with MSVC for v16\nthere?\n--\nMichael",
"msg_date": "Tue, 11 Apr 2023 08:52:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On 4/10/23 4:50 PM, Tom Lane wrote:\r\n> Magnus Hagander <[email protected]> writes:\r\n>> Thus, +1 on actually keeping it up and dropping it immediately as v17\r\n>> opens, giving them a year of advantage. And probably updating the docs\r\n>> (if anybody were to read them.. but at least then we tried) stating\r\n>> that it's deprecated and will be removed in v17.\r\n> \r\n> Yeah, I think that's the only feasible answer at this point.\r\n> Maybe a month or two back we could have done differently,\r\n> but there's not a lot of runway now.\r\n> \r\n> Once we do drop src/tools/msvc from HEAD, we should make a point\r\n> of reminding -packagers about it, in hopes that they'll work on\r\n> the transition sooner than next May.\r\n\r\n[personal opinion, not RMT]\r\n\r\nThe last point would be my reasoning for \"why not now\" given deadlines \r\nare a pretty good motivator to get things done.\r\n\r\nThat said, if the plan is to do this \"shortly thereafter\" for v17 and it \r\nmakes the transition easier, I'm all for that.\r\n\r\nJonathan",
"msg_date": "Mon, 10 Apr 2023 19:53:15 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 12:27 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-04-10 19:55:35 +0100, Dave Page wrote:\n> > Projects other than the EDB installers use the MSVC build system - e.g.\n> > pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump etc)\n> > that are pretty heavily baked into a fully automated build system (even the\n> > build servers and all their requirements are baked into Ansible).\n> >\n> > Changing that lot would be non-trivial, though certainly possible, and I\n> > suspect we’re not the only ones doing that sort of thing.\n>\n> Do you have a link to the code for that, if it's open? Just to get an\n> impression for how hard it'd be to switch over?\n\n\nThe pgadmin docs/readme refers to\nhttps://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32\n\nIt clearly doesn't have the full automation stuff, but appears to have\nthe parts about building the postgres dependency.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 11 Apr 2023 09:09:41 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Tue, 11 Apr 2023 at 08:09, Magnus Hagander <[email protected]> wrote:\n\n> On Tue, Apr 11, 2023 at 12:27 AM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-04-10 19:55:35 +0100, Dave Page wrote:\n> > > Projects other than the EDB installers use the MSVC build system - e.g.\n> > > pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump\n> etc)\n> > > that are pretty heavily baked into a fully automated build system\n> (even the\n> > > build servers and all their requirements are baked into Ansible).\n> > >\n> > > Changing that lot would be non-trivial, though certainly possible, and\n> I\n> > > suspect we’re not the only ones doing that sort of thing.\n> >\n> > Do you have a link to the code for that, if it's open? Just to get an\n> > impression for how hard it'd be to switch over?\n>\n>\n> The pgadmin docs/readme refers to\n> https://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32\n>\n> It clearly doesn't have the full automation stuff, but appears to have\n> the parts about building the postgres dependency.\n>\n\nYeah, that's essentially the manual process, though I haven't tested it in\na while. The Ansible stuff is not currently public. I suspect (or rather,\nhope) that we can pull in all the additional packages required using\nChocolatey which shouldn't be too onerous.\n\nProbably my main concern is that the Meson build can use the same version\nof the VC++ compiler that we use (v14), which is carefully matched for\ncompatibility with all the various components, just in case anything passes\nCRT pointers around. Python is the one thing we don't build ourselves on\nWindows and the process will build modules like gssapi and psycopg (which\nlinks with libpq of course), so we're basically following what they use.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Tue, 11 Apr 2023 at 08:09, Magnus Hagander <[email protected]> wrote:On Tue, Apr 11, 2023 at 12:27 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-04-10 19:55:35 +0100, Dave Page wrote:\n> > Projects other than the EDB installers use the MSVC build system - e.g.\n> > pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump etc)\n> > that are pretty heavily baked into a fully automated build system (even the\n> > build servers and all their requirements are baked into Ansible).\n> >\n> > Changing that lot would be non-trivial, though certainly possible, and I\n> > suspect we’re not the only ones doing that sort of thing.\n>\n> Do you have a link to the code for that, if it's open? Just to get an\n> impression for how hard it'd be to switch over?\n\n\nThe pgadmin docs/readme refers to\nhttps://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32\n\nIt clearly doesn't have the full automation stuff, but appears to have\nthe parts about building the postgres dependency.Yeah, that's essentially the manual process, though I haven't tested it in a while. The Ansible stuff is not currently public. I suspect (or rather, hope) that we can pull in all the additional packages required using Chocolatey which shouldn't be too onerous.Probably my main concern is that the Meson build can use the same version of the VC++ compiler that we use (v14), which is carefully matched for compatibility with all the various components, just in case anything passes CRT pointers around. 
Python is the one thing we don't build ourselves on Windows and the process will build modules like gssapi and psycopg (which links with libpq of course), so we're basically following what they use.-- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 11 Apr 2023 09:05:31 +0100",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On 2023-04-11 Tu 04:05, Dave Page wrote:\n>\n>\n> On Tue, 11 Apr 2023 at 08:09, Magnus Hagander <[email protected]> wrote:\n>\n> On Tue, Apr 11, 2023 at 12:27 AM Andres Freund\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-04-10 19:55:35 +0100, Dave Page wrote:\n> > > Projects other than the EDB installers use the MSVC build\n> system - e.g.\n> > > pgAdmin uses it’s own builds of libpq and other tools (psql,\n> pg_dump etc)\n> > > that are pretty heavily baked into a fully automated build\n> system (even the\n> > > build servers and all their requirements are baked into Ansible).\n> > >\n> > > Changing that lot would be non-trivial, though certainly\n> possible, and I\n> > > suspect we’re not the only ones doing that sort of thing.\n> >\n> > Do you have a link to the code for that, if it's open? Just to\n> get an\n> > impression for how hard it'd be to switch over?\n>\n>\n> The pgadmin docs/readme refers to\n> https://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32\n>\n> It clearly doesn't have the full automation stuff, but appears to have\n> the parts about building the postgres dependency.\n>\n>\n> Yeah, that's essentially the manual process, though I haven't tested \n> it in a while. The Ansible stuff is not currently public. I suspect \n> (or rather, hope) that we can pull in all the additional packages \n> required using Chocolatey which shouldn't be too onerous.\n>\n> Probably my main concern is that the Meson build can use the same \n> version of the VC++ compiler that we use (v14), which is carefully \n> matched for compatibility with all the various components, just in \n> case anything passes CRT pointers around. Python is the one thing we \n> don't build ourselves on Windows and the process will build modules \n> like gssapi and psycopg (which links with libpq of course), so we're \n> basically following what they use.\n>\n>\n\nFor meson you just need to to \"pip install meson ninja\" in your python \ndistro and you should be good to go (they will be installed in python's \nScripts directory). 
Don't use chocolatey to install meson/ninja - I ran \ninto issues doing that.\n\nAFAICT meson will use whatever version of VC you have installed, \nalthough I have only been testing with VC2019.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Tue, 11 Apr 2023 06:58:43 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2023-04-11 Tu 04:05, Dave Page wrote:\n>\n>\n>\n> On Tue, 11 Apr 2023 at 08:09, Magnus Hagander <[email protected]> wrote:\n>\n>> On Tue, Apr 11, 2023 at 12:27 AM Andres Freund <[email protected]>\n>> wrote:\n>> >\n>> > Hi,\n>> >\n>> > On 2023-04-10 19:55:35 +0100, Dave Page wrote:\n>> > > Projects other than the EDB installers use the MSVC build system -\n>> e.g.\n>> > > pgAdmin uses it’s own builds of libpq and other tools (psql, pg_dump\n>> etc)\n>> > > that are pretty heavily baked into a fully automated build system\n>> (even the\n>> > > build servers and all their requirements are baked into Ansible).\n>> > >\n>> > > Changing that lot would be non-trivial, though certainly possible,\n>> and I\n>> > > suspect we’re not the only ones doing that sort of thing.\n>> >\n>> > Do you have a link to the code for that, if it's open? Just to get an\n>> > impression for how hard it'd be to switch over?\n>>\n>>\n>> The pgadmin docs/readme refers to\n>> https://github.com/pgadmin-org/pgadmin4/tree/master/pkg/win32\n>>\n>> It clearly doesn't have the full automation stuff, but appears to have\n>> the parts about building the postgres dependency.\n>>\n>\n> Yeah, that's essentially the manual process, though I haven't tested it in\n> a while. The Ansible stuff is not currently public. I suspect (or rather,\n> hope) that we can pull in all the additional packages required using\n> Chocolatey which shouldn't be too onerous.\n>\n> Probably my main concern is that the Meson build can use the same version\n> of the VC++ compiler that we use (v14), which is carefully matched for\n> compatibility with all the various components, just in case anything passes\n> CRT pointers around. Python is the one thing we don't build ourselves on\n> Windows and the process will build modules like gssapi and psycopg (which\n> links with libpq of course), so we're basically following what they use.\n>\n>\n>\n> For meson you just need to to \"pip install meson ninja\" in your python\n> distro and you should be good to go (they will be installed in python's\n> Scripts directory). 
Don't use chocolatey to install meson/ninja - I ran\n> into issues doing that.\n>\n> AFAICT meson will use whatever version of VC you have installed, although\n> I have only been testing with VC2019.\n>\nOK, that sounds easy enough then (famous last words!)\n\nThanks!\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 11 Apr 2023 12:54:42 +0100",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On 4/11/23 7:54 AM, Dave Page wrote:\r\n> \r\n> \r\n> On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan <[email protected] \r\n> <mailto:[email protected]>> wrote:\r\n> \r\n> For meson you just need to to \"pip install meson ninja\" in your\r\n> python distro and you should be good to go (they will be installed\r\n> in python's Scripts directory). Don't use chocolatey to install\r\n> meson/ninja - I ran into issues doing that.\r\n> \r\n> AFAICT meson will use whatever version of VC you have installed,\r\n> although I have only been testing with VC2019.\r\n> \r\n> OK, that sounds easy enough then (famous last words!)\r\n\r\n[RMT hat]\r\n\r\nDave -- does this mean you see a way forward on moving the Windows \r\nbuilds over to use Meson instead of MSVC?\r\n\r\nDo you think we'll have enough info by end of this week to make a \r\ndecision on whether we can drop MSVC in v16?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 11 Apr 2023 08:52:08 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On Tue, 11 Apr 2023 at 13:52, Jonathan S. Katz <[email protected]> wrote:\n\n> On 4/11/23 7:54 AM, Dave Page wrote:\n> >\n> >\n> > On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > For meson you just need to to \"pip install meson ninja\" in your\n> > python distro and you should be good to go (they will be installed\n> > in python's Scripts directory). Don't use chocolatey to install\n> > meson/ninja - I ran into issues doing that.\n> >\n> > AFAICT meson will use whatever version of VC you have installed,\n> > although I have only been testing with VC2019.\n> >\n> > OK, that sounds easy enough then (famous last words!)\n>\n> [RMT hat]\n>\n> Dave -- does this mean you see a way forward on moving the Windows\n> builds over to use Meson instead of MSVC?\n>\n\nI can see a way forward, yes.\n\n\n>\n> Do you think we'll have enough info by end of this week to make a\n> decision on whether we can drop MSVC in v16?\n>\n\nThere's no way I can test anything this week - I'm on leave for most of it\nand AFK.\n\nBut, my point was more that there are almost certainly more projects using\nthe MSVC build system than the EDB installers; pgAdmin being just one\nexample.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Tue, 11 Apr 2023 at 13:52, Jonathan S. Katz <[email protected]> wrote:On 4/11/23 7:54 AM, Dave Page wrote:\n> \n> \n> On Tue, 11 Apr 2023 at 11:58, Andrew Dunstan <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> For meson you just need to to \"pip install meson ninja\" in your\n> python distro and you should be good to go (they will be installed\n> in python's Scripts directory). Don't use chocolatey to install\n> meson/ninja - I ran into issues doing that.\n> \n> AFAICT meson will use whatever version of VC you have installed,\n> although I have only been testing with VC2019.\n> \n> OK, that sounds easy enough then (famous last words!)\n\n[RMT hat]\n\nDave -- does this mean you see a way forward on moving the Windows \nbuilds over to use Meson instead of MSVC?I can see a way forward, yes. \n\nDo you think we'll have enough info by end of this week to make a \ndecision on whether we can drop MSVC in v16?There's no way I can test anything this week - I'm on leave for most of it and AFK.But, my point was more that there are almost certainly more projects using the MSVC build system than the EDB installers; pgAdmin being just one example. -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 11 Apr 2023 14:19:34 +0100",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Dave Page <[email protected]> writes:\n> On Tue, 11 Apr 2023 at 13:52, Jonathan S. Katz <[email protected]> wrote:\n>> Do you think we'll have enough info by end of this week to make a\n>> decision on whether we can drop MSVC in v16?\n\n> There's no way I can test anything this week - I'm on leave for most of it\n> and AFK.\n> But, my point was more that there are almost certainly more projects using\n> the MSVC build system than the EDB installers; pgAdmin being just one\n> example.\n\nYeah. Even if EDB can manage this, we're talking about going from\n\"src/tools/msvc is the only option\" in v15 to \"meson is the only\noption\" in v16. That seems pretty abrupt. Notably, it's assuming\nthat there are no big problems in the meson build system that will\ntake awhile to fix once discovered by users. That's a large\nassumption for code that hasn't even reached beta yet.\n\nSadly, I think we really have to ship both build systems in v16.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 09:49:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On 4/11/23 9:49 AM, Tom Lane wrote:\r\n> Dave Page <[email protected]> writes:\r\n>> On Tue, 11 Apr 2023 at 13:52, Jonathan S. Katz <[email protected]> wrote:\r\n>>> Do you think we'll have enough info by end of this week to make a\r\n>>> decision on whether we can drop MSVC in v16?\r\n> \r\n>> There's no way I can test anything this week - I'm on leave for most of it\r\n>> and AFK.\r\n>> But, my point was more that there are almost certainly more projects using\r\n>> the MSVC build system than the EDB installers; pgAdmin being just one\r\n>> example.\r\n> \r\n> Yeah. Even if EDB can manage this, we're talking about going from\r\n> \"src/tools/msvc is the only option\" in v15 to \"meson is the only\r\n> option\" in v16. That seems pretty abrupt. Notably, it's assuming\r\n> that there are no big problems in the meson build system that will\r\n> take awhile to fix once discovered by users. That's a large\r\n> assumption for code that hasn't even reached beta yet.\r\n[Personal hat]\r\n\r\nWe'll probably see some of this for non-Windows builds, too. Granted, \r\nautoconf still seems to work, at least based on my tests.\r\n\r\n> Sadly, I think we really have to ship both build systems in v16.\r\n\r\n[Personal hat]\r\n\r\nI've come to this conclusion, too -- it does mean 5 more years of \r\nsupporting it.\r\n\r\nBut maybe we can make it clear in the release notes + docs that this is \r\nslated for deprecation and will be removed from v17? That way we can say \r\n\"we provided ample warning to move to the new build system.\"\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 11 Apr 2023 10:03:04 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "\"Jonathan S. Katz\" <[email protected]> writes:\n> On 4/11/23 9:49 AM, Tom Lane wrote:\n>> Sadly, I think we really have to ship both build systems in v16.\n\n> But maybe we can make it clear in the release notes + docs that this is \n> slated for deprecation and will be removed from v17? That way we can say \n> \"we provided ample warning to move to the new build system.\"\n\nYes, we absolutely should do that, as already discussed upthread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 10:12:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On 4/11/23 10:12 AM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <[email protected]> writes:\r\n>> On 4/11/23 9:49 AM, Tom Lane wrote:\r\n>>> Sadly, I think we really have to ship both build systems in v16.\r\n> \r\n>> But maybe we can make it clear in the release notes + docs that this is\r\n>> slated for deprecation and will be removed from v17? That way we can say\r\n>> \"we provided ample warning to move to the new build system.\"\r\n> \r\n> Yes, we absolutely should do that, as already discussed upthread.\r\n\r\nAh yes, I saw Andres' notep[1] yesterday and had already forgotten. +1 \r\non that plan.\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/20230410223219.4tllxhz3hgwhh4tm%40awork3.anarazel.de",
"msg_date": "Tue, 11 Apr 2023 10:44:09 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 09:05:31 +0100, Dave Page wrote:\n> Probably my main concern is that the Meson build can use the same version\n> of the VC++ compiler that we use (v14), which is carefully matched for\n> compatibility with all the various components, just in case anything passes\n> CRT pointers around.\n\nFWIW, Independent of meson, I don't think support for VC 2015 in postgres is\nlong for the world. Later versions of msvc have increased the C standard\ncompliance a fair bit... It's also a few years out of normal support.\n\nI've not tested building with 2015, but I don't know of anything that should\nprevent building with meson with it. I am fairly sure that it can't build\ntab-complete.c, but you're presumably not building with tab completion support\nanyway?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 09:23:05 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 10:44:09 -0400, Jonathan S. Katz wrote:\n> On 4/11/23 10:12 AM, Tom Lane wrote:\n> > \"Jonathan S. Katz\" <[email protected]> writes:\n> > > On 4/11/23 9:49 AM, Tom Lane wrote:\n> > > > Sadly, I think we really have to ship both build systems in v16.\n> > \n> > > But maybe we can make it clear in the release notes + docs that this is\n> > > slated for deprecation and will be removed from v17? That way we can say\n> > > \"we provided ample warning to move to the new build system.\"\n> > \n> > Yes, we absolutely should do that, as already discussed upthread.\n> \n> Ah yes, I saw Andres' notep[1] yesterday and had already forgotten. +1 on\n> that plan.\n\nHere's a draft docs change.\n\nI added the <warning/> in two places in install-windows.sgml so it's visible\non both the generated pages in the chunked output. That does mean it's visible\ntwice nearby in the single-page output, but I don't think that's commonly\nused.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 11 Apr 2023 10:09:36 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Here's a draft docs change.\n\n> I added the <warning/> in two places in install-windows.sgml so it's visible\n> on both the generated pages in the chunked output. That does mean it's visible\n> twice nearby in the single-page output, but I don't think that's commonly\n> used.\n\nI don't agree with placing that first hunk before the para that tells\npeople to use a binary distribution, as it's completely irrelevant\nif they take that advice. I'm not really sure we need it at all,\nbecause quite a bit of the text right after that is not specific to using\nthe src/tools/msvc scripts either. I think the <warning> under \"Building\nwith <productname>Visual C++</productname> ...\" is sufficient.\n\nThe other two changes look fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 13:30:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On 2023-Apr-11, Michael Paquier wrote:\n\n> Getting a CI job able to do some validation for MSVC would be indeed\n> nice. What's the plan in the buildfarm with this coverage? Would all\n> the animals switch to meson (Chocolatey + StrawberryPerl, I assume)\n> for the job or will there still be some coverage with MSVC for v16\n> there?\n\nIf we keep MSVC support in 16, then I agree we should have a CI task for\nit -- and IMO we should make ti automatically triggers whenever any\nMakefile or meson.build is modified. Hopefully we won't touch them\nmuch now that the branch is feature-frozen, but it could still happen.\n\nDo we have code for it already, even if incomplete?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 11 Apr 2023 19:44:20 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 13:30:15 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Here's a draft docs change.\n> \n> > I added the <warning/> in two places in install-windows.sgml so it's visible\n> > on both the generated pages in the chunked output. That does mean it's visible\n> > twice nearby in the single-page output, but I don't think that's commonly\n> > used.\n> \n> I don't agree with placing that first hunk before the para that tells\n> people to use a binary distribution, as it's completely irrelevant\n> if they take that advice.\n\nFair point.\n\n\n> I'm not really sure we need it at all, because quite a bit of the text right\n> after that is not specific to using the src/tools/msvc scripts either. I\n> think the <warning> under \"Building with <productname>Visual\n> C++</productname> ...\" is sufficient.\n\nIt seemed nicer to have it on all the \"Installation from Source Code on Windows\"\npages, but...\n\nExcept that we're planning to remove it anyway, the structure of the docs here\nseems a bit off...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 10:57:04 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Except that we're planning to remove it anyway, the structure of the docs here\n> seems a bit off...\n\nIndeed. We'll have to migrate some of that info elsewhere when the\ntime comes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 14:49:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 19:44:20 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-11, Michael Paquier wrote:\n>\n> > Getting a CI job able to do some validation for MSVC would be indeed\n> > nice. What's the plan in the buildfarm with this coverage? Would all\n> > the animals switch to meson (Chocolatey + StrawberryPerl, I assume)\n> > for the job or will there still be some coverage with MSVC for v16\n> > there?\n>\n> If we keep MSVC support in 16, then I agree we should have a CI task for\n> it -- and IMO we should make ti automatically triggers whenever any\n> Makefile or meson.build is modified. Hopefully we won't touch them\n> much now that the branch is feature-frozen, but it could still happen.\n\nOnce 16 branched, we can just have it always run, I think. It's just the\ndevelopment branch where it's worth avoiding that (for cfbot and personal\nhackery).\n\nI guess we could do something like:\n\n manual: \"changesInclude('**.meson.build', '**Makefile*', '**.mk', 'src/tools/msvc/**')\"\n\nso the task would be manual triggered if none of those files change. If you\nhave write rights on the repository in question, you can trigger manual tasks\nwith a click.\n\n\n> Do we have code for it already, even if incomplete?\n\nMy meson branch has a commit adding a bunch of additional tasks. Including\nbuilding with src/tools/msvc, building with meson + msbuild, openbsd, netbsd.\n\nhttps://github.com/postgres/postgres/commit/8f7c2ffb5a5e8f0ef3722e2439484187c1356416\n\nCurrently src/tools/msvc does build successfully, although the tests haven't\nfinished yet:\nhttps://cirrus-ci.com/build/6298699714789376\n\n\n(the cause for the opensuse failure is known, need to find cycles to tackle\nthat, not related to meson)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 15:26:34 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When to drop src/tools/msvc support"
},
{
"msg_contents": "On 08.04.23 21:10, Andres Freund wrote:\n> Do we want to drop src/tools/msvc support in 16 (i.e. now), or do it early in\n> 17?\n\nCan you build using meson from a distribution tarball on Windows?\n\n\n",
"msg_date": "Wed, 12 Apr 2023 10:54:54 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to drop src/tools/msvc support"
}
] |
[
{
"msg_contents": "Hi all,\n\nThis patch does three things in the DecodeInterval function:\n\n1) Removes dead code for handling unit type RESERVE. There used to be\na unit called \"invalid\" that was of type RESERVE. At some point that\nunit was removed and there were no more units of type RESERVE.\nTherefore, the code for RESERVE unit handling is unreachable.\n\n2) Restrict the unit \"ago\" to only appear at the end of the\ninterval. According to the docs [0], this is the only valid place to\nput it, but we allowed it multiple times at any point in the input.\n\n3) Error when the user has multiple consecutive units or a unit without\nan accompanying value. I spent a lot of time trying to come up with\nrobust ways to detect this and ultimately settled on using the \"type\"\nfield. I'm not entirely happy with this approach, because it involves\nhaving to look ahead to the next field in a couple of places. The other\napproach I was considering was to introduce a boolean flag called\n\"unhandled_unit\". After parsing a unit it would be set to true, after\napplying the unit to a number it would be set to false. If it was true\nright before parsing a unit, then we would error. Please let me know\nif you have any suggestions here.\n\nThere is one more problem I noticed, but didn't fix. We allow multiple\n\"@\" to be sprinkled anywhere in the input, even though the docs [0]\nonly allow it to appear at the beginning of the input. For example,\nthe following query works fine:\n\n # SELECT INTERVAL '1 @ year @ @ @ 6 days @ 10 @ minutes';\n interval\n ------------------------\n 1 year 6 days 00:10:00\n (1 row)\n\nUnfortunately, this can't be fixed in DecodeInterval, because all of\nthe \"@\" fields are filtered out before this method. Additionally, I\nbelieve this means that the lines\n\n if (type == IGNORE_DTF)\n continue;\n\nin DecodeInterval, that appears right after decoding the units, are\nunreachable since\n\"@\" is the only unit of type IGNORE_DTF. Since \"@\" is filtered out,\nwe'll never decode a unit of type IGNORE_DTF.\n\nFor reference, I previously merged a couple similar patches to this\none, but for other date-time types [1], [2].\n\nThanks,\nJoe Koshakow\n\n[0]\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5b3c5953553bb9fb0b171abc6041e7c7e9ca5b4d\n[2]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=bcc704b52490492e6bd73c4444056b3e9644504d",
"msg_date": "Sun, 9 Apr 2023 20:43:48 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "DecodeInterval fixes"
},
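The flag-based check sketched in item 3 of the message above is easier to see in isolation. Below is a toy C sketch, not the actual DecodeInterval() code: check_interval_fields, the digit test, and the pre-split fields array are hypothetical stand-ins, and only the control flow (walking the fields from last to first, as the real parser does, while tracking a pending unit) is meant to carry over.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Toy model of the "unhandled unit" idea: walk the fields from last to
     * first, remember when a unit name is seen, and complain if another
     * unit arrives before a value has consumed it, or if a unit at the very
     * start never gets a value.  "ago" is only accepted as the final field.
     */
    static bool
    check_interval_fields(const char *fields[], int nf)
    {
        bool pending_unit = false;

        for (int i = nf - 1; i >= 0; i--)
        {
            const char *f = fields[i];

            if (strcmp(f, "ago") == 0)
            {
                if (i != nf - 1)
                    return false;       /* "ago" somewhere other than the end */
            }
            else if (strspn(f, "0123456789") == strlen(f))
                pending_unit = false;   /* a value consumes the waiting unit */
            else
            {
                if (pending_unit)
                    return false;       /* two units in a row, no value between */
                pending_unit = true;
            }
        }
        return !pending_unit;           /* a leading unit never got a value */
    }

    int
    main(void)
    {
        const char *ok[] = {"1", "year", "6", "days", "ago"};
        const char *bad[] = {"1", "year", "months", "days"};

        printf("%d %d\n", check_interval_fields(ok, 5), check_interval_fields(bad, 4));
        return 0;
    }

With this shape, '1 year 6 days ago' is accepted, while 'hour 5 months' and '1 year months days' are rejected: a unit either shows up while another unit is still waiting for its value, or never receives a value at all.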
{
"msg_contents": "Hi Joe, here's a partial review:\n\nOn Sun, Apr 9, 2023 at 5:44 PM Joseph Koshakow <[email protected]> wrote:\n> 1) Removes dead code for handling unit type RESERVE.\n\nLooks good. From a quick skim it looks like the ECPG copy of this code\n(ecpg/pgtypeslib/interval.c) might need to be updated as well?\n\n> 2) Restrict the unit \"ago\" to only appear at the end of the\n> interval. According to the docs [0], this is the only valid place to\n> put it, but we allowed it multiple times at any point in the input.\n\nAlso looks reasonable to me. (Same note re: ECPG.)\n\n> 3) Error when the user has multiple consecutive units or a unit without\n> an accompanying value. I spent a lot of time trying to come up with\n> robust ways to detect this and ultimately settled on using the \"type\"\n> field. I'm not entirely happy with this approach, because it involves\n> having to look ahead to the next field in a couple of places. The other\n> approach I was considering was to introduce a boolean flag called\n> \"unhandled_unit\". After parsing a unit it would be set to true, after\n> applying the unit to a number it would be set to false. If it was true\n> right before parsing a unit, then we would error. Please let me know\n> if you have any suggestions here.\n\nI'm new to this code, but I agree that the use of `type` and the\nlookahead are not particularly obvious/intuitive. At the very least,\nthey'd need some more explanation in the code. Your boolean flag idea\nsounds reasonable, though.\n\n> There is one more problem I noticed, but didn't fix. We allow multiple\n> \"@\" to be sprinkled anywhere in the input, even though the docs [0]\n> only allow it to appear at the beginning of the input.\n\n(No particular opinion on this.)\n\nIt looks like this patch needs a rebase for the CI, too, but there are\nno conflicts.\n\nThanks!\n--Jacob\n\n\n",
"msg_date": "Fri, 7 Jul 2023 09:52:33 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> Hi Joe, here's a partial review:\n> On Sun, Apr 9, 2023 at 5:44 PM Joseph Koshakow <[email protected]> wrote:\n>> 1) Removes dead code for handling unit type RESERVE.\n\n> Looks good. From a quick skim it looks like the ECPG copy of this code\n> (ecpg/pgtypeslib/interval.c) might need to be updated as well?\n\nThe ECPG datetime datatype support code has been basically unmaintained\nfor years, and has diverged quite far at this point from the core code.\nI wouldn't expect that a patch to the core code necessarily applies\neasily to ECPG, nor would I expect somebody patching the core to bother\ntrying.\n\nPerhaps modernizing/resyncing that ECPG code would be a worthwhile\nundertaking, but it'd be a mighty large one, and I'm not sure about\nthe size of the return. In the meantime, benign neglect is the policy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Jul 2023 19:13:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "Jacob Champion <[email protected]> writes:\n> Hi Joe, here's a partial review:\n\nThanks so much for the review!\n\n> I'm new to this code, but I agree that the use of `type` and the\n> lookahead are not particularly obvious/intuitive. At the very least,\n> they'd need some more explanation in the code. Your boolean flag idea\n> sounds reasonable, though.\n\nI've updated the patch with the boolean flag idea. I think it's a\nbit cleaner and more readable.\n\n>> There is one more problem I noticed, but didn't fix. We allow multiple\n>> \"@\" to be sprinkled anywhere in the input, even though the docs [0]\n>> only allow it to appear at the beginning of the input.\n>\n> (No particular opinion on this.)\n\nI looked into this a bit. The reason this works is because the date\ntime lexer filters out all punctuation. That's what allows us to parse\nthings like `SELECT date 'January 8, 1999';`. It's probably not worth\ntrying to be smarter about what punctuation we allow where, at least\nfor now. Maybe in the future we can exclude \"@\" from the punctuation\nthat get's filtered out.\n\n> It looks like this patch needs a rebase for the CI, too, but there are\n> no conflicts.\n\nThe attached patch is rebased against master.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 8 Jul 2023 13:18:32 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 4:13 PM Tom Lane <[email protected]> wrote:\n>\n> The ECPG datetime datatype support code has been basically unmaintained\n> for years, and has diverged quite far at this point from the core code.\n\nI was under the impression that anything in the postgresql.git\nrepository is considered core, and hence maintained as one unit, from\nrelease to release. An example of this, to me, were all the contrib/*\nmodules.\n\n> I wouldn't expect that a patch to the core code necessarily applies\n> easily to ECPG, nor would I expect somebody patching the core to bother\n> trying.\n\nThe above statement makes me think that only the code inside\nsrc/backend/ is considered core. Is that the right assumption?\n\n> Perhaps modernizing/resyncing that ECPG code would be a worthwhile\n> undertaking, but it'd be a mighty large one, and I'm not sure about\n> the size of the return. In the meantime, benign neglect is the policy.\n\nBenign neglect doesn't sound nice from a user's/consumer's\nperspective. Can it be labeled (i.e. declared as such in docs) as\ndeprecated.\n\nKnowing that the tool you use has now been deprecated would be a\nbetter message for someone still using it, even if it was left marked\ndeprecated indefinitely. Discovering benign neglect for the tool you\ndepend on, from secondary sources (like this list, forums, etc.), does\nnot evoke a lot of confidence.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sat, 8 Jul 2023 12:47:32 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "Gurjeet Singh <[email protected]> writes:\n> On Fri, Jul 7, 2023 at 4:13 PM Tom Lane <[email protected]> wrote:\n>> The ECPG datetime datatype support code has been basically unmaintained\n>> for years, and has diverged quite far at this point from the core code.\n\n> I was under the impression that anything in the postgresql.git\n> repository is considered core, and hence maintained as one unit, from\n> release to release.\n\nWhen I say that ecpglib is next door to unmaintained, I'm just stating\nthe facts on the ground; project policy is irrelevant. That situation\nis not likely to change until somebody steps up to do (a lot of) work\non it, which is probably unlikely to happen unless we start getting\nactual complaints from ECPG users. For the meantime, what's there now\nseems to be satisfactory to whoever is using it ... which might be\nnobody?\n\nIn any case, you don't have to look far to notice that some parts of\nthe tree are maintained far more actively than others. ecpglib is\njust one of the more identifiable bits that's receiving little love.\nThe quality of the code under contrib/ is wildly variable, and even\nthe server code itself has backwaters. (For instance, the \"bit\" types,\nwhich aren't even in the standard anymore; or the geometric types,\nor \"money\".)\n\nBy and large, I don't see this unevenness of maintenance effort as\na problem. It's more like directing our limited resources into the\nmost useful places. Code that isn't getting worked on is either not\nused at all by anybody, or it serves the needs of those who use it\nwell enough already. Since it's difficult to tell which of those\ncases applies, removing code just because it's not been improved\nlately is a hard choice to sell. But so is putting maintenance effort\ninto code that there might be little audience for. In the end we\nsolve this via the principle of \"scratch your own itch\": if somebody\nis concerned enough about a particular piece of code to put their own\ntime into improving it, then great, it'll get improved.\n\n> Benign neglect doesn't sound nice from a user's/consumer's\n> perspective. Can it be labeled (i.e. declared as such in docs) as\n> deprecated.\n\nDeprecation would imply that we're planning to remove it, which\nwe are not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Jul 2023 16:33:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 1:33 PM Tom Lane <[email protected]> wrote:\n>\n> Gurjeet Singh <[email protected]> writes:\n> > On Fri, Jul 7, 2023 at 4:13 PM Tom Lane <[email protected]> wrote:\n> >> The ECPG datetime datatype support code has been basically unmaintained\n> >> for years, and has diverged quite far at this point from the core code.\n>\n> > I was under the impression that anything in the postgresql.git\n> > repository is considered core, and hence maintained as one unit, from\n> > release to release.\n>\n> When I say that ecpglib is next door to unmaintained, I'm just stating\n> the facts on the ground; project policy is irrelevant. That situation\n> is not likely to change until somebody steps up to do (a lot of) work\n> on it, which is probably unlikely to happen unless we start getting\n> actual complaints from ECPG users. For the meantime, what's there now\n> seems to be satisfactory to whoever is using it ... which might be\n> nobody?\n>\n> In any case, you don't have to look far to notice that some parts of\n> the tree are maintained far more actively than others. ecpglib is\n> just one of the more identifiable bits that's receiving little love.\n> The quality of the code under contrib/ is wildly variable, and even\n> the server code itself has backwaters. (For instance, the \"bit\" types,\n> which aren't even in the standard anymore; or the geometric types,\n> or \"money\".)\n\nThanks for sharing your view on different parts of the code. This give\na clear direction if someone would be interested in stepping up.\n\nAs part of my mentoring a GSoC 2023 participant, last night we were\nlooking at the TODO wiki page, for something for the mentee to pick up\nnext. I feel the staleness/deficiencies you mention above are not\ncaptured in the TODO wiki page. It'd be nice if these were documented,\nso that newcomers to the community can pick up work that they feel is\nan easy lift for them.\n\n> By and large, I don't see this unevenness of maintenance effort as\n> a problem. It's more like directing our limited resources into the\n> most useful places. Code that isn't getting worked on is either not\n> used at all by anybody, or it serves the needs of those who use it\n> well enough already. Since it's difficult to tell which of those\n> cases applies, removing code just because it's not been improved\n> lately is a hard choice to sell. But so is putting maintenance effort\n> into code that there might be little audience for. In the end we\n> solve this via the principle of \"scratch your own itch\": if somebody\n> is concerned enough about a particular piece of code to put their own\n> time into improving it, then great, it'll get improved.\n>\n> > Benign neglect doesn't sound nice from a user's/consumer's\n> > perspective. Can it be labeled (i.e. declared as such in docs) as\n> > deprecated.\n>\n> Deprecation would imply that we're planning to remove it, which\n> we are not.\n\nGood to know. Sorry I took \"benign neglect\" to mean that there's no\nwillingness to improve it. This makes it clear that community wants to\nimprove and maintain ECPG, it's just a matter of someone volunteering,\nand better use of the resources available.\n\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Sat, 8 Jul 2023 14:05:53 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 5:06 PM Gurjeet Singh <[email protected]> wrote:\n\n> I feel the staleness/deficiencies you mention above are not\n> captured in the TODO wiki page. It'd be nice if these were documented,\n> so that newcomers to the community can pick up work that they feel is\n> an easy lift for them.\n\nI think that's a good idea. I've definitely been confused by this in\nprevious patches I've submitted.\n\n\nI've broken up this patch into three logical commits and attached them.\nNone of the actual code has changed.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sun, 9 Jul 2023 13:24:13 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Sat, 2023-07-08 at 13:18 -0400, Joseph Koshakow wrote:\n> Jacob Champion <[email protected]> writes:\n> > Hi Joe, here's a partial review:\n> \n> Thanks so much for the review!\n> \n> > I'm new to this code, but I agree that the use of `type` and the\n> > lookahead are not particularly obvious/intuitive. At the very\n> > least,\n> > they'd need some more explanation in the code. Your boolean flag\n> > idea\n> > sounds reasonable, though.\n> \n> I've updated the patch with the boolean flag idea. I think it's a\n> bit cleaner and more readable.\n> \n> > > There is one more problem I noticed, but didn't fix. We allow\n> > > multiple\n> > > \"@\" to be sprinkled anywhere in the input, even though the docs\n> > > [0]\n> > > only allow it to appear at the beginning of the input.\n> > \n> > (No particular opinion on this.)\n> \n> I looked into this a bit. The reason this works is because the date\n> time lexer filters out all punctuation. That's what allows us to\n> parse\n> things like `SELECT date 'January 8, 1999';`. It's probably not worth\n> trying to be smarter about what punctuation we allow where, at least\n> for now. Maybe in the future we can exclude \"@\" from the punctuation\n> that get's filtered out.\n> \n> > It looks like this patch needs a rebase for the CI, too, but there\n> > are\n> > no conflicts.\n> \n> The attached patch is rebased against master.\n> \n> Thanks,\n> Joe Koshakow\n\nApologies, I'm posting a little behind the curve here. My initial\nthoughts on the original patch mirrored Jacob's re 1 and 2 - that it looked\ngood, did we need to consider the modified ecpg copy (which has been\nanswered by Tom). I didn't have have any strong thought re 3) or the '@'. \n\nThe updated patch looks good to me. Seems a little clearer/cleaner.\n\nThanks,\nReid\n\n\n\n\n\n\n",
"msg_date": "Sun, 09 Jul 2023 15:11:22 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Sun, 2023-07-09 at 13:24 -0400, Joseph Koshakow wrote:\n> \n> I've broken up this patch into three logical commits and attached\n> them.\n> None of the actual code has changed.\n> \n> Thanks,\n> Joe Koshakow\n\nI made a another pass through the separated patches, it looks good to\nme. \n\nThanks,\nReid\n\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 13:14:12 -0400",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 10:19 AM Reid Thompson <[email protected]> wrote:\n> I made a another pass through the separated patches, it looks good to\n> me.\n\nLGTM too. (Thanks Tom for the clarification on ECPG.)\n\nMarked RfC.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 10 Jul 2023 10:42:31 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 10:42:31AM -0700, Jacob Champion wrote:\n> On Mon, Jul 10, 2023 at 10:19 AM Reid Thompson <[email protected]> wrote:\n>> I made a another pass through the separated patches, it looks good to\n>> me.\n> \n> LGTM too. (Thanks Tom for the clarification on ECPG.)\n\n+SELECT INTERVAL '42 days 2 seconds ago ago';\n+SELECT INTERVAL '2 minutes ago 5 days';\n[...]\n+SELECT INTERVAL 'hour 5 months';\n+SELECT INTERVAL '1 year months days 5 hours';\n\n0002 and 0003 make this stuff fail, but isn't there a risk that this\nbreaks applications that relied on these accidental behaviors?\nAssuming that this is OK makes me nervous.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 14:39:48 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 10:39 PM Michael Paquier <[email protected]> wrote:\n> 0002 and 0003 make this stuff fail, but isn't there a risk that this\n> breaks applications that relied on these accidental behaviors?\n> Assuming that this is OK makes me nervous.\n\nI wouldn't argue for backpatching, for sure, but I guess I saw this as\nfalling into the same vein as 5b3c5953 and bcc704b52 which were\nalready committed.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 22 Aug 2023 09:58:18 -0700",
"msg_from": "Jacob Champion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 12:58 PM Jacob Champion <[email protected]>\nwrote:\n>\n> On Mon, Aug 21, 2023 at 10:39 PM Michael Paquier <[email protected]>\nwrote:\n> > 0002 and 0003 make this stuff fail, but isn't there a risk that this\n> > breaks applications that relied on these accidental behaviors?\n> > Assuming that this is OK makes me nervous.\n>\n> I wouldn't argue for backpatching, for sure, but I guess I saw this as\n> falling into the same vein as 5b3c5953 and bcc704b52 which were\n> already committed.\n\nI agree, I don't think we should try and backport this. As Jacob\nhighlighted, we've merged similar patches for other date time types.\nIf applications were relying on this behavior, the upgrade may be a\ngood time for them to re-evaluate their usage since it's outside the\ndocumented spec and they may not be getting the units they're expecting\nfrom intervals like '1 day month'.\n\nThanks,\nJoe Koshakow\n\nOn Tue, Aug 22, 2023 at 12:58 PM Jacob Champion <[email protected]> wrote:>> On Mon, Aug 21, 2023 at 10:39 PM Michael Paquier <[email protected]> wrote:> > 0002 and 0003 make this stuff fail, but isn't there a risk that this> > breaks applications that relied on these accidental behaviors?> > Assuming that this is OK makes me nervous.>> I wouldn't argue for backpatching, for sure, but I guess I saw this as> falling into the same vein as 5b3c5953 and bcc704b52 which were> already committed.I agree, I don't think we should try and backport this. As Jacobhighlighted, we've merged similar patches for other date time types.If applications were relying on this behavior, the upgrade may be agood time for them to re-evaluate their usage since it's outside thedocumented spec and they may not be getting the units they're expectingfrom intervals like '1 day month'.Thanks,Joe Koshakow",
"msg_date": "Sun, 27 Aug 2023 16:14:00 -0400",
"msg_from": "Joseph Koshakow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DecodeInterval fixes"
},
{
"msg_contents": "On Sun, Aug 27, 2023 at 04:14:00PM -0400, Joseph Koshakow wrote:\n> On Tue, Aug 22, 2023 at 12:58 PM Jacob Champion <[email protected]>\n> wrote:\n>> I wouldn't argue for backpatching, for sure, but I guess I saw this as\n>> falling into the same vein as 5b3c5953 and bcc704b52 which were\n>> already committed.\n> \n> I agree, I don't think we should try and backport this. As Jacob\n> highlighted, we've merged similar patches for other date time types.\n> If applications were relying on this behavior, the upgrade may be a\n> good time for them to re-evaluate their usage since it's outside the\n> documented spec and they may not be getting the units they're expecting\n> from intervals like '1 day month'.\n\nI felt like asking anyway. I have looked at the patch series and the\npast compatibility changes in this area, and I kind of agree that this\nseems like an improvement against confusing interval values. So, I\nhave applied 0001, 0002 and 0003 after more review.\n\n0002 was a bit careless with the code indentation.\n\nIn 0003, I was wondering a bit if we'd better set parsing_unit_val to\nfalse for AGO, but as we do a backward lookup and because after 0002\nAGO can only be last, I've let the code as you have suggested, relying\non the initial value of this variable.\n--\nMichael",
"msg_date": "Mon, 28 Aug 2023 14:28:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DecodeInterval fixes"
}
] |
[
{
"msg_contents": "Hello, we encountered unexpected behavior from an ECPG program\ncomplied with the -C ORACLE option. The program executes the\nfollowing query:\n\n SELECT 123::numeric(3,0), 't'::char(2)\";\n\nCompilation and execution steps:\n\n$ ecpg -C ORACLE ecpgtest.pgc (attached)\n$ gcc -g -o ecpgtest ecpgtest.c -L `pg_config --libdir` -I `pg_config --includedir` -lecpg -lpgtypes\n$ ./ecpgtest\ntype: numeric : data: \"120\"\ntype: bpchar : data: \"t\"\n\nThe expected numeric value is \"123\".\n\n\nThe code below is the direct cause of the unanticipated data\nmodification.\n\ninterfaces/ecpg/ecpglib/data.c:581 (whitespaces are swueezed)\n> if (varcharsize == 0 || varcharsize > size)\n> {\n> /*\n> * compatibility mode, blank pad and null\n> * terminate char array\n> */\n> if (ORACLE_MODE(compat) && (type == ECPGt_char || type == ECPGt_unsigned_char))\n> {\n> memset(str, ' ', varcharsize);\n> memcpy(str, pval, size);\n> str[varcharsize - 1] = '\\0';\n\nThis results in overwriting str[-1], the last byte of the preceding\nnumeric in this case, with 0x00, representing the digit '0'. When\ncallers of ecpg_get_data() explicitly pass zero as varcharsize, they\nprovide storage that precisely fitting the data. However, it remains\nuncertain if this assumption is valid when ecpg_store_result() passes\nvar->varcharsize which is also zero. Consequently, the current fix\npresumes exact-fit storage when varcharsize is zero.\n\nI haven't fully checked, but at least back to 10 have this issue.\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 10 Apr 2023 17:35:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "eclg -C ORACLE breaks data"
},
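For readers following along, here is a small self-contained C sketch of the hazard described above. It is not ecpglib code and not the committed fix; copy_char_result and oracle_mode are hypothetical stand-ins, and the guard simply encodes the assumption stated in the report that varcharsize == 0 means the caller supplied exact-fit storage.

    #include <stdio.h>
    #include <string.h>

    /*
     * Stand-in for the ORACLE-mode branch quoted above.  With varcharsize == 0
     * the blank-padding path would run memset(str, ' ', 0) (harmless) and then
     * str[varcharsize - 1] = '\0', i.e. a write to str[-1] -- one byte before
     * the caller's buffer, which in the reported case happened to be the last
     * digit of the preceding numeric.  Only doing the padding when an explicit,
     * larger buffer size was given avoids that write.
     */
    static void
    copy_char_result(char *str, const char *pval, size_t size,
                     size_t varcharsize, int oracle_mode)
    {
        if (varcharsize == 0 || varcharsize > size)
        {
            if (oracle_mode && varcharsize > size)
            {
                /* blank pad and null terminate within the declared buffer */
                memset(str, ' ', varcharsize);
                memcpy(str, pval, size);
                str[varcharsize - 1] = '\0';
            }
            else
            {
                /* exact-fit storage: copy the data and terminator, nothing more */
                strncpy(str, pval, size + 1);
            }
        }
    }

    int
    main(void)
    {
        char buf[8];

        copy_char_result(buf, "t", 1, 0, 1);    /* varcharsize == 0, ORACLE mode */
        printf("\"%s\"\n", buf);                /* prints "t" */
        return 0;
    }

The design point is only that the blank-padding branch must never run when no explicit buffer size was given, because str[varcharsize - 1] with varcharsize == 0 addresses the byte just before the buffer.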
{
"msg_contents": "On Mon, Apr 10, 2023 at 05:35:00PM +0900, Kyotaro Horiguchi wrote:\n> This results in overwriting str[-1], the last byte of the preceding\n> numeric in this case, with 0x00, representing the digit '0'. When\n> callers of ecpg_get_data() explicitly pass zero as varcharsize, they\n> provide storage that precisely fitting the data.\n\nGood find, that's clearly wrong. The test case is interesting. On\nHEAD, the processing of the second field eats up the data of the first\nfield.\n\n> However, it remains\n> uncertain if this assumption is valid when ecpg_store_result() passes\n> var->varcharsize which is also zero. Consequently, the current fix\n> presumes exact-fit storage when varcharsize is zero.\n\nBased on what I can read in sqlda.c (ecpg_set_compat_sqlda() and\necpg_set_native_sqlda()), the data length calculated adds an extra\nbyte to the data length when storing the data references in sqldata.\nexecute.c and ecpg_store_result() is actually much trickier than that\n(see particularly the part where the code does an \"allocate memory for\nNULL pointers\", where varcharsize could also be 0), still I agree that\nthis assumption should be OK. The code is as it is for many years,\nwith its logic to do an estimation of allocation first, and then read\nthe data at once in the whole area allocated..\n\nThis thinko has been introduced by 3b7ab43, so this needs to go down\nto v11. I'll see to that.\n--\nMichael",
"msg_date": "Mon, 17 Apr 2023 17:00:59 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: eclg -C ORACLE breaks data"
},
{
"msg_contents": "(sorry for the wrong subject..)\n\nAt Mon, 17 Apr 2023 17:00:59 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Apr 10, 2023 at 05:35:00PM +0900, Kyotaro Horiguchi wrote:\n> > This results in overwriting str[-1], the last byte of the preceding\n> > numeric in this case, with 0x00, representing the digit '0'. When\n> > callers of ecpg_get_data() explicitly pass zero as varcharsize, they\n> > provide storage that precisely fitting the data.\n> \n> Good find, that's clearly wrong. The test case is interesting. On\n> HEAD, the processing of the second field eats up the data of the first\n> field.\n>\n> > However, it remains\n> > uncertain if this assumption is valid when ecpg_store_result() passes\n> > var->varcharsize which is also zero. Consequently, the current fix\n> > presumes exact-fit storage when varcharsize is zero.\n> \n> Based on what I can read in sqlda.c (ecpg_set_compat_sqlda() and\n> ecpg_set_native_sqlda()), the data length calculated adds an extra\n> byte to the data length when storing the data references in sqldata.\n> execute.c and ecpg_store_result() is actually much trickier than that\n> (see particularly the part where the code does an \"allocate memory for\n> NULL pointers\", where varcharsize could also be 0), still I agree that\n> this assumption should be OK. The code is as it is for many years,\n> with its logic to do an estimation of allocation first, and then read\n> the data at once in the whole area allocated..\n> \n> This thinko has been introduced by 3b7ab43, so this needs to go down\n> to v11. I'll see to that.\n\nThanks for picking this up.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 17 Apr 2023 17:47:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: eclg -C ORACLE breaks data"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 05:00:59PM +0900, Michael Paquier wrote:\n> This thinko has been introduced by 3b7ab43, so this needs to go down\n> to v11. I'll see to that.\n\nSo done. mylodon is feeling unhappy about that, because this has a\nC99 declaration:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2023-04-18%2002%3A22%3A04\nchar_array.pgc:73:8: error: variable declaration in for loop is a C99-specific feature [-Werror,-Wc99-extensions]\n for (int i = 0 ; i < sqlda->sqld ; i++)\n\nI'll go fix that in a minute, across all the branches for\nconsistency.\n--\nMichael",
"msg_date": "Tue, 18 Apr 2023 11:35:16 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: eclg -C ORACLE breaks data"
},
{
"msg_contents": "At Tue, 18 Apr 2023 11:35:16 +0900, Michael Paquier <[email protected]> wrote in \n> On Mon, Apr 17, 2023 at 05:00:59PM +0900, Michael Paquier wrote:\n> > This thinko has been introduced by 3b7ab43, so this needs to go down\n> > to v11. I'll see to that.\n> \n> So done. mylodon is feeling unhappy about that, because this has a\n\nThanks!\n\n> C99 declaration:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2023-04-18%2002%3A22%3A04\n> char_array.pgc:73:8: error: variable declaration in for loop is a C99-specific feature [-Werror,-Wc99-extensions]\n> for (int i = 0 ; i < sqlda->sqld ; i++)\n> \n> I'll go fix that in a minute, across all the branches for\n> consistency.\n\nOh, I didn't realize there were differences in the\nconfigurations. Good to know.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Apr 2023 12:56:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: eclg -C ORACLE breaks data"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 12:56:32PM +0900, Kyotaro Horiguchi wrote:\n> Oh, I didn't realize there were differences in the\n> configurations. Good to know.\n\nC99 declarations are OK in v12~, so with v11 going out of sight in\napproximately 6 month, it won't matter soon ;)\n--\nMichael",
"msg_date": "Thu, 20 Apr 2023 13:00:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: eclg -C ORACLE breaks data"
},
{
"msg_contents": "At Thu, 20 Apr 2023 13:00:52 +0900, Michael Paquier <[email protected]> wrote in \n> On Thu, Apr 20, 2023 at 12:56:32PM +0900, Kyotaro Horiguchi wrote:\n> > Oh, I didn't realize there were differences in the\n> > configurations. Good to know.\n> \n> C99 declarations are OK in v12~, so with v11 going out of sight in\n> approximately 6 month, it won't matter soon ;)\n\nAh. the time goes around..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Apr 2023 13:30:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: eclg -C ORACLE breaks data"
}
] |
[
{
"msg_contents": "When I am working on \"Pushing limit into subqueries of a union\" [1], I\nfound we already have a great infrastructure to support this. For a query\nlike\n\nsubquery-1 UNION ALL subquery-2 LIMIT 3;\n\nWe have considered the root->tuple_fraction when planning the subqueries\nwithout an extra Limit node as an overhead. But the reason it doesn't work\nin my real case is flatten_simple_union_all flat the union all subqueries\ninto append relation and we didn't handle the root->tuple_fraction during\nadd_paths_to_append_rel.\n\nGiven the below query for example:\nexplain analyze\n(select * from tenk1 order by hundred)\nunion all\n(select * from tenk2 order by hundred)\nlimit 3;\n\nWithout the patch: Execution Time: 7.856 ms\nwith the patch: Execution Time: 0.224 ms\n\nAny suggestion is welcome.\n\n[1] https://www.postgresql.org/message-id/11228.1118365833%40sss.pgh.pa.us\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 10 Apr 2023 16:35:09 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix the miss consideration of tuple_fraction during\n add_paths_to_append_rel"
},
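[Editorial note: a toy illustration of the decision described in the message above, not the planner's actual add_paths_to_append_rel logic or data structures (ToyPath and choose_append_child are invented names). When only a small fraction of the result will be fetched, an Append child should be picked by startup cost rather than total cost, which is what makes LIMIT 3 over pre-sorted subqueries cheap.

#include <stdio.h>

/* Toy path: only the two costs, standing in for the planner's Path nodes. */
typedef struct ToyPath
{
    double      startup_cost;
    double      total_cost;
} ToyPath;

/*
 * Toy version of the decision: when tuple_fraction > 0 (only part of the
 * result is needed), prefer the child's cheapest-startup path for the
 * Append; otherwise keep the cheapest-total path.
 */
static const ToyPath *
choose_append_child(const ToyPath *cheapest_total,
                    const ToyPath *cheapest_startup,
                    double tuple_fraction)
{
    if (tuple_fraction > 0.0 && cheapest_startup != NULL)
        return cheapest_startup;
    return cheapest_total;
}

int
main(void)
{
    ToyPath     sort_all = {800.0, 900.0};      /* must sort everything first */
    ToyPath     ordered_scan = {0.5, 1500.0};   /* returns first rows immediately */

    /* LIMIT 3 over a large input: the fraction is tiny, so startup wins */
    const ToyPath *chosen = choose_append_child(&sort_all, &ordered_scan, 0.0003);

    printf("chosen: startup=%.1f total=%.1f\n",
           chosen->startup_cost, chosen->total_cost);
    return 0;
}
]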
{
"msg_contents": "HI,\n\n\nOn Apr 10, 2023, 16:35 +0800, Andy Fan <[email protected]>, wrote:\n> When I am working on \"Pushing limit into subqueries of a union\" [1], I\n> found we already have a great infrastructure to support this. For a query\n> like\n>\n> subquery-1 UNION ALL subquery-2 LIMIT 3;\n>\n> We have considered the root->tuple_fraction when planning the subqueries\n> without an extra Limit node as an overhead. But the reason it doesn't work\n> in my real case is flatten_simple_union_all flat the union all subqueries\n> into append relation and we didn't handle the root->tuple_fraction during\n> add_paths_to_append_rel.\n>\n> Given the below query for example:\n> explain analyze\n> (select * from tenk1 order by hundred)\n> union all\n> (select * from tenk2 order by hundred)\n> limit 3;\n>\n> Without the patch: Execution Time: 7.856 ms\n> with the patch: Execution Time: 0.224 ms\n>\n> Any suggestion is welcome.\n>\n> [1] https://www.postgresql.org/message-id/11228.1118365833%40sss.pgh.pa.us\n>\n> --\n> Best Regards\n> Andy Fan\n\nThere is spare indent at else if.\n\n-\t\tif (childrel->pathlist != NIL &&\n+\t\tif (cheapest_startup_path && cheapest_startup_path->param_info == NULL)\n+\t\t\taccumulate_append_subpath(cheapest_startup_path,\n+\t\t\t\t\t\t\t\t\t &subpaths, NULL);\n+\t\t\telse if (childrel->pathlist != NIL &&\n \t\t\tchildrel->cheapest_total_path->param_info == NULL)\n \t\t\taccumulate_append_subpath(childrel->cheapest_total_path,\n \t\t\t\t\t\t\t\t\t &subpaths, NULL);\n\nCould we also consider tuple_fraction in partial_pathlist for parallel append?\n\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHI, \n\n\n\n\n\nOn Apr 10, 2023, 16:35 +0800, Andy Fan <[email protected]>, wrote:\nWhen I am working on \"Pushing limit into subqueries of a union\" [1], I\nfound we already have a great infrastructure to support this. For a query\nlike\n\nsubquery-1 UNION ALL subquery-2 LIMIT 3;\n\nWe have considered the root->tuple_fraction when planning the subqueries\nwithout an extra Limit node as an overhead. But the reason it doesn't work\nin my real case is flatten_simple_union_all flat the union all subqueries\ninto append relation and we didn't handle the root->tuple_fraction during\nadd_paths_to_append_rel. \n\nGiven the below query for example:\nexplain analyze\n(select * from tenk1 order by hundred)\nunion all\n(select * from tenk2 order by hundred)\nlimit 3;\n\nWithout the patch: Execution Time: 7.856 ms\nwith the patch: Execution Time: 0.224 ms\n\nAny suggestion is welcome.\n\n[1] https://www.postgresql.org/message-id/11228.1118365833%40sss.pgh.pa.us \n\n--\nBest Regards\nAndy Fan\n\nThere is spare indent at else if.\n\n-\t\tif (childrel->pathlist != NIL &&\n+\t\tif (cheapest_startup_path && cheapest_startup_path->param_info == NULL)\n+\t\t\taccumulate_append_subpath(cheapest_startup_path,\n+\t\t\t\t\t\t\t\t\t &subpaths, NULL);\n+\t\t\telse if (childrel->pathlist != NIL &&\n \t\t\tchildrel->cheapest_total_path->param_info == NULL)\n \t\t\taccumulate_append_subpath(childrel->cheapest_total_path,\n \t\t\t\t\t\t\t\t\t &subpaths, NULL);\n\nCould we also consider tuple_fraction in partial_pathlist for parallel append?\n\n\nRegards,\nZhang Mingli",
"msg_date": "Mon, 10 Apr 2023 21:56:29 +0800",
"msg_from": "Zhang Mingli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix the miss consideration of tuple_fraction during\n add_paths_to_append_rel"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 9:56 PM Zhang Mingli <[email protected]> wrote:\n\n>\n> There is spare indent at else if.\n>\n> - if (childrel->pathlist != NIL &&\n> + if (cheapest_startup_path && cheapest_startup_path->param_info == NULL)\n> + accumulate_append_subpath(cheapest_startup_path,\n> + &subpaths, NULL);\n> + else if (childrel->pathlist != NIL &&\n> childrel->cheapest_total_path->param_info == NULL)\n> accumulate_append_subpath(childrel->cheapest_total_path,\n> &subpaths, NULL);\n>\n> Could we also consider tuple_fraction in partial_pathlist for parallel\n> append?\n>\n>\nThanks for the suggestion, the v2 has fixed the indent issue and I did\nsomething about parallel append. Besides that, I restrict the changes\nhappens under bms_equal(rel->relids, root->all_query_rels), which may\nmake this patch safer.\n\nI have added this into https://commitfest.postgresql.org/43/4270/\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Wed, 12 Apr 2023 01:45:10 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix the miss consideration of tuple_fraction during\n add_paths_to_append_rel"
}
] |
[
{
"msg_contents": "As the comment above add_path() says, 'The pathlist is kept sorted by\ntotal_cost, with cheaper paths at the front.' And it seems that\nget_cheapest_parallel_safe_total_inner() relies on this ordering\n(without being mentioned in the comments, which I think we should do).\nI'm wondering if this is the right thing to do, as in other places\ncheapest total cost is found by compare_path_costs(), which would\nconsider startup cost if paths have the same total cost, and that seems\nmore sensible.\n\nAttach a trivial patch to make get_cheapest_parallel_safe_total_inner\nact this way.\n\nThanks\nRichard",
"msg_date": "Tue, 11 Apr 2023 11:03:27 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can we rely on the ordering of paths in pathlist?"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]> wrote:\n\n> As the comment above add_path() says, 'The pathlist is kept sorted by\n> total_cost, with cheaper paths at the front.' And it seems that\n> get_cheapest_parallel_safe_total_inner() relies on this ordering\n> (without being mentioned in the comments, which I think we should do).\n>\n\nI think the answer for ${subject} should be yes. Per the comments in\nadd_partial_path, we have\n\n * add_partial_path\n *\n * As in add_path, the partial_pathlist is kept sorted with the cheapest\n * total path in front. This is depended on by multiple places, which\n * just take the front entry as the cheapest path without searching.\n *\n.\n\n> I'm wondering if this is the right thing to do, as in other places\n> cheapest total cost is found by compare_path_costs(), which would\n> consider startup cost if paths have the same total cost, and that seems\n> more sensible.\n>\n> Attach a trivial patch to make get_cheapest_parallel_safe_total_inner\n> act this way.\n>\n>\nDid you run into any real issue with the current coding? If we have to\n\"consider startup cost if paths have the same total cost\", we still can\nrely on the ordering and stop iterating when the total_cost becomes\nbigger to avoid scanning all the paths in pathlist.\n\nBut if you are complaining the function prototype, where is the pathlist\nmay be not presorted, I think the better way maybe changes it from:\nPath *get_cheapest_parallel_safe_total_inner(List *paths) to\nPath *get_cheapest_parallel_safe_total_inner(RelOptInfo *rel);\nand scan the rel->pathlist in the function body.\n\n-- \nBest Regards\nAndy Fan\n\nOn Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]> wrote:As the comment above add_path() says, 'The pathlist is kept sorted bytotal_cost, with cheaper paths at the front.' And it seems thatget_cheapest_parallel_safe_total_inner() relies on this ordering(without being mentioned in the comments, which I think we should do).I think the answer for ${subject} should be yes. Per the comments inadd_partial_path, we have * add_partial_path * *\t As in add_path, the partial_pathlist is kept sorted with the cheapest *\t total path in front. This is depended on by multiple places, which *\t just take the front entry as the cheapest path without searching. *. I'm wondering if this is the right thing to do, as in other placescheapest total cost is found by compare_path_costs(), which wouldconsider startup cost if paths have the same total cost, and that seemsmore sensible.Attach a trivial patch to make get_cheapest_parallel_safe_total_inneract this way.Did you run into any real issue with the current coding? If we have to\"consider startup cost if paths have the same total cost\", we still canrely on the ordering and stop iterating when the total_cost becomesbigger to avoid scanning all the paths in pathlist. But if you are complaining the function prototype, where is the pathlistmay be not presorted, I think the better way maybe changes it from:Path *get_cheapest_parallel_safe_total_inner(List *paths) to Path *get_cheapest_parallel_safe_total_inner(RelOptInfo *rel);and scan the rel->pathlist in the function body. -- Best RegardsAndy Fan",
"msg_date": "Tue, 11 Apr 2023 11:43:27 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we rely on the ordering of paths in pathlist?"
},
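[Editorial note: the point made in the two messages above, shown as a self-contained toy; the struct and function here are illustrative stand-ins, not the real Path node or get_cheapest_parallel_safe_total_inner. If the list is known to be sorted by total_cost ascending, the scan can return at the first parallel-safe, unparameterized entry; without that guarantee it has to compare costs across the whole list.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy list entry: total cost plus the two flags the real loop filters on. */
typedef struct ToyPath
{
    double      total_cost;
    bool        parallel_safe;
    bool        parameterized;
} ToyPath;

/*
 * If 'cost_sorted' is true (the array is ordered by total_cost ascending,
 * as add_path keeps the pathlist), the first acceptable entry is already
 * the cheapest one and the scan stops there.  Otherwise the whole array
 * is walked and costs are compared explicitly.
 */
static const ToyPath *
cheapest_safe_unparameterized(const ToyPath *paths, size_t n, bool cost_sorted)
{
    const ToyPath *best = NULL;
    size_t      i;

    for (i = 0; i < n; i++)
    {
        const ToyPath *p = &paths[i];

        if (!p->parallel_safe || p->parameterized)
            continue;

        if (cost_sorted)
            return p;           /* first hit is cheapest: early exit */

        if (best == NULL || p->total_cost < best->total_cost)
            best = p;
    }
    return best;
}

int
main(void)
{
    ToyPath     paths[] = {
        {10.0, false, false},
        {12.0, true, false},
        {15.0, true, false},
    };
    const ToyPath *p = cheapest_safe_unparameterized(paths, 3, true);

    printf("cheapest parallel-safe cost: %.1f\n", p->total_cost);
    return 0;
}
]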
{
"msg_contents": "On Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]> wrote:\n\n> On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]>\n> wrote:\n>\n>> As the comment above add_path() says, 'The pathlist is kept sorted by\n>> total_cost, with cheaper paths at the front.' And it seems that\n>> get_cheapest_parallel_safe_total_inner() relies on this ordering\n>> (without being mentioned in the comments, which I think we should do).\n>>\n>\n> I think the answer for ${subject} should be yes. Per the comments in\n> add_partial_path, we have\n>\n> * add_partial_path\n> *\n> * As in add_path, the partial_pathlist is kept sorted with the cheapest\n> * total path in front. This is depended on by multiple places, which\n> * just take the front entry as the cheapest path without searching.\n> *\n>\n\nI'm not sure about this conclusion. Surely we can depend on that the\npartial_pathlist is kept sorted by total_cost ASC. This is emphasized\nin the comment of add_partial_path, and also leveraged in practice, such\nas in many places we just use linitial(rel->partial_pathlist) as the\ncheapest partial path.\n\nBut get_cheapest_parallel_safe_total_inner works on pathlist not\npartial_pathlist. And for pathlist, I'm not sure if it's a good\npractice to depend on its ordering. Because\n\n1) the comment of add_path only mentions that add_path_precheck relies\non this ordering, but it does not mention that other functions also do;\n\n2) other than add_path_precheck, I haven't observed any other functions\nthat rely on this ordering. The only exception as far as I notice is\nget_cheapest_parallel_safe_total_inner.\n\nOn the other hand, if we declare that we can rely on the pathlist being\nsorted in ascending order by total_cost, we should update the comment\nfor add_path to highlight this aspect. We should also include a comment\nfor get_cheapest_parallel_safe_total_inner to clarify why an early exit\nis possible, similar to what we do for add_path_precheck. Additionally,\nin several places, we can optimize our code by taking advantage of this\nfact and terminate the iteration through the pathlist early when looking\nfor the cheapest path of a certain kind.\n\nThanks\nRichard\n\nOn Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]> wrote:On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]> wrote:As the comment above add_path() says, 'The pathlist is kept sorted bytotal_cost, with cheaper paths at the front.' And it seems thatget_cheapest_parallel_safe_total_inner() relies on this ordering(without being mentioned in the comments, which I think we should do).I think the answer for ${subject} should be yes. Per the comments inadd_partial_path, we have * add_partial_path * *\t As in add_path, the partial_pathlist is kept sorted with the cheapest *\t total path in front. This is depended on by multiple places, which *\t just take the front entry as the cheapest path without searching. *I'm not sure about this conclusion. Surely we can depend on that thepartial_pathlist is kept sorted by total_cost ASC. This is emphasizedin the comment of add_partial_path, and also leveraged in practice, suchas in many places we just use linitial(rel->partial_pathlist) as thecheapest partial path.But get_cheapest_parallel_safe_total_inner works on pathlist notpartial_pathlist. And for pathlist, I'm not sure if it's a goodpractice to depend on its ordering. 
Because1) the comment of add_path only mentions that add_path_precheck relieson this ordering, but it does not mention that other functions also do;2) other than add_path_precheck, I haven't observed any other functionsthat rely on this ordering. The only exception as far as I notice isget_cheapest_parallel_safe_total_inner.On the other hand, if we declare that we can rely on the pathlist beingsorted in ascending order by total_cost, we should update the commentfor add_path to highlight this aspect. We should also include a commentfor get_cheapest_parallel_safe_total_inner to clarify why an early exitis possible, similar to what we do for add_path_precheck. Additionally,in several places, we can optimize our code by taking advantage of thisfact and terminate the iteration through the pathlist early when lookingfor the cheapest path of a certain kind.ThanksRichard",
"msg_date": "Wed, 10 Jan 2024 15:07:45 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can we rely on the ordering of paths in pathlist?"
},
{
"msg_contents": "Hi Richard Guo\n Is it necessary to add some comments here?\n\n\n+ if (!innerpath->parallel_safe ||\n+ !bms_is_empty(PATH_REQ_OUTER(innerpath)))\n+ continue;\n+\n+ if (matched_path != NULL &&\n+ compare_path_costs(matched_path, innerpath, TOTAL_COST) <= 0)\n+ continue;\n+\n+ matched_path = innerpath;\n\nRichard Guo <[email protected]> 于2024年1月10日周三 15:08写道:\n\n>\n> On Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]>\n> wrote:\n>\n>> On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]>\n>> wrote:\n>>\n>>> As the comment above add_path() says, 'The pathlist is kept sorted by\n>>> total_cost, with cheaper paths at the front.' And it seems that\n>>> get_cheapest_parallel_safe_total_inner() relies on this ordering\n>>> (without being mentioned in the comments, which I think we should do).\n>>>\n>>\n>> I think the answer for ${subject} should be yes. Per the comments in\n>> add_partial_path, we have\n>>\n>> * add_partial_path\n>> *\n>> * As in add_path, the partial_pathlist is kept sorted with the cheapest\n>> * total path in front. This is depended on by multiple places, which\n>> * just take the front entry as the cheapest path without searching.\n>> *\n>>\n>\n> I'm not sure about this conclusion. Surely we can depend on that the\n> partial_pathlist is kept sorted by total_cost ASC. This is emphasized\n> in the comment of add_partial_path, and also leveraged in practice, such\n> as in many places we just use linitial(rel->partial_pathlist) as the\n> cheapest partial path.\n>\n> But get_cheapest_parallel_safe_total_inner works on pathlist not\n> partial_pathlist. And for pathlist, I'm not sure if it's a good\n> practice to depend on its ordering. Because\n>\n> 1) the comment of add_path only mentions that add_path_precheck relies\n> on this ordering, but it does not mention that other functions also do;\n>\n> 2) other than add_path_precheck, I haven't observed any other functions\n> that rely on this ordering. The only exception as far as I notice is\n> get_cheapest_parallel_safe_total_inner.\n>\n> On the other hand, if we declare that we can rely on the pathlist being\n> sorted in ascending order by total_cost, we should update the comment\n> for add_path to highlight this aspect. We should also include a comment\n> for get_cheapest_parallel_safe_total_inner to clarify why an early exit\n> is possible, similar to what we do for add_path_precheck. Additionally,\n> in several places, we can optimize our code by taking advantage of this\n> fact and terminate the iteration through the pathlist early when looking\n> for the cheapest path of a certain kind.\n>\n> Thanks\n> Richard\n>\n\nHi Richard Guo Is it necessary to add some comments here?+\t\tif (!innerpath->parallel_safe ||+\t\t\t!bms_is_empty(PATH_REQ_OUTER(innerpath)))+\t\t\tcontinue;++\t\tif (matched_path != NULL &&+\t\t\tcompare_path_costs(matched_path, innerpath, TOTAL_COST) <= 0)+\t\t\tcontinue;++\t\tmatched_path = innerpath;Richard Guo <[email protected]> 于2024年1月10日周三 15:08写道:On Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]> wrote:On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]> wrote:As the comment above add_path() says, 'The pathlist is kept sorted bytotal_cost, with cheaper paths at the front.' And it seems thatget_cheapest_parallel_safe_total_inner() relies on this ordering(without being mentioned in the comments, which I think we should do).I think the answer for ${subject} should be yes. 
Per the comments inadd_partial_path, we have * add_partial_path * *\t As in add_path, the partial_pathlist is kept sorted with the cheapest *\t total path in front. This is depended on by multiple places, which *\t just take the front entry as the cheapest path without searching. *I'm not sure about this conclusion. Surely we can depend on that thepartial_pathlist is kept sorted by total_cost ASC. This is emphasizedin the comment of add_partial_path, and also leveraged in practice, suchas in many places we just use linitial(rel->partial_pathlist) as thecheapest partial path.But get_cheapest_parallel_safe_total_inner works on pathlist notpartial_pathlist. And for pathlist, I'm not sure if it's a goodpractice to depend on its ordering. Because1) the comment of add_path only mentions that add_path_precheck relieson this ordering, but it does not mention that other functions also do;2) other than add_path_precheck, I haven't observed any other functionsthat rely on this ordering. The only exception as far as I notice isget_cheapest_parallel_safe_total_inner.On the other hand, if we declare that we can rely on the pathlist beingsorted in ascending order by total_cost, we should update the commentfor add_path to highlight this aspect. We should also include a commentfor get_cheapest_parallel_safe_total_inner to clarify why an early exitis possible, similar to what we do for add_path_precheck. Additionally,in several places, we can optimize our code by taking advantage of thisfact and terminate the iteration through the pathlist early when lookingfor the cheapest path of a certain kind.ThanksRichard",
"msg_date": "Thu, 25 Jul 2024 16:18:49 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we rely on the ordering of paths in pathlist?"
},
{
"msg_contents": "Hi Richard Guo\n Today is the last day of the commitfest cycle.I think this patch should\nbe commented ,Except for the comments, I tested it good to me\n\n\nThanks\n\nwenhui qiu <[email protected]> 于2024年7月25日周四 16:18写道:\n\n> Hi Richard Guo\n> Is it necessary to add some comments here?\n>\n>\n> + if (!innerpath->parallel_safe ||\n> + !bms_is_empty(PATH_REQ_OUTER(innerpath)))\n> + continue;\n> +\n> + if (matched_path != NULL &&\n> + compare_path_costs(matched_path, innerpath, TOTAL_COST) <= 0)\n> + continue;\n> +\n> + matched_path = innerpath;\n>\n> Richard Guo <[email protected]> 于2024年1月10日周三 15:08写道:\n>\n>>\n>> On Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]>\n>> wrote:\n>>\n>>> On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]>\n>>> wrote:\n>>>\n>>>> As the comment above add_path() says, 'The pathlist is kept sorted by\n>>>> total_cost, with cheaper paths at the front.' And it seems that\n>>>> get_cheapest_parallel_safe_total_inner() relies on this ordering\n>>>> (without being mentioned in the comments, which I think we should do).\n>>>>\n>>>\n>>> I think the answer for ${subject} should be yes. Per the comments in\n>>> add_partial_path, we have\n>>>\n>>> * add_partial_path\n>>> *\n>>> * As in add_path, the partial_pathlist is kept sorted with the cheapest\n>>> * total path in front. This is depended on by multiple places, which\n>>> * just take the front entry as the cheapest path without searching.\n>>> *\n>>>\n>>\n>> I'm not sure about this conclusion. Surely we can depend on that the\n>> partial_pathlist is kept sorted by total_cost ASC. This is emphasized\n>> in the comment of add_partial_path, and also leveraged in practice, such\n>> as in many places we just use linitial(rel->partial_pathlist) as the\n>> cheapest partial path.\n>>\n>> But get_cheapest_parallel_safe_total_inner works on pathlist not\n>> partial_pathlist. And for pathlist, I'm not sure if it's a good\n>> practice to depend on its ordering. Because\n>>\n>> 1) the comment of add_path only mentions that add_path_precheck relies\n>> on this ordering, but it does not mention that other functions also do;\n>>\n>> 2) other than add_path_precheck, I haven't observed any other functions\n>> that rely on this ordering. The only exception as far as I notice is\n>> get_cheapest_parallel_safe_total_inner.\n>>\n>> On the other hand, if we declare that we can rely on the pathlist being\n>> sorted in ascending order by total_cost, we should update the comment\n>> for add_path to highlight this aspect. We should also include a comment\n>> for get_cheapest_parallel_safe_total_inner to clarify why an early exit\n>> is possible, similar to what we do for add_path_precheck. 
Additionally,\n>> in several places, we can optimize our code by taking advantage of this\n>> fact and terminate the iteration through the pathlist early when looking\n>> for the cheapest path of a certain kind.\n>>\n>> Thanks\n>> Richard\n>>\n>\n\nHi Richard Guo Today is the last day of the commitfest cycle.I think this patch should be commented ,Except for the comments, I tested it good to meThankswenhui qiu <[email protected]> 于2024年7月25日周四 16:18写道:Hi Richard Guo Is it necessary to add some comments here?+\t\tif (!innerpath->parallel_safe ||+\t\t\t!bms_is_empty(PATH_REQ_OUTER(innerpath)))+\t\t\tcontinue;++\t\tif (matched_path != NULL &&+\t\t\tcompare_path_costs(matched_path, innerpath, TOTAL_COST) <= 0)+\t\t\tcontinue;++\t\tmatched_path = innerpath;Richard Guo <[email protected]> 于2024年1月10日周三 15:08写道:On Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]> wrote:On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]> wrote:As the comment above add_path() says, 'The pathlist is kept sorted bytotal_cost, with cheaper paths at the front.' And it seems thatget_cheapest_parallel_safe_total_inner() relies on this ordering(without being mentioned in the comments, which I think we should do).I think the answer for ${subject} should be yes. Per the comments inadd_partial_path, we have * add_partial_path * *\t As in add_path, the partial_pathlist is kept sorted with the cheapest *\t total path in front. This is depended on by multiple places, which *\t just take the front entry as the cheapest path without searching. *I'm not sure about this conclusion. Surely we can depend on that thepartial_pathlist is kept sorted by total_cost ASC. This is emphasizedin the comment of add_partial_path, and also leveraged in practice, suchas in many places we just use linitial(rel->partial_pathlist) as thecheapest partial path.But get_cheapest_parallel_safe_total_inner works on pathlist notpartial_pathlist. And for pathlist, I'm not sure if it's a goodpractice to depend on its ordering. Because1) the comment of add_path only mentions that add_path_precheck relieson this ordering, but it does not mention that other functions also do;2) other than add_path_precheck, I haven't observed any other functionsthat rely on this ordering. The only exception as far as I notice isget_cheapest_parallel_safe_total_inner.On the other hand, if we declare that we can rely on the pathlist beingsorted in ascending order by total_cost, we should update the commentfor add_path to highlight this aspect. We should also include a commentfor get_cheapest_parallel_safe_total_inner to clarify why an early exitis possible, similar to what we do for add_path_precheck. Additionally,in several places, we can optimize our code by taking advantage of thisfact and terminate the iteration through the pathlist early when lookingfor the cheapest path of a certain kind.ThanksRichard",
"msg_date": "Wed, 31 Jul 2024 09:21:17 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we rely on the ordering of paths in pathlist?"
},
{
"msg_contents": "Hi Richard Guo\n I tried to changed the comment, can you help me to check if this is\ncorrect?Many thanks.\n- /*\n- * get_cheapest_parallel_safe_total_inner\n- * Find the unparameterized parallel-safe path with the least total cost.\n- */\n+ /* get_cheapest_parallel_safe_total_inner\n+ * Skip paths that do not meet the criteria,find the unparameterized\nparallel-safe path with the least total cost and return NULL if it does not\nexist.\n+ *\n+ */\n\nThanks\n\nwenhui qiu <[email protected]> 于2024年7月31日周三 09:21写道:\n\n> Hi Richard Guo\n> Today is the last day of the commitfest cycle.I think this patch\n> should be commented ,Except for the comments, I tested it good to me\n>\n>\n> Thanks\n>\n> wenhui qiu <[email protected]> 于2024年7月25日周四 16:18写道:\n>\n>> Hi Richard Guo\n>> Is it necessary to add some comments here?\n>>\n>>\n>> + if (!innerpath->parallel_safe ||\n>> + !bms_is_empty(PATH_REQ_OUTER(innerpath)))\n>> + continue;\n>> +\n>> + if (matched_path != NULL &&\n>> + compare_path_costs(matched_path, innerpath, TOTAL_COST) <= 0)\n>> + continue;\n>> +\n>> + matched_path = innerpath;\n>>\n>> Richard Guo <[email protected]> 于2024年1月10日周三 15:08写道:\n>>\n>>>\n>>> On Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]>\n>>> wrote:\n>>>\n>>>> On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> As the comment above add_path() says, 'The pathlist is kept sorted by\n>>>>> total_cost, with cheaper paths at the front.' And it seems that\n>>>>> get_cheapest_parallel_safe_total_inner() relies on this ordering\n>>>>> (without being mentioned in the comments, which I think we should do).\n>>>>>\n>>>>\n>>>> I think the answer for ${subject} should be yes. Per the comments in\n>>>> add_partial_path, we have\n>>>>\n>>>> * add_partial_path\n>>>> *\n>>>> * As in add_path, the partial_pathlist is kept sorted with the\n>>>> cheapest\n>>>> * total path in front. This is depended on by multiple places, which\n>>>> * just take the front entry as the cheapest path without searching.\n>>>> *\n>>>>\n>>>\n>>> I'm not sure about this conclusion. Surely we can depend on that the\n>>> partial_pathlist is kept sorted by total_cost ASC. This is emphasized\n>>> in the comment of add_partial_path, and also leveraged in practice, such\n>>> as in many places we just use linitial(rel->partial_pathlist) as the\n>>> cheapest partial path.\n>>>\n>>> But get_cheapest_parallel_safe_total_inner works on pathlist not\n>>> partial_pathlist. And for pathlist, I'm not sure if it's a good\n>>> practice to depend on its ordering. Because\n>>>\n>>> 1) the comment of add_path only mentions that add_path_precheck relies\n>>> on this ordering, but it does not mention that other functions also do;\n>>>\n>>> 2) other than add_path_precheck, I haven't observed any other functions\n>>> that rely on this ordering. The only exception as far as I notice is\n>>> get_cheapest_parallel_safe_total_inner.\n>>>\n>>> On the other hand, if we declare that we can rely on the pathlist being\n>>> sorted in ascending order by total_cost, we should update the comment\n>>> for add_path to highlight this aspect. We should also include a comment\n>>> for get_cheapest_parallel_safe_total_inner to clarify why an early exit\n>>> is possible, similar to what we do for add_path_precheck. 
Additionally,\n>>> in several places, we can optimize our code by taking advantage of this\n>>> fact and terminate the iteration through the pathlist early when looking\n>>> for the cheapest path of a certain kind.\n>>>\n>>> Thanks\n>>> Richard\n>>>\n>>\n\nHi Richard Guo I tried to changed the comment, can you help me to check if this is correct?Many thanks.- /*- * get_cheapest_parallel_safe_total_inner- *\t Find the unparameterized parallel-safe path with the least total cost.- */+ /* get_cheapest_parallel_safe_total_inner+ * Skip paths that do not meet the criteria,find the unparameterized parallel-safe path with the least total cost and return NULL if it does not exist.+ *+ */Thanks wenhui qiu <[email protected]> 于2024年7月31日周三 09:21写道:Hi Richard Guo Today is the last day of the commitfest cycle.I think this patch should be commented ,Except for the comments, I tested it good to meThankswenhui qiu <[email protected]> 于2024年7月25日周四 16:18写道:Hi Richard Guo Is it necessary to add some comments here?+\t\tif (!innerpath->parallel_safe ||+\t\t\t!bms_is_empty(PATH_REQ_OUTER(innerpath)))+\t\t\tcontinue;++\t\tif (matched_path != NULL &&+\t\t\tcompare_path_costs(matched_path, innerpath, TOTAL_COST) <= 0)+\t\t\tcontinue;++\t\tmatched_path = innerpath;Richard Guo <[email protected]> 于2024年1月10日周三 15:08写道:On Tue, Apr 11, 2023 at 11:43 AM Andy Fan <[email protected]> wrote:On Tue, Apr 11, 2023 at 11:03 AM Richard Guo <[email protected]> wrote:As the comment above add_path() says, 'The pathlist is kept sorted bytotal_cost, with cheaper paths at the front.' And it seems thatget_cheapest_parallel_safe_total_inner() relies on this ordering(without being mentioned in the comments, which I think we should do).I think the answer for ${subject} should be yes. Per the comments inadd_partial_path, we have * add_partial_path * *\t As in add_path, the partial_pathlist is kept sorted with the cheapest *\t total path in front. This is depended on by multiple places, which *\t just take the front entry as the cheapest path without searching. *I'm not sure about this conclusion. Surely we can depend on that thepartial_pathlist is kept sorted by total_cost ASC. This is emphasizedin the comment of add_partial_path, and also leveraged in practice, suchas in many places we just use linitial(rel->partial_pathlist) as thecheapest partial path.But get_cheapest_parallel_safe_total_inner works on pathlist notpartial_pathlist. And for pathlist, I'm not sure if it's a goodpractice to depend on its ordering. Because1) the comment of add_path only mentions that add_path_precheck relieson this ordering, but it does not mention that other functions also do;2) other than add_path_precheck, I haven't observed any other functionsthat rely on this ordering. The only exception as far as I notice isget_cheapest_parallel_safe_total_inner.On the other hand, if we declare that we can rely on the pathlist beingsorted in ascending order by total_cost, we should update the commentfor add_path to highlight this aspect. We should also include a commentfor get_cheapest_parallel_safe_total_inner to clarify why an early exitis possible, similar to what we do for add_path_precheck. Additionally,in several places, we can optimize our code by taking advantage of thisfact and terminate the iteration through the pathlist early when lookingfor the cheapest path of a certain kind.ThanksRichard",
"msg_date": "Tue, 17 Sep 2024 16:48:16 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we rely on the ordering of paths in pathlist?"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile doing a few things on Windows with meson, I have noticed that,\nwhile we output some information related to bison after a setup step,\nthere is nothing about flex.\n\nI think that adding something about flex in the \"Programs\" section\nwould be pretty useful, particularly for Windows as the command used\ncould be \"flex\" as much as \"win_flex.exe\".\n\nAttached is a patch to show the path to the flex command used, as well\nas its version.\n\nOpinions or thoughts?\n--\nMichael",
"msg_date": "Tue, 11 Apr 2023 14:58:45 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add information about command path and version of flex in meson\n output"
},
{
"msg_contents": "On 11.04.23 07:58, Michael Paquier wrote:\n> While doing a few things on Windows with meson, I have noticed that,\n> while we output some information related to bison after a setup step,\n> there is nothing about flex.\n> \n> I think that adding something about flex in the \"Programs\" section\n> would be pretty useful, particularly for Windows as the command used\n> could be \"flex\" as much as \"win_flex.exe\".\n\nI think this would be useful.\n\n > + flex_version = run_command(flex, '--version', check: true)\n > + flex_version = flex_version.stdout().split(' ')[1].split('\\n')[0]\n\nMaybe this could be combined into one command?\n\nLooks good otherwise.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 08:34:39 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add information about command path and version of flex in meson\n output"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 08:34:39AM +0200, Peter Eisentraut wrote:\n> Maybe this could be combined into one command?\n\nOn clarity ground, I am not sure that combining both is a good idea.\nPerhaps the use of a different variable, like bison a few lines above,\nmakes things cleaner?\n\n> Looks good otherwise.\n\nThanks for the review.\n--\nMichael",
"msg_date": "Mon, 3 Jul 2023 16:30:26 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add information about command path and version of flex in meson\n output"
},
{
"msg_contents": "On 03.07.23 09:30, Michael Paquier wrote:\n> On Mon, Jul 03, 2023 at 08:34:39AM +0200, Peter Eisentraut wrote:\n>> Maybe this could be combined into one command?\n> \n> On clarity ground, I am not sure that combining both is a good idea.\n> Perhaps the use of a different variable, like bison a few lines above,\n> makes things cleaner?\n\nYes, if you want two separate lines, then doing it like bison makes sense.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 10:17:40 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add information about command path and version of flex in meson\n output"
},
{
"msg_contents": "On Mon, Jul 03, 2023 at 10:17:40AM +0200, Peter Eisentraut wrote:\n> Yes, if you want two separate lines, then doing it like bison makes sense.\n\nOkay, I have applied v2 that uses two separate lines and two separate\nvariables, then, to be like bison.\n--\nMichael",
"msg_date": "Tue, 4 Jul 2023 07:29:02 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add information about command path and version of flex in meson\n output"
}
] |
[
{
"msg_contents": "cfbot [1] is listing some already committed patches under the \"Needs\nReview\" category. For example here are some of mine [1][2]. And\nbecause they are already committed, the 'apply' fails, so they get\nflagged by cfbot as needed rebase.\n\nSomething seems broken.\n\n------\n[1] http://cfbot.cputube.org/next.html\n[2] https://commitfest.postgresql.org/43/4246/\n[3] https://commitfest.postgresql.org/43/4266/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 11 Apr 2023 16:15:42 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "cfbot is listing committed patches?"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 6:16 PM Peter Smith <[email protected]> wrote:\n> cfbot [1] is listing some already committed patches under the \"Needs\n> Review\" category. For example here are some of mine [1][2]. And\n> because they are already committed, the 'apply' fails, so they get\n> flagged by cfbot as needed rebase.\n>\n> Something seems broken.\n\nThanks, fixed. It was confused because after CF #42 was recently\nclosed, #43 became \"current\" but there is not yet a \"next\" commitfest.\nI never noticed before, but I guess those are manually created.\n\n\n",
"msg_date": "Tue, 11 Apr 2023 18:36:12 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cfbot is listing committed patches?"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 4:36 PM Thomas Munro <[email protected]> wrote:\n>\n> On Tue, Apr 11, 2023 at 6:16 PM Peter Smith <[email protected]> wrote:\n> > cfbot [1] is listing some already committed patches under the \"Needs\n> > Review\" category. For example here are some of mine [1][2]. And\n> > because they are already committed, the 'apply' fails, so they get\n> > flagged by cfbot as needed rebase.\n> >\n> > Something seems broken.\n>\n> Thanks, fixed. It was confused because after CF #42 was recently\n> closed, #43 became \"current\" but there is not yet a \"next\" commitfest.\n> I never noticed before, but I guess those are manually created.\n\nOh, right. And I was mistakenly looking at the cfbot page of the\n\"next\" commitfest even though there is no such thing. Thanks for the\nfix/explanation.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 11 Apr 2023 17:00:35 +1000",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cfbot is listing committed patches?"
}
] |
[
{
"msg_contents": "The check for parallel_safe should be even cheaper than cost comparison\nso I think it's better to do that first. The attached patch does this\nand also updates the comment to mention the requirement about being\nparallel-safe.\n\nThanks\nRichard",
"msg_date": "Tue, 11 Apr 2023 15:19:41 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "A minor adjustment to get_cheapest_path_for_pathkeys"
},
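[Editorial note: a minimal sketch of the reordering proposed above, again with toy types rather than the actual pathkeys.c code (ToyPath, toy_compare_costs, and candidate_is_better are invented names). The boolean parallel-safety test is strictly cheaper than a cost comparison, so doing it first lets unsafe candidates be rejected before any cost arithmetic runs.

#include <stdbool.h>
#include <stdio.h>

typedef struct ToyPath
{
    double      startup_cost;
    double      total_cost;
    bool        parallel_safe;
} ToyPath;

/* Stand-in for compare_path_costs(): strictly more work than a flag test. */
static int
toy_compare_costs(const ToyPath *a, const ToyPath *b)
{
    if (a->total_cost != b->total_cost)
        return (a->total_cost < b->total_cost) ? -1 : 1;
    if (a->startup_cost != b->startup_cost)
        return (a->startup_cost < b->startup_cost) ? -1 : 1;
    return 0;
}

/*
 * The proposed ordering: reject parallel-unsafe candidates with the cheap
 * boolean test before paying for the cost comparison.
 */
static bool
candidate_is_better(const ToyPath *candidate, const ToyPath *matched,
                    bool require_parallel_safe)
{
    if (require_parallel_safe && !candidate->parallel_safe)
        return false;           /* cheap test first */
    return matched == NULL || toy_compare_costs(candidate, matched) < 0;
}

int
main(void)
{
    ToyPath     unsafe = {1.0, 100.0, false};
    ToyPath     safe = {2.0, 120.0, true};

    printf("unsafe accepted: %d\n", candidate_is_better(&unsafe, NULL, true)); /* 0 */
    printf("safe accepted: %d\n", candidate_is_better(&safe, NULL, true));     /* 1 */
    return 0;
}
]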
{
"msg_contents": "Hi,\n\n> The check for parallel_safe should be even cheaper than cost comparison\n> so I think it's better to do that first. The attached patch does this\n> and also updates the comment to mention the requirement about being\n> parallel-safe.\n\nThe patch was marked as \"Needs review\" so I decided to take a look.\n\nI see the reasoning behind the proposed change, but I'm not convinced\nthat there will be any measurable performance improvements. Firstly,\ncompare_path_costs() is rather cheap. Secondly, require_parallel_safe\nis `false` in most of the cases. Last but not least, one should prove\nthat this particular place is a bottleneck under given loads. I doubt\nit is. Most of the time it's a network, disk I/O or locks.\n\nSo unless the author can provide benchmarks that show measurable\nbenefits of the change I suggest rejecting it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 11 Jul 2023 15:16:38 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 8:16 PM Aleksander Alekseev <\[email protected]> wrote:\n\n> Hi,\n>\n> > The check for parallel_safe should be even cheaper than cost comparison\n> > so I think it's better to do that first. The attached patch does this\n> > and also updates the comment to mention the requirement about being\n> > parallel-safe.\n>\n> The patch was marked as \"Needs review\" so I decided to take a look.\n\n\nThanks for the review!\n\n\n> I see the reasoning behind the proposed change, but I'm not convinced\n> that there will be any measurable performance improvements. Firstly,\n> compare_path_costs() is rather cheap. Secondly, require_parallel_safe\n> is `false` in most of the cases. Last but not least, one should prove\n> that this particular place is a bottleneck under given loads. I doubt\n> it is. Most of the time it's a network, disk I/O or locks.\n>\n> So unless the author can provide benchmarks that show measurable\n> benefits of the change I suggest rejecting it.\n\n\nHmm, I doubt that there would be any measurable performance gains from\nthis minor tweak. I think this tweak is more about being cosmetic. But\nI'm OK if it is deemed unnecessary and thus rejected.\n\nAnother change in this patch is to mention the requirement about being\nparallel-safe in the comment.\n\n * Find the cheapest path (according to the specified criterion) that\n- * satisfies the given pathkeys and parameterization.\n+ * satisfies the given pathkeys and parameterization, and is\nparallel-safe\n+ * if required.\n\nMaybe this is something that is worthwhile to do?\n\nThanks\nRichard\n\nOn Tue, Jul 11, 2023 at 8:16 PM Aleksander Alekseev <[email protected]> wrote:Hi,\n\n> The check for parallel_safe should be even cheaper than cost comparison\n> so I think it's better to do that first. The attached patch does this\n> and also updates the comment to mention the requirement about being\n> parallel-safe.\n\nThe patch was marked as \"Needs review\" so I decided to take a look.Thanks for the review! \nI see the reasoning behind the proposed change, but I'm not convinced\nthat there will be any measurable performance improvements. Firstly,\ncompare_path_costs() is rather cheap. Secondly, require_parallel_safe\nis `false` in most of the cases. Last but not least, one should prove\nthat this particular place is a bottleneck under given loads. I doubt\nit is. Most of the time it's a network, disk I/O or locks.\n\nSo unless the author can provide benchmarks that show measurable\nbenefits of the change I suggest rejecting it.Hmm, I doubt that there would be any measurable performance gains fromthis minor tweak. I think this tweak is more about being cosmetic. ButI'm OK if it is deemed unnecessary and thus rejected.Another change in this patch is to mention the requirement about beingparallel-safe in the comment. * Find the cheapest path (according to the specified criterion) that- * satisfies the given pathkeys and parameterization.+ * satisfies the given pathkeys and parameterization, and is parallel-safe+ * if required.Maybe this is something that is worthwhile to do?ThanksRichard",
"msg_date": "Wed, 19 Jul 2023 18:12:07 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "Hi,\n\n>> I see the reasoning behind the proposed change, but I'm not convinced\n>> that there will be any measurable performance improvements. Firstly,\n>> compare_path_costs() is rather cheap. Secondly, require_parallel_safe\n>> is `false` in most of the cases. Last but not least, one should prove\n>> that this particular place is a bottleneck under given loads. I doubt\n>> it is. Most of the time it's a network, disk I/O or locks.\n>>\n>> So unless the author can provide benchmarks that show measurable\n>> benefits of the change I suggest rejecting it.\n>\n> Hmm, I doubt that there would be any measurable performance gains from\n> this minor tweak. I think this tweak is more about being cosmetic. But\n> I'm OK if it is deemed unnecessary and thus rejected.\n\nDuring the triage of the patches submitted for the September CF a\nconsensus was reached [1] to mark this patch as Rejected.\n\n[1]: https://postgr.es/m/0737f444-59bb-ac1d-2753-873c40da0840%40eisentraut.org\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 15:35:12 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Mon, Sep 4, 2023 at 10:17 AM Aleksander Alekseev\n<[email protected]> wrote:\n> During the triage of the patches submitted for the September CF a\n> consensus was reached [1] to mark this patch as Rejected.\n\nI don't think that's the correct conclusion. You said here that you\ndidn't think the patch was valuable. Then you said the same thing over\nthere. You agreeing with yourself is not a consensus.\n\nI think this patch is a good idea and should be committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 11:09:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "Hi Robert,\n\n> I don't think that's the correct conclusion. You said here that you\n> didn't think the patch was valuable. Then you said the same thing over\n> there. You agreeing with yourself is not a consensus.\n\nThe word \"consensus\" was a poor choice for sure. The fact that I\nsuggested to reject the patch and nobody objected straight away is not\na consensus.\n\n> I think this patch is a good idea and should be committed.\n\nNo problem, changing status back to \"Needs review\".\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 5 Sep 2023 19:01:09 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "Hi,\n\n> > I think this patch is a good idea and should be committed.\n>\n> No problem, changing status back to \"Needs review\".\n\nNow when we continue reviewing the patch, could you please elaborate a\nbit on why you think it's worth committing?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 5 Sep 2023 19:05:14 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 12:05 PM Aleksander Alekseev\n<[email protected]> wrote:\n> Now when we continue reviewing the patch, could you please elaborate a\n> bit on why you think it's worth committing?\n\nWell, why not? The test he's proposing to move earlier doesn't involve\ncalling a function, so it should be cheaper than the one he's moving\nit past, which does.\n\nI mean, I don't know whether the savings are going to be measurable on\na benchmark, but I guess I don't particularly see why it matters. Why\nwrite a function that says \"this thing is cheaper so we test it first\"\nand then perform a cheaper test afterwards? That's just silly. We can\neither change the comment to say \"we do this first for no reason even\nthough it would be more sensible to do the cheap test first\" or we can\nreorder the tests to match the principle set forth in the existing\ncomment.\n\nI mean, unless there's some reason why it *isn't* cheaper. In that\ncase we should have a different conversation...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 13:14:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 4:06 AM Robert Haas <[email protected]> wrote:\n\n> On Tue, Sep 5, 2023 at 12:05 PM Aleksander Alekseev\n> <[email protected]> wrote:\n> > Now when we continue reviewing the patch, could you please elaborate a\n> > bit on why you think it's worth committing?\n>\n> Well, why not? The test he's proposing to move earlier doesn't involve\n> calling a function, so it should be cheaper than the one he's moving\n> it past, which does.\n>\n> I mean, I don't know whether the savings are going to be measurable on\n> a benchmark, but I guess I don't particularly see why it matters. Why\n> write a function that says \"this thing is cheaper so we test it first\"\n> and then perform a cheaper test afterwards? That's just silly. We can\n> either change the comment to say \"we do this first for no reason even\n> though it would be more sensible to do the cheap test first\" or we can\n> reorder the tests to match the principle set forth in the existing\n> comment.\n>\n> I mean, unless there's some reason why it *isn't* cheaper. In that\n> case we should have a different conversation...\n>\n\nI like this consultation, so +1 from me :)\n\nI guess the *valuable* sometimes means the effort we pay is greater\nthan the benefit we get, As for this patch, the benefit is not huge (it\nis possible the compiler already does that). and the most effort we pay\nshould be committer's attention, who needs to grab the patch, write the\ncorrect commit and credit to the author and push it. I'm not sure if\nAleksander is worrying that this kind of patch will grab too much of\nthe committer's attention and I do see there are lots of patches like\nthis.\n\nIn my opinion, we can do some stuff to improve the ROI.\n- Authors should do as much as possible, mainly a better commit\nmessage. As for this patch, the commit message is \" Adjustment\nto get_cheapest_path_for_pathkeys\" which I don't think matches\nour culture.\n- Authors can provide more refactor code if possible. like 8b26769bc\n<https://github.com/postgres/postgres/commit/8b26769bc441fffa8aad31dddc484c2f4043d2c9>\n.\n- After others reviewers read the patch and think it is good to commit\nwith the rule above, who can mark the commitfest entry as \"Ready\nfor Committer\". Whenever a committer wants some non mental stress,\nthey can pick this and commit this.\n\nActually I also want to know what \"Ready for Committer\" is designed for,\nand when/who can mark a patch as \"Ready for Committer\" ?\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Sep 6, 2023 at 4:06 AM Robert Haas <[email protected]> wrote:On Tue, Sep 5, 2023 at 12:05 PM Aleksander Alekseev\n<[email protected]> wrote:\n> Now when we continue reviewing the patch, could you please elaborate a\n> bit on why you think it's worth committing?\n\nWell, why not? The test he's proposing to move earlier doesn't involve\ncalling a function, so it should be cheaper than the one he's moving\nit past, which does.\n\nI mean, I don't know whether the savings are going to be measurable on\na benchmark, but I guess I don't particularly see why it matters. Why\nwrite a function that says \"this thing is cheaper so we test it first\"\nand then perform a cheaper test afterwards? That's just silly. We can\neither change the comment to say \"we do this first for no reason even\nthough it would be more sensible to do the cheap test first\" or we can\nreorder the tests to match the principle set forth in the existing\ncomment.\n\nI mean, unless there's some reason why it *isn't* cheaper. 
In that\ncase we should have a different conversation... I like this consultation, so +1 from me :) I guess the *valuable* sometimes means the effort we pay is greaterthan the benefit we get, As for this patch, the benefit is not huge (it is possible the compiler already does that). and the most effort we payshould be committer's attention, who needs to grab the patch, write thecorrect commit and credit to the author and push it. I'm not sure if Aleksander is worrying that this kind of patch will grab too much of the committer's attention and I do see there are lots of patches like this.In my opinion, we can do some stuff to improve the ROI. - Authors should do as much as possible, mainly a better commitmessage. As for this patch, the commit message is \" Adjustmentto get_cheapest_path_for_pathkeys\" which I don't think matchesour culture. - Authors can provide more refactor code if possible. like 8b26769bc. - After others reviewers read the patch and think it is good to commitwith the rule above, who can mark the commitfest entry as \"Readyfor Committer\". Whenever a committer wants some non mental stress,they can pick this and commit this. Actually I also want to know what \"Ready for Committer\" is designed for,and when/who can mark a patch as \"Ready for Committer\" ?-- Best RegardsAndy Fan",
"msg_date": "Wed, 6 Sep 2023 14:45:06 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 2:45 PM Andy Fan <[email protected]>\nwrote:uld have a different conversation...\n\n>\n> I like this consultation, so +1 from me :)\n>\n\ns/consultation/conclusion.\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Sep 6, 2023 at 2:45 PM Andy Fan <[email protected]> wrote:uld have a different conversation... I like this consultation, so +1 from me :) s/consultation/conclusion. -- Best RegardsAndy Fan",
"msg_date": "Wed, 6 Sep 2023 14:46:08 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": ">\n> I guess the *valuable* sometimes means the effort we pay is greater\n> than the benefit we get, As for this patch, the benefit is not huge (it\n> is possible the compiler already does that). and the most effort we pay\n> should be committer's attention, who needs to grab the patch, write the\n> correct commit and credit to the author and push it. I'm not sure if\n> Aleksander is worrying that this kind of patch will grab too much of\n> the committer's attention and I do see there are lots of patches like\n> this.\n>\n\nI forget to mention that the past contribution of the author should be a\nfactor as well. Like Richard has provided lots of performance improvement,\nbug fix, code reviews, so I believe more attention from committers should\nbe a reasonable request.\n\n-- \nBest Regards\nAndy Fan\n\nI guess the *valuable* sometimes means the effort we pay is greaterthan the benefit we get, As for this patch, the benefit is not huge (it is possible the compiler already does that). and the most effort we payshould be committer's attention, who needs to grab the patch, write thecorrect commit and credit to the author and push it. I'm not sure if Aleksander is worrying that this kind of patch will grab too much of the committer's attention and I do see there are lots of patches like this. I forget to mention that the past contribution of the author should be a factor as well. Like Richard has provided lots of performance improvement,bug fix, code reviews, so I believe more attention from committers shouldbe a reasonable request. -- Best RegardsAndy Fan",
"msg_date": "Wed, 6 Sep 2023 15:13:39 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 2:45 AM Andy Fan <[email protected]> wrote:\n> I guess the *valuable* sometimes means the effort we pay is greater\n> than the benefit we get, As for this patch, the benefit is not huge (it\n> is possible the compiler already does that). and the most effort we pay\n> should be committer's attention, who needs to grab the patch, write the\n> correct commit and credit to the author and push it. I'm not sure if\n> Aleksander is worrying that this kind of patch will grab too much of\n> the committer's attention and I do see there are lots of patches like\n> this.\n\nVery fair point. However, as you said in your follow-up email, Richard\nGuo has done a lot of good work in this area already, so it makes\nsense to pay a bit more attention to his suggestions.\n\n> In my opinion, we can do some stuff to improve the ROI.\n> - Authors should do as much as possible, mainly a better commit\n> message. As for this patch, the commit message is \" Adjustment\n> to get_cheapest_path_for_pathkeys\" which I don't think matches\n> our culture.\n\nI agree. I don't think the patch submitter is obliged to try to write\na good commit message, but people who contribute regularly or are\nposting large stacks of complex patches are probably well-advised to\ntry. It makes life easier for committers and even for reviewers trying\nto make sense of their patches.\n\n> Actually I also want to know what \"Ready for Committer\" is designed for,\n> and when/who can mark a patch as \"Ready for Committer\" ?\n\nAny reviewer who feels that this is the case. It's not binding on\nanyone; it's an opinion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 08:50:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 8:50 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, Sep 6, 2023 at 2:45 AM Andy Fan <[email protected]> wrote:\n> > I guess the *valuable* sometimes means the effort we pay is greater\n> > than the benefit we get, As for this patch, the benefit is not huge (it\n> > is possible the compiler already does that). and the most effort we pay\n> > should be committer's attention, who needs to grab the patch, write the\n> > correct commit and credit to the author and push it. I'm not sure if\n> > Aleksander is worrying that this kind of patch will grab too much of\n> > the committer's attention and I do see there are lots of patches like\n> > this.\n>\n> Very fair point. However, as you said in your follow-up email, Richard\n> Guo has done a lot of good work in this area already, so it makes\n> sense to pay a bit more attention to his suggestions.\n>\n\nAgreed.\n\n\n>\n> > In my opinion, we can do some stuff to improve the ROI.\n> > - Authors should do as much as possible, mainly a better commit\n> > message. As for this patch, the commit message is \" Adjustment\n> > to get_cheapest_path_for_pathkeys\" which I don't think matches\n> > our culture.\n>\n> I agree. I don't think the patch submitter is obliged to try to write\n> a good commit message, but people who contribute regularly or are\n> posting large stacks of complex patches are probably well-advised to\n> try. It makes life easier for committers and even for reviewers trying\n> to make sense of their patches.\n>\n\nFair enough.\n\n\n> > Actually I also want to know what \"Ready for Committer\" is designed for,\n> > and when/who can mark a patch as \"Ready for Committer\" ?\n>\n> Any reviewer who feels that this is the case. It's not binding on\n> anyone; it's an opinion.\n>\n\nGlad to know that. I have marked myself as a reviewer and mark this entry\nas \"Ready for Committer\".\n\nhttps://commitfest.postgresql.org/44/4286/\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Sep 6, 2023 at 8:50 PM Robert Haas <[email protected]> wrote:On Wed, Sep 6, 2023 at 2:45 AM Andy Fan <[email protected]> wrote:\n> I guess the *valuable* sometimes means the effort we pay is greater\n> than the benefit we get, As for this patch, the benefit is not huge (it\n> is possible the compiler already does that). and the most effort we pay\n> should be committer's attention, who needs to grab the patch, write the\n> correct commit and credit to the author and push it. I'm not sure if\n> Aleksander is worrying that this kind of patch will grab too much of\n> the committer's attention and I do see there are lots of patches like\n> this.\n\nVery fair point. However, as you said in your follow-up email, Richard\nGuo has done a lot of good work in this area already, so it makes\nsense to pay a bit more attention to his suggestions.Agreed. \n\n> In my opinion, we can do some stuff to improve the ROI.\n> - Authors should do as much as possible, mainly a better commit\n> message. As for this patch, the commit message is \" Adjustment\n> to get_cheapest_path_for_pathkeys\" which I don't think matches\n> our culture.\n\nI agree. I don't think the patch submitter is obliged to try to write\na good commit message, but people who contribute regularly or are\nposting large stacks of complex patches are probably well-advised to\ntry. It makes life easier for committers and even for reviewers trying\nto make sense of their patches.Fair enough. 
\n\n> Actually I also want to know what \"Ready for Committer\" is designed for,\n> and when/who can mark a patch as \"Ready for Committer\" ?\n\nAny reviewer who feels that this is the case. It's not binding on\nanyone; it's an opinion. Glad to know that. I have marked myself as a reviewer and mark this entryas \"Ready for Committer\". https://commitfest.postgresql.org/44/4286/ -- Best RegardsAndy Fan",
"msg_date": "Wed, 6 Sep 2023 21:00:20 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 8:50 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, Sep 6, 2023 at 2:45 AM Andy Fan <[email protected]> wrote:\n> > In my opinion, we can do some stuff to improve the ROI.\n> > - Authors should do as much as possible, mainly a better commit\n> > message. As for this patch, the commit message is \" Adjustment\n> > to get_cheapest_path_for_pathkeys\" which I don't think matches\n> > our culture.\n>\n> I agree. I don't think the patch submitter is obliged to try to write\n> a good commit message, but people who contribute regularly or are\n> posting large stacks of complex patches are probably well-advised to\n> try. It makes life easier for committers and even for reviewers trying\n> to make sense of their patches.\n\n\nFair point. So I had a go at writing a commit message for this patch as\nattached. Thanks for all the reviews.\n\nThanks\nRichard",
"msg_date": "Thu, 7 Sep 2023 10:21:31 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "Hi,\n\n>> I agree. I don't think the patch submitter is obliged to try to write\n>> a good commit message, but people who contribute regularly or are\n>> posting large stacks of complex patches are probably well-advised to\n>> try. It makes life easier for committers and even for reviewers trying\n>> to make sense of their patches.\n>\n>\n> Fair point. So I had a go at writing a commit message for this patch as\n> attached. Thanks for all the reviews.\n\n+1 to Robert's and Andy's arguments above. IMO the problem with the\npatch was that it was declared as a performance improvement. In such\ncases we typically ask the authors to prove that the actual\nimprovement took place and that there were no degradations.\n\nIf we consider the patch marley as a refactoring that improves the\nreadability I see no reason not to merge it.\n\nv2 LGTM.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 7 Sep 2023 12:32:39 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Thu, Sep 7, 2023 at 5:32 AM Aleksander Alekseev\n<[email protected]> wrote:\n> v2 LGTM.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Sep 2023 14:42:00 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 2:42 AM Robert Haas <[email protected]> wrote:\n\n> Committed.\n\n\nThanks for pushing it!\n\nThanks\nRichard\n\nOn Fri, Sep 8, 2023 at 2:42 AM Robert Haas <[email protected]> wrote:\nCommitted.Thanks for pushing it!ThanksRichard",
"msg_date": "Fri, 8 Sep 2023 14:08:34 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A minor adjustment to get_cheapest_path_for_pathkeys"
}
] |
[
{
"msg_contents": "Over in [1], Horiguchisan mentioned a few things about VACUUM's new\nBUFFER_USAGE_LIMIT option.\n\n1) buffer_usage_limit in the ERROR messages should be consistently in uppercase.\n2) defGetString() already checks for opt->args == NULL and raises an\nERROR when it is.\n\nI suspect that Melanie might have followed the lead of the PARALLEL\noption when she was working on adding the BUFFER_USAGE_LIMIT patch.\n\nWhat I'm wondering now is:\n\na) Is it worth changing this for the PARALLEL option too? and;\nb) I see we lose the parse position indicator, and;\nc) If we did want to change this, is it too late for v16?\n\nFor a), I know that's much older code, so perhaps it's not worth\nmessing around with the ERROR messages for this. For b), this seems\nlike a fairly minor detail given that VACUUM commands are fairly\nsimple. It shouldn't be too hard for a user to see what we're talking\nabout.\n\nI've attached a patch to adjust this.\n\nDavid\n\n[1] https://postgr.es/m/[email protected]",
"msg_date": "Tue, 11 Apr 2023 20:00:35 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR messages in VACUUM's PARALLEL option"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> Over in [1], Horiguchisan mentioned a few things about VACUUM's new\n> BUFFER_USAGE_LIMIT option.\n\n> 1) buffer_usage_limit in the ERROR messages should be consistently in uppercase.\n\nFWIW, I think this is exactly backward, and so is whatever code you\nbased this on. Our usual habit is to write GUC names and suchlike\nin lower case in error messages. Add double quotes if you feel you\nwant to set them off from the surrounding text. Here's a typical\nexample of longstanding style:\n\nregression=# set max_parallel_workers_per_gather to -1;\nERROR: -1 is outside the valid range for parameter \"max_parallel_workers_per_gather\" (0 .. 1024)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 09:58:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR messages in VACUUM's PARALLEL option"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 4:00 AM David Rowley <[email protected]> wrote:\n>\n> Over in [1], Horiguchisan mentioned a few things about VACUUM's new\n> BUFFER_USAGE_LIMIT option.\n>\n> 1) buffer_usage_limit in the ERROR messages should be consistently in uppercase.\n\nI did notice that all the other VACUUM options don't do this:\n\npostgres=# vacuum (process_toast 'boom') foo;\nERROR: process_toast requires a Boolean value\npostgres=# vacuum (full 2) foo;\nERROR: full requires a Boolean value\npostgres=# vacuum (verbose 2) foo;\nERROR: verbose requires a Boolean value\n\nThough, I actually prefer uppercase. They are all documented in\nuppercase, so I don't see why they would all be lowercase in the error\nmessages. Additionally, for BUFFER_USAGE_LIMIT, I find the uppercase\nhelpful to differentiate it from the GUC vacuum_buffer_usage_limit.\n\n> 2) defGetString() already checks for opt->args == NULL and raises an\n> ERROR when it is.\n>\n> I suspect that Melanie might have followed the lead of the PARALLEL\n> option when she was working on adding the BUFFER_USAGE_LIMIT patch.\n\nYes, I mostly imitated parallel since it was the other VACUUM option\nwith a valid range (as opposed to a boolean or enum of acceptable\nvalues).\n\n> What I'm wondering now is:\n>\n> a) Is it worth changing this for the PARALLEL option too? and;\n> b) I see we lose the parse position indicator, and;\n\nI'm not too worried about the parse position indicator, as we don't get\nit for the majority of the errors about valid values. And, as you say\nlater in your email, the VACUUM options are pretty simple so it should\nbe easy for the user to figure out without a parse position indicator.\n\npostgres=# vacuum (SKIP_LOCKED 2) foo;\nERROR: skip_locked requires a Boolean value\n\nWhile trying different combinations, I noticed that defGetInt32 produces\na somewhat unsatisfactory error message when I provide an argument which\nwould overflow an int32\n\npostgres=# vacuum (parallel 3333333333333333333333) foo;\nERROR: parallel requires an integer value\n\nIn defGetInt32(), the def->arg nodeTag is T_Float, which is why we get\nthis error message. Perhaps it is worth checking somewhere in the stack\nfor integer overflow (instead of assuming it is a float) and giving a\nmore helpful message like the one from parse_int()?\n\n if (val > INT_MAX || val < INT_MIN)\n {\n if (hintmsg)\n *hintmsg = gettext_noop(\"Value exceeds integer range.\");\n return false;\n }\n\nProbably not a trivial task and not one for 16, however.\n\nI will say that I prefer the new error message introduced by this\npatch's usage of defGetInt32() when an argument is not specified to the\nprevious error message.\n\npostgres=# vacuum (parallel) foo;\nERROR: parallel requires an integer value\n\nOn Tue, Apr 11, 2023 at 9:58 AM Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Over in [1], Horiguchisan mentioned a few things about VACUUM's new\n> > BUFFER_USAGE_LIMIT option.\n>\n> > 1) buffer_usage_limit in the ERROR messages should be consistently in uppercase.\n>\n> FWIW, I think this is exactly backward, and so is whatever code you\n> based this on. Our usual habit is to write GUC names and suchlike\n> in lower case in error messages. Add double quotes if you feel you\n> want to set them off from the surrounding text. 
Here's a typical\n> example of longstanding style:\n>\n> regression=# set max_parallel_workers_per_gather to -1;\n> ERROR: -1 is outside the valid range for parameter \"max_parallel_workers_per_gather\" (0 .. 1024)\n\nIt seems it also is true for VACUUM options currently, but you didn't\nmention command options. Do you also feel these should be lowercase?\n\n- Melanie\n\n\n",
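To make the overflow point above concrete, here is a minimal, self-contained sketch of the kind of range check Melanie quotes from parse_int(): reject anything outside the int32 range and hand back a hint, instead of falling through to the generic "requires an integer value" wording. The function and message strings are invented for the example; this is not the code in define.c or guc.c.

```c
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Parse "value" as a 32-bit integer.  On success store the result in
 * *result and return true.  On failure return false and point *hint at
 * a message, distinguishing "not an integer" from "out of range".
 * Self-contained sketch only, not PostgreSQL's parse_int()/defGetInt32().
 */
static bool
parse_int32_option(const char *value, int *result, const char **hint)
{
    char   *endptr;
    long    val;

    errno = 0;
    val = strtol(value, &endptr, 10);

    if (endptr == value || *endptr != '\0')
    {
        *hint = "not a valid integer";
        return false;
    }
    if (errno == ERANGE || val > INT_MAX || val < INT_MIN)
    {
        *hint = "Value exceeds integer range.";
        return false;
    }

    *result = (int) val;
    return true;
}

int
main(void)
{
    const char *hint = NULL;
    int         parallel;

    /* The value from the example above overflows and gets the range hint */
    if (!parse_int32_option("3333333333333333333333", &parallel, &hint))
        printf("parallel option rejected: %s\n", hint);
    return 0;
}
```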
"msg_date": "Tue, 11 Apr 2023 11:36:46 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR messages in VACUUM's PARALLEL option"
},
{
"msg_contents": "(On Wed, 12 Apr 2023 at 01:58, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Over in [1], Horiguchisan mentioned a few things about VACUUM's new\n> > BUFFER_USAGE_LIMIT option.\n>\n> > 1) buffer_usage_limit in the ERROR messages should be consistently in uppercase.\n>\n> FWIW, I think this is exactly backward, and so is whatever code you\n> based this on. Our usual habit is to write GUC names and suchlike\n> in lower case in error messages. Add double quotes if you feel you\n> want to set them off from the surrounding text. Here's a typical\n> example of longstanding style:\n\nThanks for chipping in on this. Can you confirm if you think this\nshould apply to VACUUM options? We're not talking GUCs here.\n\nI think there might be some precedents creeping in here for the legacy\nVACUUM syntax. Roughly the current ERROR messages are using upper\ncase. e.g:\n\n\"PROCESS_TOAST required with VACUUM FULL\"\n\"VACUUM option DISABLE_PAGE_SKIPPING cannot be used with FULL\"\n\"ANALYZE option must be specified when a column list is provided\"\n\nIt's quite possible the newer options copied what VACUUM FULL did, and\nVACUUM FULL was probably upper case from before we had the newer\nsyntax with the options in parenthesis.\n\nI did notice that defGetString() did lower case, but that not exactly\nterrible. It seems worse to have ExecVacuum() use a random assortment\nof casings when reporting errors in the specified options. Unless\nsomeone thinks are should edit all of those to lower case, then I\nthink the best option is just to make BUFFER_USAGE_LIMIT follow the\nexisting precedent of upper casing.\n\nDavid\n\n\n",
"msg_date": "Wed, 12 Apr 2023 09:57:30 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR messages in VACUUM's PARALLEL option"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> Thanks for chipping in on this. Can you confirm if you think this\n> should apply to VACUUM options? We're not talking GUCs here.\n\nMy druthers would be to treat them similarly to GUCs.\nI recognize that I might be in the minority, and that doing\nso would entail touching a lot of existing messages in this\narea. Nonetheless, I think this area is not being consistent\nwith our wider conventions, which is why for example defGetString\nis out of step with these messages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 18:16:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR messages in VACUUM's PARALLEL option"
},
{
"msg_contents": "At Tue, 11 Apr 2023 18:16:40 -0400, Tom Lane <[email protected]> wrote in \n> David Rowley <[email protected]> writes:\n> > Thanks for chipping in on this. Can you confirm if you think this\n> > should apply to VACUUM options? We're not talking GUCs here.\n> \n> My druthers would be to treat them similarly to GUCs.\n\nIMHO I like this direction.\n\n> I recognize that I might be in the minority, and that doing\n> so would entail touching a lot of existing messages in this\n> area. Nonetheless, I think this area is not being consistent\n> with our wider conventions, which is why for example defGetString\n> is out of step with these messages.\n\nOn the other hand, the documentation write optinos in uppercase\n[1]. It is why I did not push hard for the normalization.\n\n[1] https://www.postgresql.org/docs/devel/sql-vacuum.html\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 12 Apr 2023 10:21:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR messages in VACUUM's PARALLEL option"
}
] |
[
{
"msg_contents": "hi hackers,\n\nwhile working on the issue reported by Noah in [1], I realized that there is an\nissue in 035_standby_logical_decoding.pl.\n\nThe issue is here:\n\n\"\n$node_standby->reload;\n\n$node_primary->psql('postgres', q[CREATE DATABASE testdb]);\n$node_primary->safe_psql('testdb', qq[CREATE TABLE decoding_test(x integer, y text);]);\n\n# create the logical slots\ncreate_logical_slots($node_standby, 'promotion_');\n\n# create the logical slots on the cascading standby too\ncreate_logical_slots($node_cascading_standby, 'promotion_');\n\"\n\nWe are not waiting for the standby/cascading standby to catchup (so that the create\ndatabase get replicated) before creating the replication slots (in testdb).\n\nWhile, It's still not 100% sure that it will fix Noah's issue, I think this has to be fixed.\n\nPlease find, attached a patch proposal to do so.\n\nRegards,\n\n[1]: https://www.postgresql.org/message-id/20230411053657.GA1177147%40rfd.leadboat.com\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 11 Apr 2023 12:29:45 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Missing wait_for_replay_catchup in 035_standby_logical_decoding.pl"
}
] |
[
{
"msg_contents": "Over on [1], Tom mentioned that we might want to rethink the decision\nto not protect chunk headers with Valgrind. That thread fixed a bug\nthat was accessing array element -1, which effectively was reading the\nMemoryChunk at the start of the allocated chunk as an array element.\n\nI wrote a patch to adjust the Valgrind macros to mark the MemoryChunks\nas NOACCESS and that finds the bug reported on that thread (with the\nfix for it reverted).\n\nI didn't quite get a clear run at committing the changes during the\nv16 cycle, but wondering since they're really just Valgrind macro\nchanges if anyone would object to doing it now?\n\nI know there are a few people out there running sqlsmith and/or\nsqlancer under Valgrind. It would be good to have this in so we could\naddress any new issues the attached patch might help them highlight.\n\nAny objections?\n\n(Copying in Tom and Richard same as original thread. Reposting for\nmore visibility of this change)\n\nDavid",
"msg_date": "Wed, 12 Apr 2023 01:28:08 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Protecting allocator headers with Valgrind"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 9:28 PM David Rowley <[email protected]> wrote:\n\n> Over on [1], Tom mentioned that we might want to rethink the decision\n> to not protect chunk headers with Valgrind. That thread fixed a bug\n> that was accessing array element -1, which effectively was reading the\n> MemoryChunk at the start of the allocated chunk as an array element.\n\n\nSeems the link to the original thread is not pasted. Here it is.\n\n[1] https://www.postgresql.org/message-id/1650235.1672694719%40sss.pgh.pa.us\n\nThanks\nRichard\n\nOn Tue, Apr 11, 2023 at 9:28 PM David Rowley <[email protected]> wrote:Over on [1], Tom mentioned that we might want to rethink the decision\nto not protect chunk headers with Valgrind. That thread fixed a bug\nthat was accessing array element -1, which effectively was reading the\nMemoryChunk at the start of the allocated chunk as an array element.Seems the link to the original thread is not pasted. Here it is.[1] https://www.postgresql.org/message-id/1650235.1672694719%40sss.pgh.pa.usThanksRichard",
"msg_date": "Wed, 12 Apr 2023 09:59:22 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protecting allocator headers with Valgrind"
},
{
"msg_contents": "On Wed, 12 Apr 2023 at 01:28, David Rowley <[email protected]> wrote:\n> Any objections?\n\nIt seems there are none. I'll have another look at the patch tomorrow\nwith the aim to get it in.\n\n(Unless someone objects to me doing that before then)\n\nDavid\n\n\n",
"msg_date": "Fri, 14 Apr 2023 00:24:20 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protecting allocator headers with Valgrind"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 01:28:08AM +1200, David Rowley wrote:\n> Any objections?\n\nNot objecting. I think the original Valgrind integration refrained from this\nbecause it would have added enough Valgrind client requests to greatly slow\nValgrind runs. Valgrind reduced the cost of client requests in later years,\nso this new conclusion is reasonable.\n\n\n",
"msg_date": "Sat, 15 Apr 2023 08:25:58 -0700",
"msg_from": "Noah Misch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Protecting allocator headers with Valgrind"
},
{
"msg_contents": "On Sun, 16 Apr 2023 at 03:26, Noah Misch <[email protected]> wrote:\n> Not objecting. I think the original Valgrind integration refrained from this\n> because it would have added enough Valgrind client requests to greatly slow\n> Valgrind runs. Valgrind reduced the cost of client requests in later years,\n> so this new conclusion is reasonable.\n\nI tested that. It's not much slowdown:\n\ntime make installcheck\n\nUnpatched: real 79m36.458s\nPatched: real 81m31.589s\n\nI forgot to mention, I pushed the patch yesterday.\n\nDavid\n\n\n",
"msg_date": "Sun, 16 Apr 2023 17:29:34 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Protecting allocator headers with Valgrind"
}
] |
[
{
"msg_contents": "Reduced from sqlsmith, this query fails under debug_parallel_query=1.\n\nThe elog was added at: 55416b26a98fcf354af88cdd27fc2e045453b68a\nBut (I'm not sure) the faulty commit may be 8edd0e7946 (Suppress Append\nand MergeAppend plan nodes that have a single child).\n\npostgres=# SET force_parallel_mode =1; CREATE TABLE x (i int) PARTITION BY RANGE (i); CREATE TABLE x1 PARTITION OF x DEFAULT ;\n select * from pg_class,\n lateral (select pg_catalog.bit_and(1)\n from pg_class as sample_1\n where case when EXISTS (\n select 1 from x\n where EXISTS (\n select 1 from pg_catalog.pg_depend\n where (sample_1.reltuples is NULL)\n )) then 1 end\n is NULL)x\nwhere false;\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 09:25:44 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "v12: ERROR: subplan \"InitPlan 2 (returns $4)\" was not initialized"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> postgres=# SET force_parallel_mode =1; CREATE TABLE x (i int) PARTITION BY RANGE (i); CREATE TABLE x1 PARTITION OF x DEFAULT ;\n> select * from pg_class,\n> lateral (select pg_catalog.bit_and(1)\n> from pg_class as sample_1\n> where case when EXISTS (\n> select 1 from x\n> where EXISTS (\n> select 1 from pg_catalog.pg_depend\n> where (sample_1.reltuples is NULL)\n> )) then 1 end\n> is NULL)x\n> where false;\n\nInteresting. The proximate cause is that we end up with a subplan\nthat is marked parallel_safe, but it has an initplan that is not\nparallel_safe. The parallel worker receives, and tries to initialize,\nthe parallel_safe subplan, and falls over because of its reference\nto the unsafe subplan -- which was not transmitted to the worker.\n\nActually, because of the policy installed by commit ab77a5a45, the\nmere fact of having an initplan should be enough to disqualify\nthe first subplan from being marked parallel-safe.\n\nI dug around and found the culprit: setrefs.c's\nclean_up_removed_plan_level() moves initplans down from a parent\nto a child plan node, but it forgot the possibility that the\nchild plan node had been marked parallel_safe before that and\nmust not be anymore.\n\nThe v1 patch attached is enough to fix the immediate issue,\nbut there's another thing not to like, which is that we're also\ndiscarding the costs associated with the initplans. That's\nstrictly cosmetic given that all the planning decisions are\nalready made, but it still seems potentially annoying if you're\ntrying to understand EXPLAIN output. So I'm inclined to instead\ndo something like v2 attached, which deals with that as well.\n(On the other hand, we aren't bothering to fix up costs when\nwe move initplans around in materialize_finished_plan or\nstandard_planner ... so maybe that should be left for a patch\nthat fixes those things too.)\n\nAnother thing worth wondering about is whether we can't loosen\ncommit ab77a5a45's policy that having an initplan is enough\nto make you parallel-unsafe. In the wake of later fixes,\nnotably 5e6d8d2bb, it seems like maybe we could allow that\nas long as the initplans themselves are parallel-safe. That\nwouldn't be material for back-patching though, so I'll worry\nabout it later.\n\nNot sure what if anything to do about a test case. I'm not\nexcited about memorializing the specific case found by sqlsmith,\nbecause it seems only very accidental that it exposes this\nproblem. I found that there are existing regression tests\nthat exercise the situation where clean_up_removed_plan_level\ngenerates an incorrectly-marked plan, but there is accidentally\nno bad effect. (The planner itself isn't going to be making\nany further decisions with the bogus info; it's only\nExecSerializePlan that pays attention to the flag, and we'd only\nnotice in this specific cross-reference situation.) Also, any\nchange we make along the lines speculated about in the previous\npara would be highly likely to break a test case, in the sense\nthat it'd no longer exercise the previously-failing scenario.\nSo on the whole I'm inclined not to bother with a new test case.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 11 Apr 2023 15:59:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v12: ERROR: subplan \"InitPlan 2 (returns $4)\" was not initialized"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 3:59 AM Tom Lane <[email protected]> wrote:\n\n> The v1 patch attached is enough to fix the immediate issue,\n> but there's another thing not to like, which is that we're also\n> discarding the costs associated with the initplans. That's\n> strictly cosmetic given that all the planning decisions are\n> already made, but it still seems potentially annoying if you're\n> trying to understand EXPLAIN output. So I'm inclined to instead\n> do something like v2 attached, which deals with that as well.\n> (On the other hand, we aren't bothering to fix up costs when\n> we move initplans around in materialize_finished_plan or\n> standard_planner ... so maybe that should be left for a patch\n> that fixes those things too.)\n\n\n+1 to the v2 patch.\n\n* Should we likewise set the parallel_safe flag for topmost plan in\nSS_attach_initplans?\n\n* In standard_planner around line 443, we move any initPlans from\ntop_plan to the new added Gather node. But since we know that the\ntop_plan is parallel_safe here, shouldn't it have no initPlans?\n\nThanks\nRichard\n\nOn Wed, Apr 12, 2023 at 3:59 AM Tom Lane <[email protected]> wrote:\nThe v1 patch attached is enough to fix the immediate issue,\nbut there's another thing not to like, which is that we're also\ndiscarding the costs associated with the initplans. That's\nstrictly cosmetic given that all the planning decisions are\nalready made, but it still seems potentially annoying if you're\ntrying to understand EXPLAIN output. So I'm inclined to instead\ndo something like v2 attached, which deals with that as well.\n(On the other hand, we aren't bothering to fix up costs when\nwe move initplans around in materialize_finished_plan or\nstandard_planner ... so maybe that should be left for a patch\nthat fixes those things too.)+1 to the v2 patch.* Should we likewise set the parallel_safe flag for topmost plan inSS_attach_initplans?* In standard_planner around line 443, we move any initPlans fromtop_plan to the new added Gather node. But since we know that thetop_plan is parallel_safe here, shouldn't it have no initPlans?ThanksRichard",
"msg_date": "Wed, 12 Apr 2023 14:01:37 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v12: ERROR: subplan \"InitPlan 2 (returns $4)\" was not initialized"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Wed, Apr 12, 2023 at 3:59 AM Tom Lane <[email protected]> wrote:\n>> The v1 patch attached is enough to fix the immediate issue,\n>> but there's another thing not to like, which is that we're also\n>> discarding the costs associated with the initplans. That's\n>> strictly cosmetic given that all the planning decisions are\n>> already made, but it still seems potentially annoying if you're\n>> trying to understand EXPLAIN output. So I'm inclined to instead\n>> do something like v2 attached, which deals with that as well.\n>> (On the other hand, we aren't bothering to fix up costs when\n>> we move initplans around in materialize_finished_plan or\n>> standard_planner ... so maybe that should be left for a patch\n>> that fixes those things too.)\n\n> +1 to the v2 patch.\n\nThanks for looking at this. After sleeping on it, I'm inclined\nto use the v1 patch in the back branches and do the cost fixups\nonly in HEAD.\n\n> * Should we likewise set the parallel_safe flag for topmost plan in\n> SS_attach_initplans?\n\nSS_attach_initplans is assuming that costs and parallel safety\nalready got dealt with, either by SS_charge_for_initplans or by\nequivalent processing during create_plan. I did have an Assert\nthere about parallel_safe already being off in a draft version\nof this patch, but dropped it after realizing that it'd have to\ngo away anyway when we fix things to allow parallel-safe initplans.\n(I have a draft patch for that that I'll post separately.)\n\nWe could improve the comment for SS_attach_initplans to explicitly\nmention parallel safety, though. Also, I'd better go look at the\ncreate_plan code paths to make sure they are indeed accounting\nfor this.\n\n> * In standard_planner around line 443, we move any initPlans from\n> top_plan to the new added Gather node. But since we know that the\n> top_plan is parallel_safe here, shouldn't it have no initPlans?\n\nRight. Again, I did have such an Assert there in a draft version,\nthen decided it wasn't useful to change temporarily. However, the\nfollow-on patch removes that stanza altogether, and I suppose it\nmight as well remove an Assert as what's there now. I'll make it so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 08:34:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v12: ERROR: subplan \"InitPlan 2 (returns $4)\" was not initialized"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 00:34, Tom Lane <[email protected]> wrote:\n> Thanks for looking at this. After sleeping on it, I'm inclined\n> to use the v1 patch in the back branches and do the cost fixups\n> only in HEAD.\n\nI'm also fine with v1 for the back branches.\n\nDavid\n\n\n",
"msg_date": "Thu, 13 Apr 2023 09:36:01 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v12: ERROR: subplan \"InitPlan 2 (returns $4)\" was not initialized"
}
] |
[
{
"msg_contents": "Hi,\n\nI've attached a patch with a few typo and grammatical fixes.\n\nRegards\n\nThom",
"msg_date": "Tue, 11 Apr 2023 15:36:02 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Various typo fixes"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 03:36:02PM +0100, Thom Brown wrote:\n> I've attached a patch with a few typo and grammatical fixes.\n\nI think you actually sent the \"git-diff\" manpage :(\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 09:39:09 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "On Tue, 11 Apr 2023 at 15:39, Justin Pryzby <[email protected]> wrote:\n>\n> On Tue, Apr 11, 2023 at 03:36:02PM +0100, Thom Brown wrote:\n> > I've attached a patch with a few typo and grammatical fixes.\n>\n> I think you actually sent the \"git-diff\" manpage :(\n\nOh dear, well that's a first. Thanks for pointing out.\n\nRe-attached.\n\nThom",
"msg_date": "Tue, 11 Apr 2023 15:43:12 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 03:43:12PM +0100, Thom Brown wrote:\n> On Tue, 11 Apr 2023 at 15:39, Justin Pryzby <[email protected]> wrote:\n> >\n> > On Tue, Apr 11, 2023 at 03:36:02PM +0100, Thom Brown wrote:\n> > > I've attached a patch with a few typo and grammatical fixes.\n> >\n> > I think you actually sent the \"git-diff\" manpage :(\n> \n> Oh dear, well that's a first. Thanks for pointing out.\n\nThanks. I think these are all new in v16, right ?\n\nI noticed some of these too - I'll send a patch pretty soon.\n\n|+++ b/doc/src/sgml/logicaldecoding.sgml\n|@@ -326,11 +326,11 @@ postgres=# select * from pg_logical_slot_get_changes('regression_slot', NULL, NU\n| connection is alive (for example a node restart would break it). Then, the\n| primary may delete system catalog rows that could be needed by the logical\n| decoding on the standby (as it does not know about the catalog_xmin on the\n|- standby). Existing logical slots on standby also get invalidated if wal_level\n|- on primary is reduced to less than 'logical'. This is done as soon as the\n|- standby detects such a change in the WAL stream. It means, that for walsenders\n|- that are lagging (if any), some WAL records up to the wal_level parameter change\n|- on the primary won't be decoded.\n|+ standby). Existing logical slots on standby also get invalidated if\n|+ <varname>wal_level</varname> on the primary is reduced to less than 'logical'.\n|+ This is done as soon as the standby detects such a change in the WAL stream.\n|+ It means that, for walsenders which are lagging (if any), some WAL records up\n|+ to the wal_level parameter change on the primary won't be decoded.\n| </para>\n\nI think \"logical\" should be a <literal> here.\n\n\n",
"msg_date": "Tue, 11 Apr 2023 09:53:00 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "> On 11 Apr 2023, at 16:53, Justin Pryzby <[email protected]> wrote:\n\n> I think \"logical\" should be a <literal> here.\n\nAgree, it should in order to be consistent.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 11 Apr 2023 23:12:58 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 09:53:00AM -0500, Justin Pryzby wrote:\n> On Tue, Apr 11, 2023 at 03:43:12PM +0100, Thom Brown wrote:\n> > On Tue, 11 Apr 2023 at 15:39, Justin Pryzby <[email protected]> wrote:\n> > >\n> > > On Tue, Apr 11, 2023 at 03:36:02PM +0100, Thom Brown wrote:\n> > > > I've attached a patch with a few typo and grammatical fixes.\n> > >\n> > > I think you actually sent the \"git-diff\" manpage :(\n> > \n> > Oh dear, well that's a first. Thanks for pointing out.\n> \n> Thanks. I think these are all new in v16, right ?\n> \n> I noticed some of these too - I'll send a patch pretty soon.\n\nThe first attachment fixes for typos in user-facing docs new in v16,\ncombining Thom's changes with the ones that I'd found. If that's\nconfusing, I'll resend my patches separately.\n\nThe other four numbered patches could use extra review.\n\n-- \nJustin",
"msg_date": "Tue, 11 Apr 2023 17:15:29 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 11:12:58PM +0200, Daniel Gustafsson wrote:\n> > On 11 Apr 2023, at 16:53, Justin Pryzby <[email protected]> wrote:\n> \n> > I think \"logical\" should be a <literal> here.\n> \n> Agree, it should in order to be consistent.\n\nIndeed.\n\n+ to the wal_level parameter change on the primary won't be decoded.\n\nThis wal_level should also have a markup.\n\n Number of uses of logical slots in this database that have been\n- canceled due to old snapshots or a too low <xref linkend=\"guc-wal-level\"/>\n+ canceled due to old snapshots or too low a <xref linkend=\"guc-wal-level\"/>\n\nThis sounds a bit strange to me. A too low wal_level would be a cause\nfor a cancel, hence shouldn't this be \"canceled due to old snapshots\nor due to a too low guc-wal-level?\n--\nMichael",
"msg_date": "Wed, 12 Apr 2023 12:28:25 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 12:28:25PM +0900, Michael Paquier wrote:\n> On Tue, Apr 11, 2023 at 11:12:58PM +0200, Daniel Gustafsson wrote:\n> > > On 11 Apr 2023, at 16:53, Justin Pryzby <[email protected]> wrote:\n> > \n> > > I think \"logical\" should be a <literal> here.\n> > \n> > Agree, it should in order to be consistent.\n> \n> Indeed.\n> \n> + to the wal_level parameter change on the primary won't be decoded.\n> \n> This wal_level should also have a markup.\n> \n> Number of uses of logical slots in this database that have been\n> - canceled due to old snapshots or a too low <xref linkend=\"guc-wal-level\"/>\n> + canceled due to old snapshots or too low a <xref linkend=\"guc-wal-level\"/>\n> \n> This sounds a bit strange to me. A too low wal_level would be a cause\n> for a cancel, hence shouldn't this be \"canceled due to old snapshots\n> or due to a too low guc-wal-level?\n\nThat's the same as the original language which Thom and I are requesting\nto change, (but you added another \"due to\").\n\n\"a too low\" is poor english. It's good enough for a code comment, but\nthis is a user-facing doc.\n\nIt could be \"an inadequate wal-level\" or \"a prohibitively low\nwal-level\", but Thom's language is better. \"too low a wal-level\" means\nthe same thing as \"too low of a wal-level\" (which would also be fine).\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 22:44:43 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 05:15:29PM -0500, Justin Pryzby wrote:\n> The first attachment fixes for typos in user-facing docs new in v16,\n> combining Thom's changes with the ones that I'd found. If that's\n> confusing, I'll resend my patches separately.\n> \n> The other four numbered patches could use extra review.\n\nIn v16-typos.diff..\n\n- <literal>buffered</literal>, the decoding will stream or serialize\n+ <literal>buffered</literal>, decoding will stream or serialize\n\nThe could be referred as \"the decoding context\", as well?\n\n- not starting with a <literal>.</literal> and ending with\n- <literal>.conf</literal> will be included. Multiple files within an include\n+ ending with <literal>.conf</literal> and not starting with a <literal>.</literal>\n+ will be included. Multiple files within an include\n\nIn 0001.. Not sure that this is an improvement, switching the\nstarting and ending parts.\n\n- include records. These records only contain two fields:\n+ include directives. Include directives only contain two fields:\n[...]\n- included. The file or directory can be a relative of absolute path, and can\n+ included. The file or directory can be a relative or absolute path, and can\n\nYep, indeed.\n\nWe've discussed quite a lot about the current wording that 0004 aims\nto change, FWIW.\n\nI have applied a first batch of fixes that relate to the areas\nintroduced by myself, plus a few extras. The changes for\npg_log_standby_snapshot() mostly left out for now (except one simple\nchange in logicaldecoding.sgml).\n\nMost of the changes in 0002 and 0003 seem rather OK at quick glance,\nbut perhaps their respective authors would like to weigh in.\n--\nMichael",
"msg_date": "Wed, 12 Apr 2023 13:05:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 10:44:43PM -0500, Justin Pryzby wrote:\n> It could be \"an inadequate wal-level\" or \"a prohibitively low\n> wal-level\", but Thom's language is better. \"too low a wal-level\" means\n> the same thing as \"too low of a wal-level\" (which would also be fine).\n\nI have been studying more this point, and you are right that this is\nmuch better. So applied with this wording, after adding more markups\nwhere these were needed.\n--\nMichael",
"msg_date": "Fri, 14 Apr 2023 13:13:09 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Various typo fixes"
}
] |
[
{
"msg_contents": "Hi,\n\nI was looking at this issue: \nhttps://www.postgresql.org/message-id/flat/17888-f72930e6b5ce8c14%40postgresql.org\n\npfree call on contrib/intarray/_int_gist.c:345\n\n```\n\n if (in != (ArrayType *) DatumGetPointer(entry->key))\n pfree(in);\n\n```\n\nleads to BogusFree function call and server crash.\n\nThis is when we use array size of 1001.\n\nIf we increase array size to 3001, we get index size error,\n\nNOTICE: input array is too big (199 maximum allowed, 3001 current), use \ngist__intbig_ops opclass instead\nERROR: index row requires 12040 bytes, maximum size is 8191\n\nWhen array size is 1001 is concerned, we raise elog about input array is \ntoo big, multiple of times. Wouldn't it make more sense to error out \ninstead of proceeding further and getting bogus pointer errors messages \n(as we do when size is 3001)?\n\nChanging elog to ereport makes behavior consistent (between array size \nof 1001 vs 3001), so I have\n\nattached a patch for that.\n\nIt errors out like this:\n\nERROR: input array is too big (199 maximum allowed, 1001 current), use \ngist__intbig_ops opclass instead\n\nThis is same error which was raised as notice earlier.\n\nLet me know if it makes sense.\n\n\nAlso, comments on BogusFree mentions `As a possible\naid in debugging, we report the header word along with the pointer\naddress`. How can we interpret useful debugging information from this?\n\n`pfree called with invalid pointer 0x7ff1706d0030 (header \n0x4fc8000100000000)`\n\n\nRegards,\n\nAnkit",
"msg_date": "Tue, 11 Apr 2023 21:19:12 +0530",
"msg_from": "Ankit Kumar Pandey <[email protected]>",
"msg_from_op": true,
"msg_subject": "[BUG #17888] Incorrect memory access in gist__int_ops for an input\n array with many elements"
},
{
"msg_contents": "On Wed, 12 Apr 2023 at 03:49, Ankit Kumar Pandey <[email protected]> wrote:\n> Also, comments on BogusFree mentions `As a possible\n> aid in debugging, we report the header word along with the pointer\n> address`. How can we interpret useful debugging information from this?\n>\n> `pfree called with invalid pointer 0x7ff1706d0030 (header\n> 0x4fc8000100000000)`\n\nelog(ERROR)s are not meant to happen. ISTM, what's there is about the\nbest that can be done with our current infrastructure. If that occurs\non some machine that we can't get access to debug on, then having the\nheader bits might be useful, at least, certainly much more useful than\njust not having them at all.\n\nIf you can think of something more useful to put in the elog, then we\ncould consider changing it to improve it.\n\nJust in case you suggest it, I don't believe it's wise to try and\nsplit it out into the components of MemoryChunk's hdrmask.\nMemoryContexts aren't forced into using that. They're only forced into\nusing the 3 least significant bits for the MemoryContextMethodID.\n\nDavid\n\n\n",
"msg_date": "Wed, 12 Apr 2023 23:48:08 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUG #17888] Incorrect memory access in gist__int_ops for an\n input array with many elements"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile playing with a new single board computer (VisionFive 2) I\ndiscovered that postgresql:unsafe_tests suite fails like this:\n\n```\n--- /home/user/projects/postgresql/src/test/modules/unsafe_tests/expected/rolenames.out\n2023-04-11 14:58:57.844550612 +0000\n+++ /home/user/projects/postgresql/build/testrun/unsafe_tests/regress/results/rolenames.out\n 2023-04-11 17:54:22.999024391 +0000\n@@ -53,6 +53,7 @@\n CREATE ROLE \"current_user\";\n CREATE ROLE \"session_user\";\n CREATE ROLE \"user\";\n+ERROR: role \"user\" already exists\n RESET client_min_messages;\n CREATE ROLE current_user; -- error\n ERROR: CURRENT_USER cannot be used as a role name here\n@@ -1089,4 +1090,5 @@\n DROP OWNED BY regress_testrol0, \"Public\", \"current_role\",\n\"current_user\", regress_testrol1, regress_testrol2, regress_testrolx\nCASCADE;\n DROP ROLE regress_testrol0, regress_testrol1, regress_testrol2,\nregress_testrolx;\n DROP ROLE \"Public\", \"None\", \"current_role\", \"current_user\",\n\"session_user\", \"user\";\n+ERROR: current user cannot be dropped\n DROP ROLE regress_role_haspriv, regress_role_nopriv;\n```\n\nThis happens because the developers of this SBC choose the default\nusername \"user\", which I had no reason to change.\n\nTest merely checks that we can distinguish a username \"user\" from the\nUSER keyword. Maybe it's worth replacing \"user\" with \"system_user\"? It\nis also a keyword but is a less likely choice for the OS user name.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 11 Apr 2023 21:25:30 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Use role name \"system_user\" instead of \"user\" for\n unsafe_tests"
},
{
"msg_contents": "On 2023-04-11 Tu 14:25, Aleksander Alekseev wrote:\n> Hi,\n>\n> While playing with a new single board computer (VisionFive 2) I\n> discovered that postgresql:unsafe_tests suite fails like this:\n>\n> ```\n> --- /home/user/projects/postgresql/src/test/modules/unsafe_tests/expected/rolenames.out\n> 2023-04-11 14:58:57.844550612 +0000\n> +++ /home/user/projects/postgresql/build/testrun/unsafe_tests/regress/results/rolenames.out\n> 2023-04-11 17:54:22.999024391 +0000\n> @@ -53,6 +53,7 @@\n> CREATE ROLE \"current_user\";\n> CREATE ROLE \"session_user\";\n> CREATE ROLE \"user\";\n> +ERROR: role \"user\" already exists\n> RESET client_min_messages;\n> CREATE ROLE current_user; -- error\n> ERROR: CURRENT_USER cannot be used as a role name here\n> @@ -1089,4 +1090,5 @@\n> DROP OWNED BY regress_testrol0, \"Public\", \"current_role\",\n> \"current_user\", regress_testrol1, regress_testrol2, regress_testrolx\n> CASCADE;\n> DROP ROLE regress_testrol0, regress_testrol1, regress_testrol2,\n> regress_testrolx;\n> DROP ROLE \"Public\", \"None\", \"current_role\", \"current_user\",\n> \"session_user\", \"user\";\n> +ERROR: current user cannot be dropped\n> DROP ROLE regress_role_haspriv, regress_role_nopriv;\n> ```\n>\n> This happens because the developers of this SBC choose the default\n> username \"user\", which I had no reason to change.\n>\n> Test merely checks that we can distinguish a username \"user\" from the\n> USER keyword. Maybe it's worth replacing \"user\" with \"system_user\"? It\n> is also a keyword but is a less likely choice for the OS user name.\n>\n\nI don't think we can protect against all possible user names. Wouldn't \nit be better to run the tests under an OS user with a different name, \nlike \"marmaduke\"? (\"user\" is a truly terrible default user name).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-11 Tu 14:25, Aleksander\n Alekseev wrote:\n\n\nHi,\n\nWhile playing with a new single board computer (VisionFive 2) I\ndiscovered that postgresql:unsafe_tests suite fails like this:\n\n```\n--- /home/user/projects/postgresql/src/test/modules/unsafe_tests/expected/rolenames.out\n2023-04-11 14:58:57.844550612 +0000\n+++ /home/user/projects/postgresql/build/testrun/unsafe_tests/regress/results/rolenames.out\n 2023-04-11 17:54:22.999024391 +0000\n@@ -53,6 +53,7 @@\n CREATE ROLE \"current_user\";\n CREATE ROLE \"session_user\";\n CREATE ROLE \"user\";\n+ERROR: role \"user\" already exists\n RESET client_min_messages;\n CREATE ROLE current_user; -- error\n ERROR: CURRENT_USER cannot be used as a role name here\n@@ -1089,4 +1090,5 @@\n DROP OWNED BY regress_testrol0, \"Public\", \"current_role\",\n\"current_user\", regress_testrol1, regress_testrol2, regress_testrolx\nCASCADE;\n DROP ROLE regress_testrol0, regress_testrol1, regress_testrol2,\nregress_testrolx;\n DROP ROLE \"Public\", \"None\", \"current_role\", \"current_user\",\n\"session_user\", \"user\";\n+ERROR: current user cannot be dropped\n DROP ROLE regress_role_haspriv, regress_role_nopriv;\n```\n\nThis happens because the developers of this SBC choose the default\nusername \"user\", which I had no reason to change.\n\nTest merely checks that we can distinguish a username \"user\" from the\nUSER keyword. Maybe it's worth replacing \"user\" with \"system_user\"? 
It\nis also a keyword but is a less likely choice for the OS user name.\n\n\n\n\n\nI don't think we can protect against all possible user names.\n Wouldn't it be better to run the tests under an OS user with a\n different name, like \"marmaduke\"? (\"user\" is a truly terrible\n default user name).\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 11 Apr 2023 15:03:47 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use role name \"system_user\" instead of \"user\" for\n unsafe_tests"
},
{
"msg_contents": "Hi Andrew,\n\n> I don't think we can protect against all possible user names. Wouldn't it be better to run the tests under an OS user with a different name, like \"marmaduke\"? (\"user\" is a truly terrible default user name).\n\n100% agree. The point is not to protect against all possible user\nnames but merely to reduce the likelihood of the problem. For this\nparticular test there is no difference which keyword to use for the\ntest. I realize this is a minor problem, however the fix is trivial\ntoo.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 11 Apr 2023 22:10:06 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use role name \"system_user\" instead of \"user\" for\n unsafe_tests"
},
{
"msg_contents": "Aleksander Alekseev <[email protected]> writes:\n>> I don't think we can protect against all possible user names. Wouldn't it be better to run the tests under an OS user with a different name, like \"marmaduke\"? (\"user\" is a truly terrible default user name).\n\n> 100% agree. The point is not to protect against all possible user\n> names but merely to reduce the likelihood of the problem.\n\nIt only reduces the likelihood if you assume that \"system_user\"\nis less likely than \"user\" as a choice of OS user name to run\nthe tests under. That seems like a debatable assumption;\nperhaps it's actually *more* likely.\n\nWhether we need to have a test for this at all is perhaps a\nmore interesting argument.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 16:03:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use role name \"system_user\" instead of \"user\" for\n unsafe_tests"
},
{
"msg_contents": "Hi,\n\n> Whether we need to have a test for this at all is perhaps a\n> more interesting argument.\n\nThis was my initial thought but since somebody put it there I assumed\nthis is a very important test.\n\nAny objections if we remove the tests for \"user\"?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 12 Apr 2023 15:30:03 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use role name \"system_user\" instead of \"user\" for\n unsafe_tests"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 03:30:03PM +0300, Aleksander Alekseev wrote:\n> Any objections if we remove the tests for \"user\"?\n\nBased on some rather-recent experience in this area with\nCOERCE_SQL_SYNTAX, the relationship between the SQL keywords and the\nway they can handled internally could be tricky if this area of the\ncode is touched. So I would choose to keep these tests, FWIW.\n--\nMichael",
"msg_date": "Thu, 13 Apr 2023 07:38:55 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Use role name \"system_user\" instead of \"user\" for\n unsafe_tests"
},
{
"msg_contents": "Hi,\n\n> On Wed, Apr 12, 2023 at 03:30:03PM +0300, Aleksander Alekseev wrote:\n> > Any objections if we remove the tests for \"user\"?\n>\n> Based on some rather-recent experience in this area with\n> COERCE_SQL_SYNTAX, the relationship between the SQL keywords and the\n> way they can handled internally could be tricky if this area of the\n> code is touched. So I would choose to keep these tests, FWIW.\n\nThanks for the feedback. I see this is a controversial topic so in the\ninterest of saving our time I'm withdrawing the patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 13 Apr 2023 13:35:26 +0300",
"msg_from": "Aleksander Alekseev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Use role name \"system_user\" instead of \"user\" for\n unsafe_tests"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen using postgres_fdw, in the event of a local transaction being\naborted while a query is running on a remote server,\npostgres_fdw sends a cancel request to the remote server.\nHowever, if PQgetCancel() returned NULL and no cancel request was issued,\nI found that postgres_fdw could still wait for the reply to\nthe cancel request, causing unnecessary wait time with a 30 second timeout.\n\nFor example, the following queries can reproduce the issue:\n\n----------------------------\ncreate extension postgres_fdw;\ncreate server loopback foreign data wrapper postgres_fdw options (tcp_user_timeout 'a');\ncreate user mapping for public server loopback;\ncreate view t as select 1 as i from pg_sleep(100);\ncreate foreign table ft (i int) server loopback options (table_name 't');\nselect * from ft;\n\nPress Ctrl-C while running the above SELECT query.\n----------------------------\n\nAttached patch fixes this issue. It ensures that postgres_fdw only waits\nfor a reply if a cancel request is actually issued. Additionally,\nit improves PQgetCancel() to set error messages in certain error cases,\nsuch as when out of memory occurs and malloc() fails. Moreover,\nit enhances postgres_fdw to report a warning message when PQgetCancel()\nreturns NULL, explaining the reason for the NULL value.\n\nThought?\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 12 Apr 2023 03:36:01 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "At Wed, 12 Apr 2023 03:36:01 +0900, Fujii Masao <[email protected]> wrote in \n> Attached patch fixes this issue. It ensures that postgres_fdw only\n> waits\n> for a reply if a cancel request is actually issued. Additionally,\n> it improves PQgetCancel() to set error messages in certain error\n> cases,\n> such as when out of memory occurs and malloc() fails. Moreover,\n> it enhances postgres_fdw to report a warning message when\n> PQgetCancel()\n> returns NULL, explaining the reason for the NULL value.\n> \n> Thought?\n\nI wondered why the connection didn't fail in the first place. After\ndigging into it, I found (or remembered) that local (or AF_UNIX)\nconnections ignore the timeout value at making a connection. I think\nthe real issue here is that PGgetCancel is unnecessarily checking its\nvalue and failing as a result. Otherwise there would be no room for\nfailure in the call to PQgetCancel() at that point in the example\ncase.\n\nPQconnectPoll should remove the ignored parameters at connection or\nPQgetCancel should ingore the unreferenced (or unchecked)\nparameters. For example, the below diff takes the latter way and seems\nworking (for at least AF_UNIX connections)\n\ndiff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c\nindex 40fef0e2c8..30e2ab54ba 100644\n--- a/src/interfaces/libpq/fe-connect.c\n+++ b/src/interfaces/libpq/fe-connect.c\n@@ -4718,6 +4718,10 @@ PQgetCancel(PGconn *conn)\n cancel->keepalives_idle = -1;\n cancel->keepalives_interval = -1;\n cancel->keepalives_count = -1;\n+\n+ if (conn->connip == 0)\n+ return cancel;\n+\n if (conn->pgtcp_user_timeout != NULL)\n {\n if (!parse_int_param(conn->pgtcp_user_timeout,\n\nOf course, it's not great that pgfdw_cancel_query_begin() ignores the\nresult from PQgetCancel(), but I think we don't need another ereport.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 12 Apr 2023 12:00:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel\n request reply"
},
{
"msg_contents": "\n\nOn 2023/04/12 12:00, Kyotaro Horiguchi wrote:\n> At Wed, 12 Apr 2023 03:36:01 +0900, Fujii Masao <[email protected]> wrote in\n>> Attached patch fixes this issue. It ensures that postgres_fdw only\n>> waits\n>> for a reply if a cancel request is actually issued. Additionally,\n>> it improves PQgetCancel() to set error messages in certain error\n>> cases,\n>> such as when out of memory occurs and malloc() fails. Moreover,\n>> it enhances postgres_fdw to report a warning message when\n>> PQgetCancel()\n>> returns NULL, explaining the reason for the NULL value.\n>>\n>> Thought?\n> \n> I wondered why the connection didn't fail in the first place. After\n> digging into it, I found (or remembered) that local (or AF_UNIX)\n> connections ignore the timeout value at making a connection. I think\n\nBTW, you can reproduce the issue even when using a TCP connection\ninstead of a Unix domain socket by specifying a very large number\nin the \"keepalives\" connection parameter for the foreign server.\nHere is an example:\n\n-----------------\ncreate server loopback foreign data wrapper postgres_fdw options (host '127.0.0.1', port '5432', keepalives '99999999999');\n-----------------\n\nThe reason behind this issue is that PQconnectPoll() parses\nthe \"keepalives\" parameter value by simply using strtol(),\nwhile PQgetCancel() uses parse_int_param(). To fix this issue,\nit might be better to update PQconnectPoll() so that it uses\nparse_int_param() for parsing the \"keepalives\" parameter.\n\n\n\n> the real issue here is that PGgetCancel is unnecessarily checking its\n> value and failing as a result. Otherwise there would be no room for\n> failure in the call to PQgetCancel() at that point in the example\n> case.\n> \n> PQconnectPoll should remove the ignored parameters at connection or\n> PQgetCancel should ingore the unreferenced (or unchecked)\n> parameters. For example, the below diff takes the latter way and seems\n> working (for at least AF_UNIX connections)\n\nTo clarify, are you suggesting that PQgetCancel() should\nonly parse the parameters for TCP connections\nif cancel->raddr.addr.ss_family != AF_UNIX?\nIf so, I think that's a good idea.\n\n\n> Of course, it's not great that pgfdw_cancel_query_begin() ignores the\n> result from PQgetCancel(), but I think we don't need another ereport.\n\nCan you please clarify why you suggest avoiding outputting\nthe warning message when PQgetCancel() returns NULL?\nI think it is important to inform the user when an error\noccurs and a cancel request cannot be sent, as this information\ncan help them identify the cause of the problem (such as\nsetting an overly large value for the keepalives parameter).\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 12 Apr 2023 23:39:29 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "At Wed, 12 Apr 2023 23:39:29 +0900, Fujii Masao <[email protected]> wrote in \n> BTW, you can reproduce the issue even when using a TCP connection\n> instead of a Unix domain socket by specifying a very large number\n> in the \"keepalives\" connection parameter for the foreign server.\n> Here is an example:\n> \n> -----------------\n> create server loopback foreign data wrapper postgres_fdw options (host\n> '127.0.0.1', port '5432', keepalives '99999999999');\n> -----------------\n\nMmm..\n\n> The reason behind this issue is that PQconnectPoll() parses\n> the \"keepalives\" parameter value by simply using strtol(),\n> while PQgetCancel() uses parse_int_param(). To fix this issue,\n> it might be better to update PQconnectPoll() so that it uses\n> parse_int_param() for parsing the \"keepalives\" parameter.\n\nAgreed, it seems to be a leftover when we moved to parse_int_param()\nin that area.\n\n> > the real issue here is that PGgetCancel is unnecessarily checking its\n> > value and failing as a result. Otherwise there would be no room for\n> > failure in the call to PQgetCancel() at that point in the example\n> > case.\n> > PQconnectPoll should remove the ignored parameters at connection or\n> > PQgetCancel should ingore the unreferenced (or unchecked)\n> > parameters. For example, the below diff takes the latter way and seems\n> > working (for at least AF_UNIX connections)\n> \n> To clarify, are you suggesting that PQgetCancel() should\n> only parse the parameters for TCP connections\n> if cancel->raddr.addr.ss_family != AF_UNIX?\n> If so, I think that's a good idea.\n\nYou're right. I used connip in the diff because I thought it provided\nthe same condition, but in a simpler way.\n\n> \n> > Of course, it's not great that pgfdw_cancel_query_begin() ignores the\n> > result from PQgetCancel(), but I think we don't need another ereport.\n> \n> Can you please clarify why you suggest avoiding outputting\n> the warning message when PQgetCancel() returns NULL?\n\nNo. I suggested merging the two failures that emit the \"same\" message\nbecause I believed that they were identical. I had this in my mind:\n\n calcel = PQgetCancel();\n if (!cancel || PQcancel())\n {\n ereport(); return false;\n }\n PQfreeCancel()\n\nHowever, I notcied that PQgetCancel() doesn't set errbuf.. So, I'm\nfine with your proposal.\n\n> I think it is important to inform the user when an error\n> occurs and a cancel request cannot be sent, as this information\n> can help them identify the cause of the problem (such as\n> setting an overly large value for the keepalives parameter).\n\nAlthough I view it as an internal error, I agree with emitting some\nerror messages in that situation.\n\nregrads.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 13 Apr 2023 11:00:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel\n request reply"
},
{
"msg_contents": "Hi Fujii-san,\n\nOn Wed, Apr 12, 2023 at 3:36 AM Fujii Masao <[email protected]> wrote:\n> However, if PQgetCancel() returned NULL and no cancel request was issued,\n> I found that postgres_fdw could still wait for the reply to\n> the cancel request, causing unnecessary wait time with a 30 second timeout.\n\nGood catch!\n\n> Attached patch fixes this issue.\n\nI am not 100% sure that it is a good idea to use the same error\nmessage \"could not send cancel request\" for the PQgetCancel() and\nPQcancel() cases, because they are different functions. How about\n\"could not create PGcancel structure” or something like that, for the\nformer case, so we can distinguish the former error from the latter?\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 13 Apr 2023 15:13:31 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "On 2023/04/13 11:00, Kyotaro Horiguchi wrote:\n> Agreed, it seems to be a leftover when we moved to parse_int_param()\n> in that area.\n\nIt looks like there was an oversight in commit e7a2217978. I've attached a patch (0002) that updates PQconnectPoll() to use parse_int_param() for parsing the keepalives parameter.\n\nAs this change is not directly related to the bug fix, it may not be necessary to back-patch it to the stable versions, I think. Thought?\n\n\n>> To clarify, are you suggesting that PQgetCancel() should\n>> only parse the parameters for TCP connections\n>> if cancel->raddr.addr.ss_family != AF_UNIX?\n>> If so, I think that's a good idea.\n> \n> You're right. I used connip in the diff because I thought it provided\n> the same condition, but in a simpler way.\n\nI made a modification to the 0001 patch. It will now allow PQgetCancel() to parse and interpret TCP connection parameters only when the connection is not made through a Unix-domain socket.\n\n\n> However, I notcied that PQgetCancel() doesn't set errbuf.. So, I'm\n> fine with your proposal.\n\nOk.\n\n\n>> I think it is important to inform the user when an error\n>> occurs and a cancel request cannot be sent, as this information\n>> can help them identify the cause of the problem (such as\n>> setting an overly large value for the keepalives parameter).\n> \n> Although I view it as an internal error, I agree with emitting some\n> error messages in that situation.\n\nOk.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 14 Apr 2023 03:04:24 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "\n\nOn 2023/04/13 15:13, Etsuro Fujita wrote:\n> I am not 100% sure that it is a good idea to use the same error\n> message \"could not send cancel request\" for the PQgetCancel() and\n> PQcancel() cases, because they are different functions. How about\n> \"could not create PGcancel structure” or something like that, for the\n\nThe primary message basically should avoid reference to implementation details such as specific structure names like PGcancel, shouldn't it, as per the error message style guide?\n\n\n> former case, so we can distinguish the former error from the latter?\n\nAlthough the primary message is the same, the supplemental message provides additional context that can help distinguish which function is reporting the message. Therefore, I'm fine with the current primary message in the 0001 patch. However, I'm open to any better message ideas.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 14 Apr 2023 03:19:09 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 3:19 AM Fujii Masao <[email protected]> wrote:\n> On 2023/04/13 15:13, Etsuro Fujita wrote:\n> > I am not 100% sure that it is a good idea to use the same error\n> > message \"could not send cancel request\" for the PQgetCancel() and\n> > PQcancel() cases, because they are different functions. How about\n> > \"could not create PGcancel structure” or something like that, for the\n>\n> The primary message basically should avoid reference to implementation details such as specific structure names like PGcancel, shouldn't it, as per the error message style guide?\n\nI do not think that PGcancel is that specific, as it is described in\nthe user-facing documentation [1]. (In addition, the error message I\nproposed was created by copying the existing error message \"could not\ncreate OpenSSL BIO structure\" in contrib/sslinfo.c.)\n\n> > former case, so we can distinguish the former error from the latter?\n>\n> Although the primary message is the same, the supplemental message provides additional context that can help distinguish which function is reporting the message.\n\nIf the user is familiar with the PQgetCancel/PQcancel internals, this\nis true, but if not, I do not think this is always true. Consider\nthis error message, for example:\n\n2023-04-14 17:48:55.862 JST [24344] WARNING: could not send cancel\nrequest: invalid integer value \"99999999999\" for connection option\n\"keepalives\"\n\nIt would be hard for users without the knowledge about those internals\nto distinguish that from this message. For average users, I think it\nwould be good to use a more distinguishable error message.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/docs/current/libpq-cancel.html\n\n\n",
"msg_date": "Fri, 14 Apr 2023 18:59:06 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "\n\nOn 2023/04/14 18:59, Etsuro Fujita wrote:\n>> The primary message basically should avoid reference to implementation details such as specific structure names like PGcancel, shouldn't it, as per the error message style guide?\n> \n> I do not think that PGcancel is that specific, as it is described in\n> the user-facing documentation [1]. (In addition, the error message I\n> proposed was created by copying the existing error message \"could not\n> create OpenSSL BIO structure\" in contrib/sslinfo.c.)\n\nI think that mentioning PGcancel in the error message could be confusing for average users who are just running a query on a foreign table and encounter the error message after pressing Ctrl-C. They may not understand why the PGcancel struct is referenced in the error message while accessing foreign tables. It could be viewed as an internal detail that is not necessary for the user to know.\n\n\n>> Although the primary message is the same, the supplemental message provides additional context that can help distinguish which function is reporting the message.\n> \n> If the user is familiar with the PQgetCancel/PQcancel internals, this\n> is true, but if not, I do not think this is always true. Consider\n> this error message, for example:\n> \n> 2023-04-14 17:48:55.862 JST [24344] WARNING: could not send cancel\n> request: invalid integer value \"99999999999\" for connection option\n> \"keepalives\"\n> \n> It would be hard for users without the knowledge about those internals\n> to distinguish that from this message. For average users, I think it\n> would be good to use a more distinguishable error message.\n\nIn this case, I believe that they should be able to understand that an invalid integer value \"99999999999\" was specified in the \"keepalives\" connection option, which caused the warning message. Then, they would need to check the setting of the \"keepalives\" option and correct it if necessary.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 14 Apr 2023 23:28:11 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 11:28 PM Fujii Masao\n<[email protected]> wrote:\n> On 2023/04/14 18:59, Etsuro Fujita wrote:\n> >> The primary message basically should avoid reference to implementation details such as specific structure names like PGcancel, shouldn't it, as per the error message style guide?\n> >\n> > I do not think that PGcancel is that specific, as it is described in\n> > the user-facing documentation [1]. (In addition, the error message I\n> > proposed was created by copying the existing error message \"could not\n> > create OpenSSL BIO structure\" in contrib/sslinfo.c.)\n>\n> I think that mentioning PGcancel in the error message could be confusing for average users who are just running a query on a foreign table and encounter the error message after pressing Ctrl-C. They may not understand why the PGcancel struct is referenced in the error message while accessing foreign tables. It could be viewed as an internal detail that is not necessary for the user to know.\n\nOk, understood. I do not think it is wrong to use \"could not send\ncancel request” for PQgetCancel as well, but I feel that that is not\nperfect for PQgetCancel, because that function never sends a cancel\nrequest; that function just initializes the request. So how about\n\"could not initialize cancel request”, instead?\n\n> >> Although the primary message is the same, the supplemental message provides additional context that can help distinguish which function is reporting the message.\n> >\n> > If the user is familiar with the PQgetCancel/PQcancel internals, this\n> > is true, but if not, I do not think this is always true. Consider\n> > this error message, for example:\n> >\n> > 2023-04-14 17:48:55.862 JST [24344] WARNING: could not send cancel\n> > request: invalid integer value \"99999999999\" for connection option\n> > \"keepalives\"\n> >\n> > It would be hard for users without the knowledge about those internals\n> > to distinguish that from this message. For average users, I think it\n> > would be good to use a more distinguishable error message.\n>\n> In this case, I believe that they should be able to understand that an invalid integer value \"99999999999\" was specified in the \"keepalives\" connection option, which caused the warning message. Then, they would need to check the setting of the \"keepalives\" option and correct it if necessary.\n\nMaybe my explanation was not clear. Let me explain. Assume that a\nuser want to identify the place where the above error was thrown.\nUsing grep with ”could not send cancel request”, the user can find the\ntwo places emitting the message in pgfdw_cancel_query_begin: one for\nPQgetCancel and one for PQcancel. If the user are familiar with the\nPQgetCancel/PQcancel internals, the user can determine, from the\nsupplemental message, that the error was thrown by the former. But if\nnot, the user cannot do so. To support the unfamiliar user as well, I\nthink it would be a good idea to use a more appropriate message for\nPQgetCancel that is different from \"could not send cancel request”.\n\n(I agree that most users would not care about the places where errors\nwere thrown, but I think some users would, and actually, I do when\ninvestigating unfamiliar errors.)\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 17 Apr 2023 15:21:02 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "At Mon, 17 Apr 2023 15:21:02 +0900, Etsuro Fujita <[email protected]> wrote in \n> > >> Although the primary message is the same, the supplemental message pro=\n> vides additional context that can help distinguish which function is report=\n> ing the message.\n> > >\n> > > If the user is familiar with the PQgetCancel/PQcancel internals, this\n> > > is true, but if not, I do not think this is always true. Consider\n> > > this error message, for example:\n> > >\n> > > 2023-04-14 17:48:55.862 JST [24344] WARNING: could not send cancel\n> > > request: invalid integer value \"99999999999\" for connection option\n> > > \"keepalives\"\n> > >\n> > > It would be hard for users without the knowledge about those internals\n> > > to distinguish that from this message. For average users, I think it\n> > > would be good to use a more distinguishable error message.\n> >\n> > In this case, I believe that they should be able to understand that an in=\n> valid integer value \"99999999999\" was specified in the \"keepalives\" connect=\n> ion option, which caused the warning message. Then, they would need to chec=\n> k the setting of the \"keepalives\" option and correct it if necessary.\n> \n> Maybe my explanation was not clear. Let me explain. Assume that a\n> user want to identify the place where the above error was thrown.\n> Using grep with =E2=80=9Dcould not send cancel request=E2=80=9D, the user c=\n> an find the\n> two places emitting the message in pgfdw_cancel_query_begin: one for\n> PQgetCancel and one for PQcancel. If the user are familiar with the\n> PQgetCancel/PQcancel internals, the user can determine, from the\n> supplemental message, that the error was thrown by the former. But if\n> not, the user cannot do so. To support the unfamiliar user as well, I\n> think it would be a good idea to use a more appropriate message for\n> PQgetCancel that is different from \"could not send cancel request=E2=80=9D.\n> \n> (I agree that most users would not care about the places where errors\n> were thrown, but I think some users would, and actually, I do when\n> investigating unfamiliar errors.)\n\nIf PGgetCancel() fails due to invalid keepliave-related values, It\nseems like a bug that needs fixing, regardless of whether we display\nan error message when PGgetCacncel() fails. The only error case of\nPGgetCancel() that could occur in pgfdw_cancel_query_begin() is a\nmalloc() failure, which currently does not set an error message (I'm\nnot sure we can do that in that case, though..).\n\nIn my opinion, PQconnectPoll and PQgetCancel should use the same\nparsing function or PQconnectPoll should set parsed values, making\nunnecessary for PQgetCancel to parse the same parameter\nagain. Additionally, PQgetCancel should set appropriate error messages\nfor all failure modes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 17 Apr 2023 17:38:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel\n request reply"
},
{
"msg_contents": "> In my opinion, PQconnectPoll and PQgetCancel should use the same\n> parsing function or PQconnectPoll should set parsed values, making\n> unnecessary for PQgetCancel to parse the same parameter\n> again.\n\nYes, I totally agree. So I think patch 0002 looks fine.\n\n> Additionally, PQgetCancel should set appropriate error messages\n> for all failure modes.\n\nI don't think that PQgetCancel should ever set error messages on the\nprovided conn object though. It's not part of the documented API and\nit's quite confusing since there's actually no error on the connection\nitself. That this happens for the keepalive parameter was an\nunintended sideeffect of 5987feb70b combined with the fact that the\nparsing is different. All those parsing functions should never error,\nbecause setting up the connection should already have checked them.\n\nSo I think the newly added libpq_append_conn_error calls in patch 0001\nshould be removed. The AF_UNIX check and the new WARNING in pg_fdw\nseem fine though. It would probably make sense to have them be\nseparate patches though, because they are pretty unrelated.\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:39:37 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "\n\nOn 2023/04/21 16:39, Jelte Fennema wrote:\n>> In my opinion, PQconnectPoll and PQgetCancel should use the same\n>> parsing function or PQconnectPoll should set parsed values, making\n>> unnecessary for PQgetCancel to parse the same parameter\n>> again.\n> \n> Yes, I totally agree. So I think patch 0002 looks fine.\n\nIt seems like we have reached a consensus to push the 0002 patch.\nAs for back-patching, although the issue it fixes is trivial,\nit may be a good idea to back-patch to v12 where parse_int_param()\nwas added, for easier back-patching in the future. Therefore\nI'm thinking to push the 0002 patch at first and back-patch to v12.\n\n\n>> Additionally, PQgetCancel should set appropriate error messages\n>> for all failure modes.\n> \n> I don't think that PQgetCancel should ever set error messages on the\n> provided conn object though. It's not part of the documented API and\n> it's quite confusing since there's actually no error on the connection\n> itself. That this happens for the keepalive parameter was an\n> unintended sideeffect of 5987feb70b combined with the fact that the\n> parsing is different. All those parsing functions should never error,\n> because setting up the connection should already have checked them.\n> \n> So I think the newly added libpq_append_conn_error calls in patch 0001\n> should be removed. The AF_UNIX check and the new WARNING in pg_fdw\n> seem fine though.\n\nSounds reasonable to me.\n\nRegarding the WARNING message, another idea is to pass the return value\nof PQgetCancel() directly to PQcancel() as follows. If NULL is passed,\nPQcancel() will detect it and set the proper error message to errbuf.\nThen the warning message \"WARNING: could not send cancel request:\nPQcancel() -- no cancel object supplied\" is output. This approach is\nsimilar to how dblink_cancel_query() does. Thought?\n\n----------------\n\tcancel = PQgetCancel(conn);\n\tif (!PQcancel(cancel, errbuf, sizeof(errbuf)))\n\t{\n\t\tereport(WARNING,\n\t\t\t\t(errcode(ERRCODE_CONNECTION_FAILURE),\n\t\t\t\t errmsg(\"could not send cancel request: %s\",\n\t\t\t\t\t\terrbuf)));\n\t\tPQfreeCancel(cancel);\n\t\treturn false;\n\t}\n----------------\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 26 Apr 2023 01:49:28 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "Hi, Fujii-san\n\n> Regarding the WARNING message, another idea is to pass the return value\n> of PQgetCancel() directly to PQcancel() as follows. If NULL is passed,\n> PQcancel() will detect it and set the proper error message to errbuf.\n> Then the warning message \"WARNING: could not send cancel request:\n> PQcancel() -- no cancel object supplied\" is output.\n\nI agree to go with this.\n\nWith this approach, the information behind the error (e.g., \"out of\nmemory\") will disappear, I guess.\nI think we have to deal with it eventually. (I'm sorry, I don't have a good\nidea right now)\nHowever, the original issue is unnecessary waiting, and this should be\nfixed soon.\nSo it is better to fix the problem this way and discuss retaining\ninformation in another patch IMO.\n\nI'm afraid I'm new to reviewing.\nIf I'm misunderstanding something, please let me know.\n\nMasaki Kuwamura\n\nHi, Fujii-san> Regarding the WARNING message, another idea is to pass the return value> of PQgetCancel() directly to PQcancel() as follows. If NULL is passed,> PQcancel() will detect it and set the proper error message to errbuf.> Then the warning message \"WARNING: could not send cancel request:> PQcancel() -- no cancel object supplied\" is output.I agree to go with this. With this approach, the information behind the error (e.g., \"out of memory\") will disappear, I guess.I think we have to deal with it eventually. (I'm sorry, I don't have a good idea right now)However, the original issue is unnecessary waiting, and this should be fixed soon.So it is better to fix the problem this way and discuss retaining information in another patch IMO.I'm afraid I'm new to reviewing.If I'm misunderstanding something, please let me know.Masaki Kuwamura",
"msg_date": "Thu, 27 Jul 2023 20:01:56 +0900",
"msg_from": "Kuwamura Masaki <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 2:04 PM Fujii Masao <[email protected]> wrote:\n> >> To clarify, are you suggesting that PQgetCancel() should\n> >> only parse the parameters for TCP connections\n> >> if cancel->raddr.addr.ss_family != AF_UNIX?\n> >> If so, I think that's a good idea.\n> >\n> > You're right. I used connip in the diff because I thought it provided\n> > the same condition, but in a simpler way.\n>\n> I made a modification to the 0001 patch. It will now allow PQgetCancel() to parse and interpret TCP connection parameters only when the connection is not made through a Unix-domain socket.\n\nI don't really like this change. It seems to me that what this does is\ndecide that it's not an error to set tcp_user_timeout='a' when making\na cancel request if the connection doesn't actually use TCP. I agree\nthat we shouldn't try to *use* the values if they don't apply, but I'm\nnot sure it's a good idea to skip *sanity-checking* them when they\ndon't apply. For instance you can't set work_mem=ssdgjsjdg in\npostgresql.conf even if you never run a query that needs work_mem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:41:13 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 23:34, Fujii Masao <[email protected]> wrote:\n>\n>\n>\n> On 2023/04/13 11:00, Kyotaro Horiguchi wrote:\n> > Agreed, it seems to be a leftover when we moved to parse_int_param()\n> > in that area.\n>\n> It looks like there was an oversight in commit e7a2217978. I've attached a patch (0002) that updates PQconnectPoll() to use parse_int_param() for parsing the keepalives parameter.\n>\n> As this change is not directly related to the bug fix, it may not be necessary to back-patch it to the stable versions, I think. Thought?\n>\n>\n> >> To clarify, are you suggesting that PQgetCancel() should\n> >> only parse the parameters for TCP connections\n> >> if cancel->raddr.addr.ss_family != AF_UNIX?\n> >> If so, I think that's a good idea.\n> >\n> > You're right. I used connip in the diff because I thought it provided\n> > the same condition, but in a simpler way.\n>\n> I made a modification to the 0001 patch. It will now allow PQgetCancel() to parse and interpret TCP connection parameters only when the connection is not made through a Unix-domain socket.\n>\n>\n> > However, I notcied that PQgetCancel() doesn't set errbuf.. So, I'm\n> > fine with your proposal.\n>\n> Ok.\n>\n>\n> >> I think it is important to inform the user when an error\n> >> occurs and a cancel request cannot be sent, as this information\n> >> can help them identify the cause of the problem (such as\n> >> setting an overly large value for the keepalives parameter).\n> >\n> > Although I view it as an internal error, I agree with emitting some\n> > error messages in that situation.\n>\n> Ok.\n\nI have changed the status of the patch to \"Waiting on Author\" as all\nthe issues are not addressed. Feel free to address them and change the\nstatus accordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 11 Jan 2024 20:00:27 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
},
{
"msg_contents": "On Thu, 11 Jan 2024 at 20:00, vignesh C <[email protected]> wrote:\n>\n> On Thu, 13 Apr 2023 at 23:34, Fujii Masao <[email protected]> wrote:\n> >\n> >\n> >\n> > On 2023/04/13 11:00, Kyotaro Horiguchi wrote:\n> > > Agreed, it seems to be a leftover when we moved to parse_int_param()\n> > > in that area.\n> >\n> > It looks like there was an oversight in commit e7a2217978. I've attached a patch (0002) that updates PQconnectPoll() to use parse_int_param() for parsing the keepalives parameter.\n> >\n> > As this change is not directly related to the bug fix, it may not be necessary to back-patch it to the stable versions, I think. Thought?\n> >\n> >\n> > >> To clarify, are you suggesting that PQgetCancel() should\n> > >> only parse the parameters for TCP connections\n> > >> if cancel->raddr.addr.ss_family != AF_UNIX?\n> > >> If so, I think that's a good idea.\n> > >\n> > > You're right. I used connip in the diff because I thought it provided\n> > > the same condition, but in a simpler way.\n> >\n> > I made a modification to the 0001 patch. It will now allow PQgetCancel() to parse and interpret TCP connection parameters only when the connection is not made through a Unix-domain socket.\n> >\n> >\n> > > However, I notcied that PQgetCancel() doesn't set errbuf.. So, I'm\n> > > fine with your proposal.\n> >\n> > Ok.\n> >\n> >\n> > >> I think it is important to inform the user when an error\n> > >> occurs and a cancel request cannot be sent, as this information\n> > >> can help them identify the cause of the problem (such as\n> > >> setting an overly large value for the keepalives parameter).\n> > >\n> > > Although I view it as an internal error, I agree with emitting some\n> > > error messages in that situation.\n> >\n> > Ok.\n>\n> I have changed the status of the patch to \"Waiting on Author\" as all\n> the issues are not addressed. Feel free to address them and change the\n> status accordingly.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 23:52:09 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgres_fdw causing unnecessary wait for cancel request\n reply"
}
] |
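Editor's note on the thread above: the behavioural fix under discussion boils down to waiting for a cancel reply only when a cancel request was actually issued, and warning the user when PQgetCancel() returns NULL. The sketch below is illustrative only and is not the committed postgres_fdw code; the function name is a placeholder, while PQgetCancel(), PQcancel(), and PQfreeCancel() are the real libpq calls discussed in the thread.

```c
#include "postgres.h"
#include "libpq-fe.h"

/*
 * Illustrative sketch: begin cancelling a remote query.  Returns true only
 * if a cancel request was actually sent, so the caller knows whether it
 * makes sense to wait for a reply from the remote server at all.
 */
static bool
example_cancel_query_begin(PGconn *conn)
{
	PGcancel   *cancel;
	char		errbuf[256];

	/* If no PGcancel object could be created, warn and give up early. */
	if ((cancel = PQgetCancel(conn)) == NULL)
	{
		ereport(WARNING,
				(errcode(ERRCODE_CONNECTION_FAILURE),
				 errmsg("could not send cancel request")));
		return false;
	}

	/* Only report success (and later wait) if PQcancel() really sent one. */
	if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
	{
		ereport(WARNING,
				(errcode(ERRCODE_CONNECTION_FAILURE),
				 errmsg("could not send cancel request: %s", errbuf)));
		PQfreeCancel(cancel);
		return false;
	}

	PQfreeCancel(cancel);
	return true;
}
```

Whether the PQgetCancel() failure deserves a differently worded primary message (for example "could not initialize cancel request") is exactly the wording question debated in the thread; the sketch keeps both branches so the "wait only if a request was sent" control flow is visible.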
[
{
"msg_contents": "I have seen this failure a couple of times recently while\ntesting code that caused crashes and restarts:\n\n#2 0x00000000009987e3 in ExceptionalCondition (\n conditionName=conditionName@entry=0xb31bc8 \"mode == RBM_NORMAL || mode == RBM_ZERO_ON_ERROR || mode == RBM_ZERO_AND_LOCK\", \n fileName=fileName@entry=0xb31c15 \"bufmgr.c\", \n lineNumber=lineNumber@entry=892) at assert.c:66\n#3 0x0000000000842d73 in ExtendBufferedRelTo (eb=..., \n fork=fork@entry=MAIN_FORKNUM, strategy=strategy@entry=0x0, \n flags=flags@entry=3, extend_to=extend_to@entry=1, \n mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK) at bufmgr.c:891\n#4 0x00000000005cc398 in XLogReadBufferExtended (rlocator=..., \n forknum=MAIN_FORKNUM, blkno=0, mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK, \n recent_buffer=<optimized out>) at xlogutils.c:527\n#5 0x00000000005cc697 in XLogReadBufferForRedoExtended (\n record=record@entry=0x1183b98, block_id=block_id@entry=0 '\\000', \n mode=mode@entry=RBM_NORMAL, get_cleanup_lock=get_cleanup_lock@entry=true, \n buf=buf@entry=0x7ffd98e3ea94) at xlogutils.c:391\n#6 0x000000000055df59 in heap_xlog_prune (record=0x1183b98) at heapam.c:8779\n#7 heap2_redo (record=0x1183b98) at heapam.c:10015\n#8 0x00000000005ca430 in ApplyWalRecord (replayTLI=<synthetic pointer>, \n record=0x7f8f7afbcb60, xlogreader=<optimized out>)\n at ../../../../src/include/access/xlog_internal.h:379\n\nIt's not clear to me whether this Assert is wrong, or\nXLogReadBufferForRedoExtended shouldn't be using\nRBM_ZERO_AND_CLEANUP_LOCK, or the Assert is correctly protecting an\nunimplemented case in ExtendBufferedRelTo that we now need to implement.\n\nIn any case, I'm pretty sure Andres broke it in 26158b852, because\nI hadn't seen it before this weekend.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 14:48:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Assertion being hit during WAL replay"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 14:48:44 -0400, Tom Lane wrote:\n> I have seen this failure a couple of times recently while\n> testing code that caused crashes and restarts:\n\nDo you have a quick repro recipe?\n\n\n> #2 0x00000000009987e3 in ExceptionalCondition (\n> conditionName=conditionName@entry=0xb31bc8 \"mode == RBM_NORMAL || mode == RBM_ZERO_ON_ERROR || mode == RBM_ZERO_AND_LOCK\",\n> fileName=fileName@entry=0xb31c15 \"bufmgr.c\",\n> lineNumber=lineNumber@entry=892) at assert.c:66\n> #3 0x0000000000842d73 in ExtendBufferedRelTo (eb=...,\n> fork=fork@entry=MAIN_FORKNUM, strategy=strategy@entry=0x0,\n> flags=flags@entry=3, extend_to=extend_to@entry=1,\n> mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK) at bufmgr.c:891\n> #4 0x00000000005cc398 in XLogReadBufferExtended (rlocator=...,\n> forknum=MAIN_FORKNUM, blkno=0, mode=mode@entry=RBM_ZERO_AND_CLEANUP_LOCK,\n> recent_buffer=<optimized out>) at xlogutils.c:527\n> #5 0x00000000005cc697 in XLogReadBufferForRedoExtended (\n> record=record@entry=0x1183b98, block_id=block_id@entry=0 '\\000',\n> mode=mode@entry=RBM_NORMAL, get_cleanup_lock=get_cleanup_lock@entry=true,\n> buf=buf@entry=0x7ffd98e3ea94) at xlogutils.c:391\n> #6 0x000000000055df59 in heap_xlog_prune (record=0x1183b98) at heapam.c:8779\n> #7 heap2_redo (record=0x1183b98) at heapam.c:10015\n> #8 0x00000000005ca430 in ApplyWalRecord (replayTLI=<synthetic pointer>,\n> record=0x7f8f7afbcb60, xlogreader=<optimized out>)\n> at ../../../../src/include/access/xlog_internal.h:379\n>\n> It's not clear to me whether this Assert is wrong, or\n> XLogReadBufferForRedoExtended shouldn't be using\n> RBM_ZERO_AND_CLEANUP_LOCK, or the Assert is correctly protecting an\n> unimplemented case in ExtendBufferedRelTo that we now need to implement.\n\nHm. It's not implemented because I didn't quite see how it'd make sense to\npass RBM_ZERO_AND_CLEANUP_LOCK when extending the relation, but given how\nrelation extension is done \"implicitly\" during recovery, that's too narrow a\nview. It's trivial to add.\n\nI wonder if we should eventually redefine the RBM* things into a bitmask.\n\n\n> In any case, I'm pretty sure Andres broke it in 26158b852, because\n> I hadn't seen it before this weekend.\n\nYea, that's clearly the fault of 26158b852.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 12:56:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assertion being hit during WAL replay"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-04-11 14:48:44 -0400, Tom Lane wrote:\n>> I have seen this failure a couple of times recently while\n>> testing code that caused crashes and restarts:\n\n> Do you have a quick repro recipe?\n\nHere's something related to what I hit that time:\n\ndiff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c\nindex 052263aea6..d43a7c7bcb 100644\n--- a/src/backend/optimizer/plan/subselect.c\n+++ b/src/backend/optimizer/plan/subselect.c\n@@ -2188,6 +2188,7 @@ SS_charge_for_initplans(PlannerInfo *root, RelOptInfo *final_rel)\n void\n SS_attach_initplans(PlannerInfo *root, Plan *plan)\n {\n+ Assert(root->init_plans == NIL);\n plan->initPlan = root->init_plans;\n }\n \nYou won't get through initdb with this, but if you install this change\ninto a successfully init'd database and then \"make installcheck-parallel\",\nit will crash and then fail to recover, at least a lot of the time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Apr 2023 16:54:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assertion being hit during WAL replay"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 16:54:53 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-04-11 14:48:44 -0400, Tom Lane wrote:\n> >> I have seen this failure a couple of times recently while\n> >> testing code that caused crashes and restarts:\n> \n> > Do you have a quick repro recipe?\n> \n> Here's something related to what I hit that time:\n> \n> diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c\n> index 052263aea6..d43a7c7bcb 100644\n> --- a/src/backend/optimizer/plan/subselect.c\n> +++ b/src/backend/optimizer/plan/subselect.c\n> @@ -2188,6 +2188,7 @@ SS_charge_for_initplans(PlannerInfo *root, RelOptInfo *final_rel)\n> void\n> SS_attach_initplans(PlannerInfo *root, Plan *plan)\n> {\n> + Assert(root->init_plans == NIL);\n> plan->initPlan = root->init_plans;\n> }\n> \n> You won't get through initdb with this, but if you install this change\n> into a successfully init'd database and then \"make installcheck-parallel\",\n> it will crash and then fail to recover, at least a lot of the time.\n\nAh, that allowed me to reproduce. Thanks.\n\n\nTook me a bit to understand how we actually get into this situation. A PRUNE\nrecord for relation+block that doesn't exist during recovery. That doesn't\ncommonly happen outside of PITR or such, because we obviously need a block\nwith content to generate the PRUNE. The way it does happen here, is that the\nrelation is vacuumed and then truncated. Then we crash. Thus we end up with a\nPRUNE record for a block that doesn't exist on disk.\n\nWhich is also why the test is quite timing sensitive.\n\nSeems like it'd be good to have a test that covers this scenario. There's\nplenty code around it that doesn't currently get exercised.\n\nNone of the existing tests seem like a great fit. I guess it could be added to\n013_crash_restart, but that really focuses on something else.\n\nSo I guess I'll write a 036_notsureyet.pl...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 15:03:02 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assertion being hit during WAL replay"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 15:03:02 -0700, Andres Freund wrote:\n> On 2023-04-11 16:54:53 -0400, Tom Lane wrote:\n> > Here's something related to what I hit that time:\n> > \n> > diff --git a/src/backend/optimizer/plan/subselect.c b/src/backend/optimizer/plan/subselect.c\n> > index 052263aea6..d43a7c7bcb 100644\n> > --- a/src/backend/optimizer/plan/subselect.c\n> > +++ b/src/backend/optimizer/plan/subselect.c\n> > @@ -2188,6 +2188,7 @@ SS_charge_for_initplans(PlannerInfo *root, RelOptInfo *final_rel)\n> > void\n> > SS_attach_initplans(PlannerInfo *root, Plan *plan)\n> > {\n> > + Assert(root->init_plans == NIL);\n> > plan->initPlan = root->init_plans;\n> > }\n> > \n> > You won't get through initdb with this, but if you install this change\n> > into a successfully init'd database and then \"make installcheck-parallel\",\n> > it will crash and then fail to recover, at least a lot of the time.\n> \n> Ah, that allowed me to reproduce. Thanks.\n> \n> \n> Took me a bit to understand how we actually get into this situation. A PRUNE\n> record for relation+block that doesn't exist during recovery. That doesn't\n> commonly happen outside of PITR or such, because we obviously need a block\n> with content to generate the PRUNE. The way it does happen here, is that the\n> relation is vacuumed and then truncated. Then we crash. Thus we end up with a\n> PRUNE record for a block that doesn't exist on disk.\n> \n> Which is also why the test is quite timing sensitive.\n> \n> Seems like it'd be good to have a test that covers this scenario. There's\n> plenty code around it that doesn't currently get exercised.\n> \n> None of the existing tests seem like a great fit. I guess it could be added to\n> 013_crash_restart, but that really focuses on something else.\n> \n> So I guess I'll write a 036_notsureyet.pl...\n\nSee also the separate report by Alexander Lakhin at\nhttps://postgr.es/m/[email protected]\n\nI pushed the fix + test now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Apr 2023 11:40:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Assertion being hit during WAL replay"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> I pushed the fix + test now.\n\nCool, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Apr 2023 14:42:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Assertion being hit during WAL replay"
}
] |
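Editor's note on the thread above: the assertion fired because redo asked for a zero-filled page past the relation's current end while holding a cleanup lock. The snippet below is only a sketch of how such a request could be honoured and is not the committed bufmgr.c change; LockBuffer() and LockBufferForCleanup() are the real locking primitives, the surrounding code is simplified.

```c
#include "postgres.h"
#include "storage/bufmgr.h"

/*
 * Sketch: after a page beyond EOF has been created (zero-filled) during
 * recovery, lock it according to the caller's ReadBufferMode.  Redo of a
 * PRUNE record uses RBM_ZERO_AND_CLEANUP_LOCK, the mode the original
 * Assert in ExtendBufferedRelTo() did not anticipate.
 */
static void
lock_new_zeroed_page(Buffer buf, ReadBufferMode mode)
{
	if (mode == RBM_ZERO_AND_LOCK)
		LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
	else if (mode == RBM_ZERO_AND_CLEANUP_LOCK)
		LockBufferForCleanup(buf);
}
```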
[
{
"msg_contents": "Yes, $SUBJECT is correct.\n\nOn an old centos6 VM which I'd forgotten about and never removed from\nmonitoring, I noticed that a process had recently crashed...\n\nMaybe this is an issue which was already fixed, but I looked and find no\nbug report nor patch about it. Feel free to dismiss the problem report\nif it's not interesting or useful. \n\npostgres was compiled locally at 4e54d231a. It'd been running\ncontinuously since September without crashing until a couple weeks ago\n(and running nearly-continuously for months before that).\n\nThe VM is essentially idle, so maybe that's related to the crash.\n\nTRAP: FailedAssertion(\"segment_map->header->magic == (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\", Line: 1770, PID: 24257)\npostgres: telsasoft old_ts ::1(50284) authentication(ExceptionalCondition+0x91)[0x991451]\npostgres: telsasoft old_ts ::1(50284) authentication[0x9b9f97]\npostgres: telsasoft old_ts ::1(50284) authentication(dsa_get_address+0x92)[0x9ba192]\npostgres: telsasoft old_ts ::1(50284) authentication(pgstat_get_entry_ref+0x442)[0x868892]\npostgres: telsasoft old_ts ::1(50284) authentication(pgstat_prep_pending_entry+0x54)[0x862b14]\npostgres: telsasoft old_ts ::1(50284) authentication(pgstat_assoc_relation+0x54)[0x866764]\npostgres: telsasoft old_ts ::1(50284) authentication(_bt_first+0xb1b)[0x51399b]\npostgres: telsasoft old_ts ::1(50284) authentication(btgettuple+0xc2)[0x50e792]\npostgres: telsasoft old_ts ::1(50284) authentication(index_getnext_tid+0x51)[0x4ff271]\npostgres: telsasoft old_ts ::1(50284) authentication(index_getnext_slot+0x72)[0x4ff442]\npostgres: telsasoft old_ts ::1(50284) authentication(systable_getnext+0x132)[0x4fe282]\npostgres: telsasoft old_ts ::1(50284) authentication[0x9775cb]\npostgres: telsasoft old_ts ::1(50284) authentication(SearchCatCache+0x20d)[0x978e6d]\npostgres: telsasoft old_ts ::1(50284) authentication(GetSysCacheOid+0x30)[0x98c7c0]\npostgres: telsasoft old_ts ::1(50284) authentication(get_role_oid+0x2d)[0x86b1ad]\npostgres: telsasoft old_ts ::1(50284) authentication(hba_getauthmethod+0x22)[0x6f5592]\npostgres: telsasoft old_ts ::1(50284) authentication(ClientAuthentication+0x39)[0x6f1f59]\npostgres: telsasoft old_ts ::1(50284) authentication(InitPostgres+0x8c6)[0x9a33d6]\npostgres: telsasoft old_ts ::1(50284) authentication(PostgresMain+0x109)[0x84bb79]\npostgres: telsasoft old_ts ::1(50284) authentication(PostmasterMain+0x1a6a)[0x7ac1aa]\npostgres: telsasoft old_ts ::1(50284) authentication(main+0x461)[0x6fe5a1]\n/lib64/libc.so.6(__libc_start_main+0x100)[0x36e041ed20]\n\n(gdb) bt\n#0 0x00000036e04324f5 in raise () from /lib64/libc.so.6\n#1 0x00000036e0433cd5 in abort () from /lib64/libc.so.6\n#2 0x0000000000991470 in ExceptionalCondition (conditionName=<value optimized out>, errorType=<value optimized out>, fileName=<value optimized out>, lineNumber=1770) at assert.c:69\n#3 0x00000000009b9f97 in get_segment_by_index (area=0x22818c0, index=<value optimized out>) at dsa.c:1769\n#4 0x00000000009ba192 in dsa_get_address (area=0x22818c0, dp=1099511703168) at dsa.c:953\n#5 0x0000000000868892 in pgstat_get_entry_ref (kind=PGSTAT_KIND_RELATION, dboid=<value optimized out>, objoid=<value optimized out>, create=true, created_entry=0x0) at pgstat_shmem.c:508\n#6 0x0000000000862b14 in pgstat_prep_pending_entry (kind=PGSTAT_KIND_RELATION, dboid=0, objoid=2676, created_entry=0x0) at pgstat.c:1067\n#7 0x0000000000866764 in pgstat_prep_relation_pending (rel=0x22beba8) at pgstat_relation.c:855\n#8 
pgstat_assoc_relation (rel=0x22beba8) at pgstat_relation.c:138\n#9 0x000000000051399b in _bt_first (scan=0x22eb5c8, dir=ForwardScanDirection) at nbtsearch.c:882\n#10 0x000000000050e792 in btgettuple (scan=0x22eb5c8, dir=ForwardScanDirection) at nbtree.c:243\n#11 0x00000000004ff271 in index_getnext_tid (scan=0x22eb5c8, direction=<value optimized out>) at indexam.c:533\n#12 0x00000000004ff442 in index_getnext_slot (scan=0x22eb5c8, direction=ForwardScanDirection, slot=0x22eb418) at indexam.c:625\n#13 0x00000000004fe282 in systable_getnext (sysscan=0x22eb3c0) at genam.c:511\n#14 0x00000000009775cb in SearchCatCacheMiss (cache=0x229e280, nkeys=<value optimized out>, hashValue=3877703461, hashIndex=5, v1=<value optimized out>, v2=<value optimized out>, v3=0, v4=0) at catcache.c:1364\n#15 0x0000000000978e6d in SearchCatCacheInternal (cache=0x229e280, v1=36056248, v2=0, v3=0, v4=0) at catcache.c:1295\n#16 SearchCatCache (cache=0x229e280, v1=36056248, v2=0, v3=0, v4=0) at catcache.c:1149\n#17 0x000000000098c7c0 in GetSysCacheOid (cacheId=10, oidcol=<value optimized out>, key1=<value optimized out>, key2=<value optimized out>, key3=<value optimized out>, key4=<value optimized out>) at syscache.c:1293\n#18 0x000000000086b1ad in get_role_oid (rolname=0x2262cb8 \"telsasoft\", missing_ok=true) at acl.c:5181\n#19 0x00000000006f5592 in check_hba (port=<value optimized out>) at hba.c:2100\n#20 hba_getauthmethod (port=<value optimized out>) at hba.c:2699\n#21 0x00000000006f1f59 in ClientAuthentication (port=0x224b5d0) at auth.c:396\n#22 0x00000000009a33d6 in PerformAuthentication (in_dbname=0x2268a48 \"old_ts\", dboid=0, username=0x2262cb8 \"telsasoft\", useroid=0, out_dbname=0x0, override_allow_connections=false) at postinit.c:245\n#23 InitPostgres (in_dbname=0x2268a48 \"old_ts\", dboid=0, username=0x2262cb8 \"telsasoft\", useroid=0, out_dbname=0x0, override_allow_connections=false) at postinit.c:836\n#24 0x000000000084bb79 in PostgresMain (dbname=0x2268a48 \"old_ts\", username=0x2262cb8 \"telsasoft\") at postgres.c:4130\n#25 0x00000000007ac1aa in BackendRun (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4504\n#26 BackendStartup (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:4232\n#27 ServerLoop (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1806\n#28 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>) at postmaster.c:1478\n#29 0x00000000006fe5a1 in main (argc=3, argv=0x22239a0) at main.c:202\n\nUnfortunately:\n(gdb) p area->control->handle \n$3 = 0\n(gdb) p segment_map->header->magic\nvalue has been optimized out\n(gdb) p index\n$4 = <value optimized out>\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 14:46:23 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "v15b1: FailedAssertion(\"segment_map->header->magic ==\n (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\",\n ..)"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 14:46:23 -0500, Justin Pryzby wrote:\n> Yes, $SUBJECT is correct.\n>\n> On an old centos6 VM which I'd forgotten about and never removed from\n> monitoring, I noticed that a process had recently crashed...\n>\n> Maybe this is an issue which was already fixed, but I looked and find no\n> bug report nor patch about it. Feel free to dismiss the problem report\n> if it's not interesting or useful.\n\n> postgres was compiled locally at 4e54d231a. It'd been running\n> continuously since September without crashing until a couple weeks ago\n> (and running nearly-continuously for months before that).\n\nIt possibly could be:\n\nAuthor: Andres Freund <[email protected]>\nBranch: master [cb2e7ddfe] 2022-12-02 18:10:30 -0800\nBranch: REL_15_STABLE Release: REL_15_2 [c6a60471a] 2022-12-02 18:07:47 -0800\nBranch: REL_14_STABLE Release: REL_14_7 [6344bc097] 2022-12-02 18:10:30 -0800\nBranch: REL_13_STABLE Release: REL_13_10 [7944d2d8c] 2022-12-02 18:13:40 -0800\nBranch: REL_12_STABLE Release: REL_12_14 [35b99a18f] 2022-12-02 18:16:14 -0800\nBranch: REL_11_STABLE Release: REL_11_19 [af3517c15] 2022-12-02 18:17:54 -0800\n \n Prevent pgstats from getting confused when relkind of a relation changes\n\nBut the fact that it's on a catalog table's stats makes it less likely,\nalthough not impossible.\n\n\nAny chance there were conversions from tables to views in that connection?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 13:35:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v15b1: FailedAssertion(\"segment_map->header->magic ==\n (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\",\n ..)"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 13:35:38 -0700, Andres Freund wrote:\n> On 2023-04-11 14:46:23 -0500, Justin Pryzby wrote:\n> > Yes, $SUBJECT is correct.\n> >\n> > On an old centos6 VM which I'd forgotten about and never removed from\n> > monitoring, I noticed that a process had recently crashed...\n> >\n> > Maybe this is an issue which was already fixed, but I looked and find no\n> > bug report nor patch about it. Feel free to dismiss the problem report\n> > if it's not interesting or useful.\n> \n> > postgres was compiled locally at 4e54d231a. It'd been running\n> > continuously since September without crashing until a couple weeks ago\n> > (and running nearly-continuously for months before that).\n> \n> It possibly could be:\n> \n> Author: Andres Freund <[email protected]>\n> Branch: master [cb2e7ddfe] 2022-12-02 18:10:30 -0800\n> Branch: REL_15_STABLE Release: REL_15_2 [c6a60471a] 2022-12-02 18:07:47 -0800\n> Branch: REL_14_STABLE Release: REL_14_7 [6344bc097] 2022-12-02 18:10:30 -0800\n> Branch: REL_13_STABLE Release: REL_13_10 [7944d2d8c] 2022-12-02 18:13:40 -0800\n> Branch: REL_12_STABLE Release: REL_12_14 [35b99a18f] 2022-12-02 18:16:14 -0800\n> Branch: REL_11_STABLE Release: REL_11_19 [af3517c15] 2022-12-02 18:17:54 -0800\n> \n> Prevent pgstats from getting confused when relkind of a relation changes\n> \n> But the fact that it's on a catalog table's stats makes it less likely,\n> although not impossible.\n> \n> \n> Any chance there were conversions from tables to views in that connection?\n\nNope, not possible - the stack trace actually shows this is during connection establishment.\n\nThomas, see stack trace upthread?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 13:38:37 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v15b1: FailedAssertion(\"segment_map->header->magic ==\n (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\",\n ..)"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 7:46 AM Justin Pryzby <[email protected]> wrote:\n> Unfortunately:\n> (gdb) p area->control->handle\n> $3 = 0\n> (gdb) p segment_map->header->magic\n> value has been optimized out\n> (gdb) p index\n> $4 = <value optimized out>\n\nHmm, well index I can find from parameters:\n\n> #2 0x0000000000991470 in ExceptionalCondition (conditionName=<value optimized out>, errorType=<value optimized out>, fileName=<value optimized out>, lineNumber=1770) at assert.c:69\n> #3 0x00000000009b9f97 in get_segment_by_index (area=0x22818c0, index=<value optimized out>) at dsa.c:1769\n> #4 0x00000000009ba192 in dsa_get_address (area=0x22818c0, dp=1099511703168) at dsa.c:953\n\nWe have dp=1099511703168 == 0x10000012680, so index == 1 and the rest\nis the offset into that segment. It's not the initial segment in the\nmain shared memory area created by the postmaster with\ndsa_create_in_place() (that'd be index 0), it's in an extra segment\nthat was created with shm_open(). We managed to open and mmap() that\nsegment, but it contains unexpected garbage.\n\nCan you print *area->control? And then can you see that the DSM\nhandle is in index 1 in \"segment_handles\" in there? Then can you see\nif your system has a file with that number in its name under\n/dev/shm/, and can you tell me what \"od -c /dev/shm/...\" shows as the\nfirst few lines of stuff at the top, so we can see what that\nunexpected garbage looks like?\n\nSide rant: I don't think there's any particular indication that it's\nthe issue here, but while it's on my mind: I really wish we didn't\nuse random numbers for DSM handles. I understand where it came from:\nthe need to manage SysV shmem keyspace (a DSM mode that almost nobody\nuses, but whose limitations apply to all modes). We've debugged\nissues relating to handle collisions before, causing unrelated DSM\nsegments to be confused, back when the random seed was not different\nin each backend making collisions likely. For every other mode, we\ncould instead use something like (slot, generation) to keep collisions\nas far apart as possible (generation wraparound), and avoid collisions\nbetween unrelated clusters by using the pgdata path as a shm_open()\nprefix. Another idea is to add a new DSM mode that would use memfd\nand similar things and pass fds between backends, so that the segments\nare entirely anonymous and don't need to be cleaned up after a crash\n(I thought about that while studying the reasons why PostgreSQL can't\nrun on Capsicum (a capabilities research project) or Android (a\ntelephone), both of which banned SysV *and* POSIX shm because\nsystem-global namespaces are bad).\n\n\n",
"msg_date": "Wed, 12 Apr 2023 11:18:36 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v15b1: FailedAssertion(\"segment_map->header->magic ==\n (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\",\n ..)"
},
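A quick arithmetic cross-check of the decoding described in the message above, for anyone following along. This is only an illustration and assumes the 40-bit offset layout used for 64-bit dsa_pointer values; the macros in dsa.c are the authoritative definition.

    -- dp = 1099511703168 from the backtrace above
    SELECT 1099511703168::bigint >> 40                      AS segment_index,
           1099511703168::bigint & ((1::bigint << 40) - 1)  AS segment_offset;
    -- segment_index = 1, segment_offset = 75392 (0x12680), i.e. the pointer
    -- refers to a shm_open'd segment at index 1, not the in-place segment at index 0.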
{
"msg_contents": "On Wed, Apr 12, 2023 at 11:18:36AM +1200, Thomas Munro wrote:\n> Can you print *area->control?\n\n(gdb) p *area->control\n$1 = {segment_header = {magic = 216163848, usable_pages = 62, size = 1048576, prev = 1, next = 18446744073709551615, bin = 4, freed = false}, handle = 0, segment_handles = {0, 3696856876, 433426374, 1403332952, 2754923922, \n 0 <repeats 1019 times>}, segment_bins = {18446744073709551615, 18446744073709551615, 18446744073709551615, 18446744073709551615, 3, 4, 18446744073709551615, 18446744073709551615, 18446744073709551615, \n 18446744073709551615, 18446744073709551615, 18446744073709551615, 18446744073709551615, 18446744073709551615, 18446744073709551615, 18446744073709551615}, pools = {{lock = {tranche = 72, state = {value = 536870912}, \n waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 2199025295360, 8192, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, \n 2199025296088, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {\n head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, \n state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, \n 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 2199025296648, 2199025298608, 0}}, {lock = {tranche = 72, state = {value = 536870912}, \n waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {\n tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, \n tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {\n value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, \n 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 2199025298496, 2199025298664, 2199025297936}}, {lock = {tranche = 72, state = {\n value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, \n 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 8416, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, \n tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {\n value = 
536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, \n 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, \n tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 8304, 0, 0}}, {lock = {tranche = 72, state = {\n value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, \n 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, \n tail = 2147483647}}, spans = {0, 8528, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {\n value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 8248, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, \n 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 8584, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {\n head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, \n state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, 0, 0, 0}}, {lock = {tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}, spans = {0, \n 8640, 0, 0}}}, total_segment_size = 9699328, max_total_segment_size = 18446744073709551615, high_segment_index = 4, refcnt = 8455469, pinned = true, freed_segment_counter = 0, lwlock_tranche_id = 72, lock = {\n tranche = 72, state = {value = 536870912}, waiters = {head = 2147483647, tail = 2147483647}}}\n\n> And then can you see that the DSM handle is in index 1 in \"segment_handles\"\n> in there?\n\n(gdb) p area->control->segment_handles \n$2 = {0, 3696856876, 433426374, 1403332952, 2754923922, 0 <repeats 1019 times>}\n\n> Then can you see if your system has a file with that number in its name under\n> /dev/shm/,\n\n$ ls /dev/shm/ |grep 3696856876 || echo not found\nnot found\n\n(In case it matters: the vm has been up for 1558 days).\n\nIf it's helpful, I could provide the corefile, unstripped binaries, and\nlibc.so, which would be enough to use gdb on your side with \"set\nsolib-search-path\".\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 18:37:24 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v15b1: FailedAssertion(\"segment_map->header->magic ==\n (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\",\n ..)"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 11:37 AM Justin Pryzby <[email protected]> wrote:\n> $ ls /dev/shm/ |grep 3696856876 || echo not found\n> not found\n\nOh, of course it would have restarted after it crashed and unlinked\nthat... So the remaining traces of that memory *might* be in the core\nfile, depending (IIRC) on the core filter settings (you definitely get\nshared anonymous memory like our main shm region by default, but IIRC\nthere's something extra needed if you want the shm_open'd DSM segments\nto be dumped too...)\n\n> (In case it matters: the vm has been up for 1558 days).\n\nI will refrain from invoking cosmic radiation at this point :-)\n\n> If it's helpful, I could provide the corefile, unstripped binaries, and\n> libc.so, which would be enough to use gdb on your side with \"set\n> solib-search-path\".\n\nSounds good, thanks, please send them over off-list and I'll see if I\ncan figure anything out ...\n\n\n",
"msg_date": "Wed, 12 Apr 2023 11:49:51 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v15b1: FailedAssertion(\"segment_map->header->magic ==\n (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\",\n ..)"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 11:49:51AM +1200, Thomas Munro wrote:\n> On Wed, Apr 12, 2023 at 11:37 AM Justin Pryzby <[email protected]> wrote:\n> > $ ls /dev/shm/ |grep 3696856876 || echo not found\n> > not found\n> \n> Oh, of course it would have restarted after it crashed and unlinked\n> that... So the remaining traces of that memory *might* be in the core\n> file, depending (IIRC) on the core filter settings (you definitely get\n> shared anonymous memory like our main shm region by default, but IIRC\n> there's something extra needed if you want the shm_open'd DSM segments\n> to be dumped too...)\n\nI scrounged around and found:\n/var/spool/abrt/ccpp-2023-03-11-13:07:02-24257/maps\n\nWhich has (including the size):\n\n$ sudo cat /var/spool/abrt/ccpp-2023-03-11-13:07:02-24257/maps |awk --non-decimal-data -F'[- ]' '/shm|zero|SYSV/{a=\"0x\"$1; b=\"0x\"$2; print $0,0+b-a}'\n7fd39c981000-7fd39ca81000 rw-s 00000000 00:0f 1698691690 /dev/shm/PostgreSQL.3696856876 1048576\n7fd39ca81000-7fd39cc81000 rw-s 00000000 00:0f 1699005881 /dev/shm/PostgreSQL.433426374 2097152\n7fd39cc81000-7fd39d081000 rw-s 00000000 00:0f 2443340900 /dev/shm/PostgreSQL.2754923922 4194304\n7fd39d081000-7fd3b6a09000 rw-s 00000000 00:04 1698308066 /dev/zero (deleted) 429424640\n7fd3bcf58000-7fd3bcf63000 rw-s 00000000 00:0f 1698308074 /dev/shm/PostgreSQL.2386569568 45056\n7fd3bcf63000-7fd3bcf64000 rw-s 00000000 00:04 9732096 /SYSV0001581b (deleted) 4096\n\nBut except for the last one, none of these are available in the corefile.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 21:38:10 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v15b1: FailedAssertion(\"segment_map->header->magic ==\n (DSA_SEGMENT_HEADER_MAGIC ^ area->control->handle ^ index)\", File: \"dsa.c\",\n ..)"
}
] |
[
{
"msg_contents": "Commit 0ac5ad5134 (\"Improve concurrency of foreign key locking\") added\ninfobits_set fields to certain WAL records. However, in the case of\nxl_heap_lock, it made the data type int8 rather than uint8.\n\nI believe that this was a minor oversight. Attached patch fixes the issue.\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 11 Apr 2023 13:13:49 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "infobits_set WAL record struct field is int8"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-11 13:13:49 -0700, Peter Geoghegan wrote:\n> Commit 0ac5ad5134 (\"Improve concurrency of foreign key locking\") added\n> infobits_set fields to certain WAL records. However, in the case of\n> xl_heap_lock, it made the data type int8 rather than uint8.\n>\n> I believe that this was a minor oversight. Attached patch fixes the issue.\n\nMakes sense. Looks like there never was a flag defined for the sign bit,\nluckily. I assume you're just going to apply this for HEAD?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Apr 2023 13:48:44 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: infobits_set WAL record struct field is int8"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 1:48 PM Andres Freund <[email protected]> wrote:\n> Makes sense. Looks like there never was a flag defined for the sign bit,\n> luckily. I assume you're just going to apply this for HEAD?\n\nYes.\n\nI'm also going to rename the TransactionId field to \"xmax\", for\nconsistency with nearby very similar records (like\nxl_heap_lock_updated).\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 11 Apr 2023 13:55:50 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: infobits_set WAL record struct field is int8"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nIs it fair to assume that, given the same data, a partitioned table should\nreturn the same results as a non-partitioned table? If that's true, then I\nthink I may have stumbled across a case of wrong results on boolean partitioned\ntables.\n\nIn following example, I think we incorrectly skip the default partition scan:\n\nCREATE TABLE boolpart (a bool) PARTITION BY LIST (a);\nCREATE TABLE boolpart_default PARTITION OF boolpart default;\nCREATE TABLE boolpart_t PARTITION OF boolpart FOR VALUES IN ('true');\nCREATE TABLE boolpart_f PARTITION OF boolpart FOR VALUES IN ('false');\nINSERT INTO boolpart VALUES (true), (false), (null);\n\nEXPLAIN SELECT * FROM boolpart WHERE a IS NOT true;\n QUERY PLAN\n-----------------------------------------------------------------------\n Seq Scan on boolpart_f boolpart (cost=0.00..38.10 rows=1405 width=1)\n Filter: (a IS NOT TRUE)\n(2 rows)\n\nSELECT * FROM boolpart WHERE a IS NOT true;\n a\n---\n f\n(1 row)\n\nCompare that to the result of a non-partitioned table:\n\nCREATE TABLE booltab (a bool);\nINSERT INTO booltab VALUES (true), (false), (null);\n\nEXPLAIN SELECT * FROM booltab WHERE a IS NOT true;\n QUERY PLAN\n-----------------------------------------------------------\n Seq Scan on booltab (cost=0.00..38.10 rows=1405 width=1)\n Filter: (a IS NOT TRUE)\n(2 rows)\n\nSELECT * FROM booltab WHERE a IS NOT true;\n a\n---\n f\n\n(2 rows)\n\nI think the issue has to do with assumptions made about boolean test IS NOT\ninequality logic which is different from inequality of other operators.\nSpecifically, \"true IS NOT NULL\" is not the same as \"true<>NULL\".\n\nIn partition pruning, match_boolean_partition_clause() tries to match partkey\nwith clause and outputs PARTCLAUSE_MATCH_CLAUSE and an outconst TRUE for\n(IS_TRUE or IS_NOT_FALSE) and inversely FALSE for (IS_FALSE or IS_NOT_TRUE).\nHowever, I don't think this gradularity is sufficient for \"IS NOT\" logic when a\nNULL value partition is present.\n\nOne idea is to use the negation operator for IS_NOT_(true|false) (i.e.\nBooleanNotEqualOperator instead of BooleanEqualOperator). But besides\npresumably being a more expensive operation, not equal is not part of the btree\nopfamily for bool_ops. So, seems like that won't really fit into the current\npartition pruning framework.\n\nThen I realized that the issue is just about adding the default or null\npartition in these very particular scenarios. And struct PartitionBoundInfoData\nalready holds that information. So if we can identify these scenarios and pass\nthat information into get_matching_partitions() then we can add the necessary\npartitions. Attached is a very rough sketch of that idea.\n\nThoughts? Does this seem like a legit issue? And if so, do either of the\nproposed solutions seem reasonable?\n\nThanks,\nDavid",
"msg_date": "Tue, 11 Apr 2023 14:28:32 -0700",
"msg_from": "David Kimura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unexpected (wrong?) result querying boolean partitioned table with\n NULL partition"
},
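A small illustration of the three-valued-logic point made above (not part of the original report): the boolean test and the inequality operator treat NULL differently, which is why a partition that can hold NULLs must still be scanned for "a IS NOT true".

    SELECT (NULL::bool) IS NOT TRUE AS bool_test,
           (NULL::bool) <> true     AS inequality;
    -- bool_test is true, inequality is NULL; a NULL row satisfies the
    -- IS NOT TRUE qual even though it can never satisfy a <> qual.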
{
"msg_contents": "On Wed, 12 Apr 2023 at 22:13, David Kimura <[email protected]> wrote:\n> Is it fair to assume that, given the same data, a partitioned table should\n> return the same results as a non-partitioned table?\n\nYes, and also the same as when enable_partition_pruning is set to off.\n\n> CREATE TABLE boolpart (a bool) PARTITION BY LIST (a);\n> CREATE TABLE boolpart_default PARTITION OF boolpart default;\n> CREATE TABLE boolpart_t PARTITION OF boolpart FOR VALUES IN ('true');\n> CREATE TABLE boolpart_f PARTITION OF boolpart FOR VALUES IN ('false');\n> INSERT INTO boolpart VALUES (true), (false), (null);\n>\n> EXPLAIN SELECT * FROM boolpart WHERE a IS NOT true;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Seq Scan on boolpart_f boolpart (cost=0.00..38.10 rows=1405 width=1)\n> Filter: (a IS NOT TRUE)\n> (2 rows)\n>\n> SELECT * FROM boolpart WHERE a IS NOT true;\n> a\n> ---\n> f\n> (1 row)\n>\n> Compare that to the result of a non-partitioned table:\n>\n> CREATE TABLE booltab (a bool);\n> INSERT INTO booltab VALUES (true), (false), (null);\n>\n> EXPLAIN SELECT * FROM booltab WHERE a IS NOT true;\n> QUERY PLAN\n> -----------------------------------------------------------\n> Seq Scan on booltab (cost=0.00..38.10 rows=1405 width=1)\n> Filter: (a IS NOT TRUE)\n> (2 rows)\n>\n> SELECT * FROM booltab WHERE a IS NOT true;\n> a\n> ---\n> f\n\nOuch. That's certainly not correct.\n\n> I think the issue has to do with assumptions made about boolean test IS NOT\n> inequality logic which is different from inequality of other operators.\n> Specifically, \"true IS NOT NULL\" is not the same as \"true<>NULL\".\n\nYeah, that's wrong.\n\n> One idea is to use the negation operator for IS_NOT_(true|false) (i.e.\n> BooleanNotEqualOperator instead of BooleanEqualOperator). But besides\n> presumably being a more expensive operation, not equal is not part of the btree\n> opfamily for bool_ops. So, seems like that won't really fit into the current\n> partition pruning framework.\n\nThere's already code to effectively handle <> operators. Just the\nPartClauseInfo.op_is_ne needs to be set to true.\nget_matching_list_bounds() then handles that by taking the inverse of\nthe partitions matching the equality operator.\n\nEffectively, I think that's the attached patch.\n\nThere seems to be a bunch of tests checking this already, all of them\nassuming the incorrect plans.\n\nDavid",
"msg_date": "Wed, 12 Apr 2023 23:13:35 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected (wrong?) result querying boolean partitioned table\n with NULL partition"
},
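For anyone reproducing this, a simple cross-check (illustrative only, using the tables from the first message) is to compare the query with partition pruning disabled, which is the behaviour a correct plan should match:

    SET enable_partition_pruning = off;
    SELECT * FROM boolpart WHERE a IS NOT true;  -- the f row plus the NULL row
    RESET enable_partition_pruning;
    SELECT * FROM boolpart WHERE a IS NOT true;  -- unpatched master wrongly returns only the f row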
{
"msg_contents": "On Wed, Apr 12, 2023 at 4:13 AM David Rowley <[email protected]> wrote:\n> On Wed, 12 Apr 2023 at 22:13, David Kimura <[email protected]> wrote:\n> > Is it fair to assume that, given the same data, a partitioned table should\n> > return the same results as a non-partitioned table?\n>\n> Yes, and also the same as when enable_partition_pruning is set to off.\n\nThanks for making me aware of that GUC.\n\n> > One idea is to use the negation operator for IS_NOT_(true|false) (i.e.\n> > BooleanNotEqualOperator instead of BooleanEqualOperator). But besides\n> > presumably being a more expensive operation, not equal is not part of the btree\n> > opfamily for bool_ops. So, seems like that won't really fit into the current\n> > partition pruning framework.\n>\n> There's already code to effectively handle <> operators. Just the\n> PartClauseInfo.op_is_ne needs to be set to true.\n> get_matching_list_bounds() then handles that by taking the inverse of\n> the partitions matching the equality operator.\n\nAh, I missed that when I first tried to implement that approach. Indeed, this\nseems cleaner. Also, the domain space for boolean partitions is very small, so\nany added cost for searching not equal seems negligible.\n\nThanks,\nDavid\n\n\n",
"msg_date": "Wed, 12 Apr 2023 09:13:26 -0700",
"msg_from": "David Kimura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected (wrong?) result querying boolean partitioned table\n with NULL partition"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 7:13 PM David Rowley <[email protected]> wrote:\n\n> There's already code to effectively handle <> operators. Just the\n> PartClauseInfo.op_is_ne needs to be set to true.\n> get_matching_list_bounds() then handles that by taking the inverse of\n> the partitions matching the equality operator.\n>\n> Effectively, I think that's the attached patch.\n\n\nI think there is a thinko here.\n\n+ switch (btest->booltesttype)\n+ {\n+ case IS_NOT_TRUE:\n+ *noteq = true;\n+ /* fall through */\n+ case IS_TRUE:\n+ *outconst = (Expr *) makeBoolConst(true, false);\n+ break;\n+ case IS_NOT_FALSE:\n+ *noteq = true;\n+ /* fall through */\n+ case IS_FALSE:\n+ *outconst = (Expr *) makeBoolConst(false, false);\n+ break;\n+ default:\n+ Assert(false); /* hmm? */\n+ return PARTCLAUSE_UNSUPPORTED;\n+ }\n\nThe *outconst should be set to true in case IS_NOT_FALSE and set to\nfalse in case IS_NOT_TRUE, something like:\n\n switch (btest->booltesttype)\n {\n- case IS_NOT_TRUE:\n+ case IS_NOT_FALSE:\n *noteq = true;\n /* fall through */\n case IS_TRUE:\n *outconst = (Expr *) makeBoolConst(true, false);\n break;\n- case IS_NOT_FALSE:\n+ case IS_NOT_TRUE:\n *noteq = true;\n /* fall through */\n case IS_FALSE:\n\nThanks\nRichard\n\nOn Wed, Apr 12, 2023 at 7:13 PM David Rowley <[email protected]> wrote:\r\nThere's already code to effectively handle <> operators. Just the\r\nPartClauseInfo.op_is_ne needs to be set to true.\r\nget_matching_list_bounds() then handles that by taking the inverse of\r\nthe partitions matching the equality operator.\n\r\nEffectively, I think that's the attached patch.I think there is a thinko here.+ switch (btest->booltesttype)+ {+ case IS_NOT_TRUE:+ *noteq = true;+ /* fall through */+ case IS_TRUE:+ *outconst = (Expr *) makeBoolConst(true, false);+ break;+ case IS_NOT_FALSE:+ *noteq = true;+ /* fall through */+ case IS_FALSE:+ *outconst = (Expr *) makeBoolConst(false, false);+ break;+ default:+ Assert(false); /* hmm? */+ return PARTCLAUSE_UNSUPPORTED;+ }The *outconst should be set to true in case IS_NOT_FALSE and set tofalse in case IS_NOT_TRUE, something like: switch (btest->booltesttype) {- case IS_NOT_TRUE:+ case IS_NOT_FALSE: *noteq = true; /* fall through */ case IS_TRUE: *outconst = (Expr *) makeBoolConst(true, false); break;- case IS_NOT_FALSE:+ case IS_NOT_TRUE: *noteq = true; /* fall through */ case IS_FALSE:ThanksRichard",
"msg_date": "Thu, 13 Apr 2023 10:39:03 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected (wrong?) result querying boolean partitioned table\n with NULL partition"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 10:39 AM Richard Guo <[email protected]> wrote:\n\n> On Wed, Apr 12, 2023 at 7:13 PM David Rowley <[email protected]> wrote:\n>\n>> There's already code to effectively handle <> operators. Just the\n>> PartClauseInfo.op_is_ne needs to be set to true.\n>> get_matching_list_bounds() then handles that by taking the inverse of\n>> the partitions matching the equality operator.\n>>\n>> Effectively, I think that's the attached patch.\n>\n>\n> I think there is a thinko here.\n>\n\nSorry. It's my thinko. In cases IS_NOT_TRUE and IS_NOT_FALSE the\nop_is_ne is set to true. So the logic in origin patch is right.\n\nBTW, I wonder if we should elog an Error here.\n\n default:\n- Assert(false); /* hmm? */\n- return PARTCLAUSE_UNSUPPORTED;\n+ elog(ERROR, \"unrecognized booltesttype: %d\",\n+ (int) btest->booltesttype);\n+ break;\n\nOtherwise the patch looks good to me.\n\nThanks\nRichard\n\nOn Thu, Apr 13, 2023 at 10:39 AM Richard Guo <[email protected]> wrote:On Wed, Apr 12, 2023 at 7:13 PM David Rowley <[email protected]> wrote:\nThere's already code to effectively handle <> operators. Just the\nPartClauseInfo.op_is_ne needs to be set to true.\nget_matching_list_bounds() then handles that by taking the inverse of\nthe partitions matching the equality operator.\n\nEffectively, I think that's the attached patch.I think there is a thinko here.Sorry. It's my thinko. In cases IS_NOT_TRUE and IS_NOT_FALSE theop_is_ne is set to true. So the logic in origin patch is right.BTW, I wonder if we should elog an Error here. default:- Assert(false); /* hmm? */- return PARTCLAUSE_UNSUPPORTED;+ elog(ERROR, \"unrecognized booltesttype: %d\",+ (int) btest->booltesttype);+ break;Otherwise the patch looks good to me.ThanksRichard",
"msg_date": "Thu, 13 Apr 2023 11:30:05 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected (wrong?) result querying boolean partitioned table\n with NULL partition"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 15:30, Richard Guo <[email protected]> wrote:\n> BTW, I wonder if we should elog an Error here.\n>\n> default:\n> - Assert(false); /* hmm? */\n> - return PARTCLAUSE_UNSUPPORTED;\n> + elog(ERROR, \"unrecognized booltesttype: %d\",\n> + (int) btest->booltesttype);\n> + break;\n\nI wondered about that, hence my not-so-commitable comment left in there.\n\nMy last thoughts were that maybe we should just move the IS_UNKNOWN\nand IS_NOT_UNKNOWN down into the switch and let -Wall let us know if\nsomething is missing.\n\nIt hardly seems worth keeping the slightly earlier exit for those two\ncases. That just amounts to the RelabelType check and is this the\npartition key. I doubt IS[_NOT]_UNKNOWN is common enough for us to\nwarrant contorting the code to make it a few dozen nanoseconds faster.\nHaving smaller code is probably more of a win, which we'd get if we\ndidn't add the ERROR you propose.\n\n> Otherwise the patch looks good to me.\n\nThanks for having a look.\n\nDavid\n\n\n",
"msg_date": "Thu, 13 Apr 2023 15:45:03 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected (wrong?) result querying boolean partitioned table\n with NULL partition"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 4:13 AM David Rowley <[email protected]> wrote:\n>\n> There seems to be a bunch of tests checking this already, all of them\n> assuming the incorrect plans.\n\nGiven that the plan alone wasn't sufficient to catch this error previously,\nwould it be worthwhile to add some data to the tests to make it abundantly\nobvious?\n\nI had noticed that the default partition seems to be an edge case in the code.\nPerhaps it's overkill, but would it be worth adding a test where the NULL\npartition is not the default?\n\nThanks,\nDavid\n\n\n",
"msg_date": "Thu, 13 Apr 2023 09:19:22 -0700",
"msg_from": "David Kimura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected (wrong?) result querying boolean partitioned table\n with NULL partition"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 15:45, David Rowley <[email protected]> wrote:\n>\n> On Thu, 13 Apr 2023 at 15:30, Richard Guo <[email protected]> wrote:\n> > BTW, I wonder if we should elog an Error here.\n> >\n> > default:\n> > - Assert(false); /* hmm? */\n> > - return PARTCLAUSE_UNSUPPORTED;\n> > + elog(ERROR, \"unrecognized booltesttype: %d\",\n> > + (int) btest->booltesttype);\n> > + break;\n>\n> I wondered about that, hence my not-so-commitable comment left in there.\n>\n> My last thoughts were that maybe we should just move the IS_UNKNOWN\n> and IS_NOT_UNKNOWN down into the switch and let -Wall let us know if\n> something is missing.\n>\n> It hardly seems worth keeping the slightly earlier exit for those two\n> cases. That just amounts to the RelabelType check and is this the\n> partition key. I doubt IS[_NOT]_UNKNOWN is common enough for us to\n> warrant contorting the code to make it a few dozen nanoseconds faster.\n> Having smaller code is probably more of a win, which we'd get if we\n> didn't add the ERROR you propose.\n\nAfter having looked at the code in more detail, I don't think it's a\ngood idea to move the IS_UNKNOWN and IS_NOT_UNKNOWN down into the\nswitch. Having them tested early means we can return\nPARTCLAUSE_UNSUPPORTED even when the clause does not match the current\npartition key. If we moved those into the switch statement, then if\nthe qual didn't match to the partition key, then we'd return\nPARTCLAUSE_NOMATCH and we'd maybe waste further effort later trying to\nmatch the same qual to some other partition key.\n\nAll I ended up doing was removing the Assert(). I don't really see\nthe need to add an ERROR. It's not like any other value would cause\nthe code to misbehave. We'll just return PARTCLAUSE_UNSUPPORTED and\nno pruning would get done for that qual. I also struggle to imagine\nwhat possible other values we could ever add to BoolTestType.\n\nAfter looking a bit deeper and testing a bit more, I found another bug\nin match_boolean_partition_clause() around the\nequal(negate_clause((Node *) leftop), partkey). The code there just\nalways set *outconst to a false Const regardless of is_not_clause. I\nsee the code coverage tool shows that line as untested, so I fixed the\nbug and wrote some tests to exercise the code.\n\nAs David Kimura suggested, I also added some data to the tables in\nquestion and repeated the same queries again without the EXPLAIN. I\ngenerated the expected output with enable_partition_pruning = off then\nput it back on again and saw that the same results are shown. I\nconsidered writing a plpgsql function that we can pass a table name\nand a query and it goes and makes a temp table, populates it with the\nquery with enable_partition_pruning = off then tries again with\npruning on and verifies the results are the same as what's stored in\nthe temp table. I'll maybe go and do that for master only, it's just a\nbit more than what I wanted to do in the back branches.\n\nI've pushed the fix now.\n\nThanks for the report about this, David, and thank you both for the reviews.\n\nDavid\n\n\nDavid\n\n\n",
"msg_date": "Fri, 14 Apr 2023 16:45:28 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected (wrong?) result querying boolean partitioned table\n with NULL partition"
}
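A rough sketch of the kind of verification helper described in the last message; the function name and details are hypothetical and not taken from any committed patch. It runs a query with pruning off, stashes the result in a temp table, re-runs it with pruning on, and raises an error if the two results differ:

    CREATE OR REPLACE FUNCTION assert_pruning_result_matches(qry text)
    RETURNS void
    LANGUAGE plpgsql
    AS $$
    DECLARE
        diffs bigint;
    BEGIN
        -- expected result: planner not allowed to prune
        SET enable_partition_pruning = off;
        EXECUTE format('CREATE TEMP TABLE pruning_expected AS %s', qry);

        -- actual result: pruning enabled again
        SET enable_partition_pruning = on;
        EXECUTE format('CREATE TEMP TABLE pruning_actual AS %s', qry);

        -- symmetric difference; non-zero means pruning changed the result
        EXECUTE 'SELECT count(*) FROM (
                     (TABLE pruning_expected EXCEPT ALL TABLE pruning_actual)
                     UNION ALL
                     (TABLE pruning_actual EXCEPT ALL TABLE pruning_expected)) d'
            INTO diffs;

        DROP TABLE pruning_expected, pruning_actual;

        IF diffs <> 0 THEN
            RAISE EXCEPTION 'results differ with and without pruning for: %', qry;
        END IF;
    END;
    $$;

    -- e.g. SELECT assert_pruning_result_matches('SELECT * FROM boolpart WHERE a IS NOT true');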
] |
[
{
"msg_contents": "Hi,\n\nI am trying to understand the Plan tree for select queries. Can you\nplease help me with the below queries?\n\n1) Why is there a difference in plan tree for these two queries? User\ntable tidx1 has an index on column 'a' .\n2) Why do we do Index scan and not Bitmap Index Scan for catalog tables?\n\npostgres=# explain select * from pg_class where oid=2051;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Index Scan using pg_class_oid_index on pg_class (cost=0.27..8.29\nrows=1 width=265)\n Index Cond: (oid = '2051'::oid)\n(2 rows)\n\npostgres=# explain select * from tidx1 where a=1;\n QUERY PLAN\n--------------------------------------------------------------------\n Bitmap Heap Scan on tidx1 (cost=4.24..14.91 rows=11 width=8)\n Recheck Cond: (a = 1)\n -> Bitmap Index Scan on idx1 (cost=0.00..4.24 rows=11 width=0)\n Index Cond: (a = 1)\n(4 rows)\n\npostgres=# select * from tidx1;\n a | b\n---+---\n 1 | 2\n 2 | 2\n 3 | 2\n 4 | 2\n 5 | 2\n(5 rows)\n\nBest,\nAj\n\n\n",
"msg_date": "Tue, 11 Apr 2023 18:09:41 -0700",
"msg_from": "Ajay P S <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regarding Plan tree output(Index/Bitmap Scan)"
},
{
"msg_contents": "On Tue, Apr 11, 2023 at 06:09:41PM -0700, Ajay P S wrote:\n> I am trying to understand the Plan tree for select queries. Can you\n> please help me with the below queries?\n> \n> 1) Why is there a difference in plan tree for these two queries? User\n> table tidx1 has an index on column 'a' .\n\nBased on the query planner's cost estimate of the different scans.\n\n> 2) Why do we do Index scan and not Bitmap Index Scan for catalog tables?\n\nThere's no reason why it can't happen in general.\n\nBut you queried pg_class on a unique column, returning at most one row.\nA bitmap couldn't help by making the I/O more sequential. It can only\nadd overhead.\n\nYou can compare the costs of various plans by running EXPLAIN with\nvarious enable_* GUCs to off.\n\nBTW, your question should be directed to another list - this list is for\nbug reports and development.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 11 Apr 2023 20:33:23 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regarding Plan tree output(Index/Bitmap Scan)"
}
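To make the suggested comparison concrete, something along these lines shows what the planner estimates for the bitmap alternative on the catalog query (illustrative only; costs will differ per system):

    SET enable_indexscan = off;   -- nudge the planner away from the plain index scan
    EXPLAIN SELECT * FROM pg_class WHERE oid = 2051;
    RESET enable_indexscan;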
] |
[
{
"msg_contents": "Over on [1], Tim reported that the planner is making some bad choices\nwhen the plan contains a WindowFunc which requires reading all of, or\na large portion of the WindowAgg subnode in order to produce the first\nWindowAgg row.\n\nFor example:\n\nEXPLAIN (ANALYZE, TIMING OFF)\nSELECT COUNT(*) OVER ()\nFROM tenk1 t1 INNER JOIN tenk1 t2 ON t1.unique1 = t2.tenthous\nLIMIT 1;\n\nWith master, we get the following plan:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.29..0.67 rows=1 width=8) (actual time=47.491..47.492\nrows=1 loops=1)\n -> WindowAgg (cost=0.29..3815.00 rows=10000 width=8) (actual\ntime=47.489..47.490 rows=1 loops=1)\n -> Nested Loop (cost=0.29..3690.00 rows=10000 width=0)\n(actual time=0.026..42.972 rows=10000 loops=1)\n -> Seq Scan on tenk1 t2 (cost=0.00..445.00 rows=10000\nwidth=4) (actual time=0.009..1.734 rows=10000 loops=1)\n -> Index Only Scan using tenk1_unique1 on tenk1 t1\n(cost=0.29..0.31 rows=1 width=4) (actual time=0.003..0.004 rows=1\nloops=10000)\n Index Cond: (unique1 = t2.tenthous)\n Heap Fetches: 0\n Planning Time: 0.420 ms\n Execution Time: 48.107 ms\n\nYou can see that the time to get the first WindowAgg row (47.489 ms)\nis not well aligned to the startup cost (0.29). This effectively\ncauses the planner to choose a Nested Loop plan as it thinks it'll\nread just 1 row from the join. Due to the OVER (), we'll read all\nrows! Not good.\n\nIt's not hard to imagine that a slightly different schema could yield\na *far* worse plan if it opted to use a non-parameterised nested loop\nplan and proceed to read all rows from it.\n\nWith the attached patch, that turns into:\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=928.02..928.02 rows=1 width=8) (actual\ntime=29.308..29.310 rows=1 loops=1)\n -> WindowAgg (cost=928.02..928.07 rows=10000 width=8) (actual\ntime=29.306..29.308 rows=1 loops=1)\n -> Hash Join (cost=395.57..803.07 rows=10000 width=0)\n(actual time=10.674..22.032 rows=10000 loops=1)\n Hash Cond: (t1.unique1 = t2.tenthous)\n -> Index Only Scan using tenk1_unique1 on tenk1 t1\n(cost=0.29..270.29 rows=10000 width=4) (actual time=0.036..4.961\nrows=10000 loops=1)\n Heap Fetches: 0\n -> Hash (cost=270.29..270.29 rows=10000 width=4)\n(actual time=10.581..10.582 rows=10000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 480kB\n -> Index Only Scan using tenk1_thous_tenthous on\ntenk1 t2 (cost=0.29..270.29 rows=10000 width=4) (actual\ntime=0.055..5.437 rows=10000 loops=1)\n Heap Fetches: 0\n Planning Time: 2.415 ms\n Execution Time: 30.554 ms\n\n\nI'm not sure if we should consider backpatching a fix for this bug.\nWe tend not to commit stuff that would destabilise plans in the back\nbranches. On the other hand, it's fairly hard to imagine how we\ncould make this much worse even given bad estimates.\n\nI do think we should fix this in v16, however.\n\nI'll add this to the \"Older bugs affecting stable branches\" section of\nthe PG 16 open items list\n\nDavid\n\n[1] https://postgr.es/m/[email protected]",
"msg_date": "Wed, 12 Apr 2023 21:03:48 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
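Before the fix, one way to see how much the underestimated startup cost hurts is to take the nested loop option away and compare the resulting plan and runtime (illustrative only; timings will vary):

    SET enable_nestloop = off;
    EXPLAIN (ANALYZE, TIMING OFF)
    SELECT COUNT(*) OVER ()
    FROM tenk1 t1 INNER JOIN tenk1 t2 ON t1.unique1 = t2.tenthous
    LIMIT 1;
    RESET enable_nestloop;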
{
"msg_contents": "On Wed, Apr 12, 2023 at 5:04 PM David Rowley <[email protected]> wrote:\n\n>\n> With the attached patch, that turns into:\n>\n\nThe concept of startup_tuples for a WindowAgg looks good to me, but I\ncan't follow up with the below line:\n\n+ return clamp_row_est(partition_tuples * DEFAULT_INEQ_SEL);\n\n# select count(*) over() from tenk1 limit 1;\n count\n-------\n 10000 --> We need to scan all the tuples.\n\nShould we just return clamp_row_est(partition_tuples)?\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Wed, Apr 12, 2023 at 5:04 PM David Rowley <[email protected]> wrote:\nWith the attached patch, that turns into:The concept of startup_tuples for a WindowAgg looks good to me, but I can't follow up with the below line:+\treturn clamp_row_est(partition_tuples * DEFAULT_INEQ_SEL);# select count(*) over() from tenk1 limit 1; count------- 10000 --> We need to scan all the tuples. Should we just return clamp_row_est(partition_tuples)? -- Best RegardsAndy Fan",
"msg_date": "Wed, 12 Apr 2023 22:28:34 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": ".On Thu, 13 Apr 2023 at 02:28, Andy Fan <[email protected]> wrote:\n> The concept of startup_tuples for a WindowAgg looks good to me, but I\n> can't follow up with the below line:\n>\n> + return clamp_row_est(partition_tuples * DEFAULT_INEQ_SEL);\n>\n> # select count(*) over() from tenk1 limit 1;\n> count\n> -------\n> 10000 --> We need to scan all the tuples.\n>\n> Should we just return clamp_row_est(partition_tuples)?\n\nFor the case you've shown, it will. It's handled by this code:\n\nif (wc->orderClause == NIL)\n return clamp_row_est(partition_tuples);\n\nIt would take something like the following to hit the code you're\nconcerned about:\n\nexplain select count(*) over(order by unique1 rows between unbounded\npreceding and 10*random() following) from tenk1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n WindowAgg (cost=140.23..420.29 rows=10000 width=12)\n -> Index Only Scan using tenk1_unique1 on tenk1\n(cost=0.29..270.29 rows=10000 width=4)\n(2 rows)\n\nYou can see the startup cost is about 33% of the total cost for that,\nwhich is from the DEFAULT_INEQ_SEL. I'm not exactly set on that\nhaving to be DEFAULT_INEQ_SEL, but I'm not really sure what we could\nput that's better. I don't really follow why assuming all rows are\nrequired is better. That'll just mean we favour cheap startup plans\nless, but there might be a case where a cheap startup plan is\nfavourable. I was opting for a happy medium when I thought to use\nDEFAULT_INEQ_SEL.\n\nI also see I might need to do a bit more work on this as the following\nis not handled correctly:\n\nselect count(*) over(rows between unbounded preceding and 10\nfollowing) from tenk1;\n\nit's assuming all rows due to lack of ORDER BY, but it seems like it\nshould be 10 rows due to the 10 FOLLOWING end bound.\n\nDavid\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:09:39 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 6:09 AM David Rowley <[email protected]> wrote:\n\n> .On Thu, 13 Apr 2023 at 02:28, Andy Fan <[email protected]> wrote:\n> > The concept of startup_tuples for a WindowAgg looks good to me, but I\n> > can't follow up with the below line:\n> >\n> > + return clamp_row_est(partition_tuples * DEFAULT_INEQ_SEL);\n> >\n> > # select count(*) over() from tenk1 limit 1;\n> > count\n> > -------\n> > 10000 --> We need to scan all the tuples.\n> >\n> > Should we just return clamp_row_est(partition_tuples)?\n>\n> For the case you've shown, it will. It's handled by this code:\n>\n> if (wc->orderClause == NIL)\n> return clamp_row_est(partition_tuples);\n>\n> My fault. I should have real debugging to double check my\nunderstanding, surely I will next time.\n\nIt would take something like the following to hit the code you're\n> concerned about:\n>\n> explain select count(*) over(order by unique1 rows between unbounded\n> preceding and 10*random() following) from tenk1;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------\n> WindowAgg (cost=140.23..420.29 rows=10000 width=12)\n> -> Index Only Scan using tenk1_unique1 on tenk1\n> (cost=0.29..270.29 rows=10000 width=4)\n> (2 rows)\n>\n> You can see the startup cost is about 33% of the total cost for that,\n> which is from the DEFAULT_INEQ_SEL. I'm not exactly set on that\n> having to be DEFAULT_INEQ_SEL, but I'm not really sure what we could\n> put that's better. I don't really follow why assuming all rows are\n> required is better. That'll just mean we favour cheap startup plans\n> less, but there might be a case where a cheap startup plan is\n> favourable. I was opting for a happy medium when I thought to use\n> DEFAULT_INEQ_SEL.\n>\n\nThat looks reasonable to me. My suggestion came from my misreading\nbefore, It was a bit late in my time zone when writing. Thanks for the\ndetailed explanation!\n\n\n>\n> I also see I might need to do a bit more work on this as the following\n> is not handled correctly:\n>\n> select count(*) over(rows between unbounded preceding and 10\n> following) from tenk1;\n>\n> it's assuming all rows due to lack of ORDER BY, but it seems like it\n> should be 10 rows due to the 10 FOLLOWING end bound.\n>\n>\nTrue to me.\n\n\n-- \nBest Regards\nAndy Fan\n\nOn Thu, Apr 13, 2023 at 6:09 AM David Rowley <[email protected]> wrote:.On Thu, 13 Apr 2023 at 02:28, Andy Fan <[email protected]> wrote:\n> The concept of startup_tuples for a WindowAgg looks good to me, but I\n> can't follow up with the below line:\n>\n> + return clamp_row_est(partition_tuples * DEFAULT_INEQ_SEL);\n>\n> # select count(*) over() from tenk1 limit 1;\n> count\n> -------\n> 10000 --> We need to scan all the tuples.\n>\n> Should we just return clamp_row_est(partition_tuples)?\n\nFor the case you've shown, it will. It's handled by this code:\n\nif (wc->orderClause == NIL)\n return clamp_row_est(partition_tuples);\nMy fault. I should have real debugging to double check myunderstanding, surely I will next time. 
\nIt would take something like the following to hit the code you're\nconcerned about:\n\nexplain select count(*) over(order by unique1 rows between unbounded\npreceding and 10*random() following) from tenk1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n WindowAgg (cost=140.23..420.29 rows=10000 width=12)\n -> Index Only Scan using tenk1_unique1 on tenk1\n(cost=0.29..270.29 rows=10000 width=4)\n(2 rows)\n\nYou can see the startup cost is about 33% of the total cost for that,\nwhich is from the DEFAULT_INEQ_SEL. I'm not exactly set on that\nhaving to be DEFAULT_INEQ_SEL, but I'm not really sure what we could\nput that's better. I don't really follow why assuming all rows are\nrequired is better. That'll just mean we favour cheap startup plans\nless, but there might be a case where a cheap startup plan is\nfavourable. I was opting for a happy medium when I thought to use\nDEFAULT_INEQ_SEL.That looks reasonable to me. My suggestion came from my misreadingbefore, It was a bit late in my time zone when writing. Thanks for thedetailed explanation! \n\nI also see I might need to do a bit more work on this as the following\nis not handled correctly:\n\nselect count(*) over(rows between unbounded preceding and 10\nfollowing) from tenk1;\n\nit's assuming all rows due to lack of ORDER BY, but it seems like it\nshould be 10 rows due to the 10 FOLLOWING end bound.True to me. -- Best RegardsAndy Fan",
"msg_date": "Thu, 13 Apr 2023 08:16:21 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 10:09, David Rowley <[email protected]> wrote:\n> I also see I might need to do a bit more work on this as the following\n> is not handled correctly:\n>\n> select count(*) over(rows between unbounded preceding and 10\n> following) from tenk1;\n>\n> it's assuming all rows due to lack of ORDER BY, but it seems like it\n> should be 10 rows due to the 10 FOLLOWING end bound.\n\nWell, as it turned out, it was quite a bit more work. The frame\noptions have had quite a few additions since I last looked in detail.\n\nI've attached v2 of the patch. I've included a DEBUG1 message which\nis useful to check what the estimate comes out as without having to\nhave a debugger attached all the time.\n\nHere are a few samples of the estimator getting things right:\n\n# select count(*) over (order by four range between unbounded\npreceding and 2 following exclude current row) from tenk1 limit 1;\nDEBUG: startup_tuples = 7499\n count\n-------\n 7499\n\n# select count(*) over (order by four rows between unbounded preceding\nand 4000 following) from tenk1 limit 1;\nDEBUG: startup_tuples = 4001\n count\n-------\n 4001\n\n# select count(*) over (order by four rows between unbounded preceding\nand 4000 following exclude group) from tenk1 limit 1;\nDEBUG: startup_tuples = 1501\n count\n-------\n 1501\n\nYou can see in each case, startup_tuples was estimated correctly as\nconfirmed by count(*) during execution.\n\nI've attached some more of these in sample_tests.txt, which all are\ncorrect with the caveat of get_windowclause_startup_tuples() never\nreturning 0 due to it using clamp_row_est(). In practice, that's a\nnon-issue due to the way the startup_tuples value is used to calculate\nthe startup costs.\n\nDavid",
"msg_date": "Thu, 13 Apr 2023 23:51:12 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Wed, 12 Apr 2023 at 21:03, David Rowley <[email protected]> wrote:\n> I'm not sure if we should consider backpatching a fix for this bug.\n> We tend not to commit stuff that would destabilise plans in the back\n> branches. On the other hand, it's fairly hard to imagine how we\n> could make this much worse even given bad estimates.\n>\n> I do think we should fix this in v16, however.\n>\n> I'll add this to the \"Older bugs affecting stable branches\" section of\n> the PG 16 open items list\n\nWhen I wrote the above, it was very soon after the feature freeze for\nPG16. I wondered, since we tend not to do cost changes as part of bug\nfixes due to not wanting to destabilise plans between minor versions\nif we could instead just fix it in PG16 given the freeze had *just*\nstarted. That's no longer the case, so I'm just going to move this\nout from where I added it in the PG16 Open items \"Live issues\" section\nand just add a July CF entry for it instead.\n\nDavid\n\n\n",
"msg_date": "Wed, 31 May 2023 12:59:23 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Wed, 31 May 2023 at 12:59, David Rowley <[email protected]> wrote:\n>\n> On Wed, 12 Apr 2023 at 21:03, David Rowley <[email protected]> wrote:\n> > I'll add this to the \"Older bugs affecting stable branches\" section of\n> > the PG 16 open items list\n>\n> When I wrote the above, it was very soon after the feature freeze for\n> PG16. I wondered, since we tend not to do cost changes as part of bug\n> fixes due to not wanting to destabilise plans between minor versions\n> if we could instead just fix it in PG16 given the freeze had *just*\n> started. That's no longer the case, so I'm just going to move this\n> out from where I added it in the PG16 Open items \"Live issues\" section\n> and just add a July CF entry for it instead.\n\nI'm keen to move this patch along. It's not a particularly\ninteresting patch and don't expect much interest in it, but I feel\nit's pretty important to have the planner not accidentally choose a\ncheap startup plan when a WindowAgg is going to fetch the entire\nsubplan's tuples.\n\nI've made another pass over the patch and made a bunch of cosmetic\nchanges. As far as mechanical changes, I only changed the EXCLUDE\nTIES and EXCLUDE GROUP behaviour when there is no ORDER BY clause in\nthe WindowClause. If there's no ORDER BY then subtracting 1.0 rows\nseems like the right thing to do rather than what the previous patch\ndid.\n\nI (temporarily) left the DEBUG1 elog in there if anyone wants to test\nfor themselves (saves debugger use). In the absence of that, I'm\nplanning on just pushing it to master only tomorrow. It seems fairly\nlow risk and unlikely to attract too much interest since it only\naffects startup costs of WindowAgg nodes. I'm currently thinking it's\na bad idea to backpatch this but I'd consider it more if someone else\nthought it was a good idea or if more people came along complaining\nabout poor plan choice in plans containing WindowAggs. Currently, it\nseems better not to destabilise plans in the back branches. (CC'd Tim,\nwho reported #17862, as he may have an opinion on this)\n\nThe only thought I had while looking at this again aside from what I\nchanged was if get_windowclause_startup_tuples() should go in\nselfuncs.c. I wondered if it would be neater to use\nconvert_numeric_to_scalar() instead of the code I had to add to\nconvert the (SMALL|BIG)INT Consts in <Const> FOLLOWING to double.\nAside from that reason, it seems we don't have many usages of\nDEFAULT_INEQ_SEL outside of selfuncs.c. 
I didn't feel strongly enough\nabout this to actually move the function.\n\nThe updated patch is attached.\n\nHere are the results of my testing (note the DEBUG message matches the\nCOUNT(*) result in all cases apart from one case where COUNT(*)\nreturns 0 and the estimated tuples is 1.0).\n\ncreate table ab (a int, b int);\ninsert into ab select a,b from generate_series(1,100) a,\ngenerate_series(1,100) b;\nanalyze ab;\nset client_min_messages=debug1;\n\n# select count(*) over () from ab limit 1;\nDEBUG: startup_tuples = 10000\n count\n-------\n 10000\n(1 row)\n\n\n# select count(*) over (partition by a) from ab limit 1;\nDEBUG: startup_tuples = 100\n count\n-------\n 100\n(1 row)\n\n\n# select count(*) over (partition by a order by b) from ab limit 1;\nDEBUG: startup_tuples = 1\n count\n-------\n 1\n(1 row)\n\n\n# select count(*) over (partition by a order by b rows between current\nrow and unbounded following) from ab limit 1;\nDEBUG: startup_tuples = 100\n count\n-------\n 100\n(1 row)\n\n\n# select count(*) over (partition by a order by b rows between current\nrow and 10 following) from ab limit 1;\nDEBUG: startup_tuples = 11\n count\n-------\n 11\n(1 row)\n\n\n# select count(*) over (partition by a order by b rows between current\nrow and 10 following exclude current row) from ab limit 1;\nDEBUG: startup_tuples = 10\n count\n-------\n 10\n(1 row)\n\n\n# select count(*) over (partition by a order by b rows between current\nrow and 10 following exclude ties) from ab limit 1;\nDEBUG: startup_tuples = 11\n count\n-------\n 11\n(1 row)\n\n\n# select count(*) over (partition by a order by b range between\ncurrent row and 10 following exclude ties) from ab limit 1;\nDEBUG: startup_tuples = 11\n count\n-------\n 11\n(1 row)\n\n\n# select count(*) over (partition by a order by b range between\ncurrent row and unbounded following exclude ties) from ab limit 1;\nDEBUG: startup_tuples = 100\n count\n-------\n 100\n(1 row)\n\n\n# select count(*) over (partition by a order by b range between\ncurrent row and unbounded following exclude group) from ab limit 1;\nDEBUG: startup_tuples = 99\n count\n-------\n 99\n(1 row)\n\n\n# select count(*) over (partition by a order by b groups between\ncurrent row and unbounded following exclude group) from ab limit 1;\nDEBUG: startup_tuples = 99\n count\n-------\n 99\n(1 row)\n\n\n# select count(*) over (partition by a rows between current row and\nunbounded following exclude group) from ab limit 1;\nDEBUG: startup_tuples = 1\n count\n-------\n 0\n(1 row)\n\n\n# select count(*) over (partition by a rows between current row and\nunbounded following exclude ties) from ab limit 1;\nDEBUG: startup_tuples = 1\n count\n-------\n 1\n(1 row)\n\n\n# select count(*) over (partition by a order by b rows between current\nrow and unbounded following exclude ties) from ab limit 1;\nDEBUG: startup_tuples = 100\n count\n-------\n 100\n(1 row)\n\n\n# select count(*) over (partition by a order by b rows between current\nrow and unbounded following exclude current row) from ab limit 1;\nDEBUG: startup_tuples = 99\n count\n-------\n 99\n(1 row)\n\n\n# select count(*) over (partition by a order by b range between\ncurrent row and 9223372036854775807 following exclude ties) from ab\nlimit 1;\nDEBUG: startup_tuples = 100\n count\n-------\n 100\n(1 row)\n\nDavid",
"msg_date": "Thu, 3 Aug 2023 16:50:21 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
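The DEBUG numbers in the message above follow a fairly simple pattern: the whole partition has to be read when there is no ORDER BY or when the frame runs to UNBOUNDED FOLLOWING, roughly 1 + N rows are read for ROWS ... N FOLLOWING, and only the first peer group is read otherwise, clamped to the partition size. The standalone C sketch below reproduces that arithmetic for illustration only; the function and enum names are invented here and this is not the get_windowclause_startup_tuples() code from the patch.

/*
 * Hypothetical sketch of the arithmetic behind the startup_tuples
 * estimates shown above.  Names are invented for illustration and are
 * not taken from the PostgreSQL sources.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum
{
	END_UNBOUNDED_FOLLOWING,	/* frame runs to the end of the partition */
	END_CURRENT_ROW,			/* frame ends at the current row / its peers */
	END_OFFSET_FOLLOWING		/* ROWS ... <N> FOLLOWING */
} FrameEnd;

static double
startup_tuples_sketch(double partition_tuples, bool has_order_by,
					  FrameEnd frame_end, double offset)
{
	double		tuples;

	if (!has_order_by || frame_end == END_UNBOUNDED_FOLLOWING)
		tuples = partition_tuples;	/* must read the whole partition */
	else if (frame_end == END_OFFSET_FOLLOWING)
		tuples = 1.0 + offset;		/* first row plus N following rows */
	else
		tuples = 1.0;				/* only the first peer group */

	/* cannot read more rows than the partition contains */
	if (tuples > partition_tuples)
		tuples = partition_tuples;

	return tuples;
}

int
main(void)
{
	/* 10000-row table with 100 distinct partition keys => 100 rows/partition */
	printf("%.0f\n", startup_tuples_sketch(100, true, END_CURRENT_ROW, 0));			/* 1 */
	printf("%.0f\n", startup_tuples_sketch(100, true, END_OFFSET_FOLLOWING, 10));		/* 11 */
	printf("%.0f\n", startup_tuples_sketch(100, true, END_UNBOUNDED_FOLLOWING, 0));	/* 100 */
	printf("%.0f\n", startup_tuples_sketch(100, false, END_CURRENT_ROW, 0));			/* 100 */
	return 0;
}

Run as-is, the four calls print 1, 11, 100 and 100, matching the "partition by a order by b", "rows between current row and 10 following", "rows between current row and unbounded following" and "partition by a" cases in the DEBUG output above.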
{
"msg_contents": "Hi David:\n\nSorry for feedback at the last minute! I study the patch and find the\nfollowing cases.\n\n1. ORDER BY or PARTITION BY\n\nselect *, count(two) over (order by unique1) from tenk1 limit 1;\nDEBUG: startup_tuples = 1\nDEBUG: startup_tuples = 1\n\nselect *, count(two) over (partition by unique1) from tenk1 limit 1;\nDEBUG: startup_tuples = 1\nDEBUG: startup_tuples = 1\n\nDue to the Executor of nodeWindowAgg, we have to fetch the next tuple\nuntil it mismatches with the current one, then we can calculate the\nWindowAgg function. In the current patch, we didn't count the\nmismatched tuple. I verified my thought with 'break at IndexNext'\nfunction and see IndexNext is called twice, so in the above case the\nstartup_tuples should be 2?\n\n\n2. ORDER BY and PARTITION BY\n\nselect two, hundred,\ncount(two) over (partition by ten order by hundred)\nfrom tenk1 limit 1;\n\nDEBUG: startup_tuples = 10\n two | hundred | count\n-----+---------+-------\n 0 | 0 | 100\n\nIf we consider the mismatched tuples, it should be 101?\n\n3. As we can see the log for startup_tuples is logged twice sometimes,\nthe reason is because it is used in cost_windowagg, so it is calculated\nfor every create_one_window_path. I think the startup_tuples should be\nindependent with the physical path, maybe we can cache it somewhere to\nsave some planning cycles?\n\nThanks for the patch!\n\n-- \nBest Regards\nAndy Fan\n\nHi David:Sorry for feedback at the last minute! I study the patch and find thefollowing cases.1. ORDER BY or PARTITION BYselect *, count(two) over (order by unique1) from tenk1 limit 1;DEBUG: startup_tuples = 1DEBUG: startup_tuples = 1select *, count(two) over (partition by unique1) from tenk1 limit 1;DEBUG: startup_tuples = 1DEBUG: startup_tuples = 1Due to the Executor of nodeWindowAgg, we have to fetch the next tupleuntil it mismatches with the current one, then we can calculate theWindowAgg function. In the current patch, we didn't count themismatched tuple. I verified my thought with 'break at IndexNext'function and see IndexNext is called twice, so in the above case thestartup_tuples should be 2?2. ORDER BY and PARTITION BYselect two, hundred,count(two) over (partition by ten order by hundred)from tenk1 limit 1;DEBUG: startup_tuples = 10 two | hundred | count-----+---------+------- 0 | 0 | 100If we consider the mismatched tuples, it should be 101?3. As we can see the log for startup_tuples is logged twice sometimes,the reason is because it is used in cost_windowagg, so it is calculatedfor every create_one_window_path. I think the startup_tuples should beindependent with the physical path, maybe we can cache it somewhere tosave some planning cycles?Thanks for the patch!-- Best RegardsAndy Fan",
"msg_date": "Thu, 3 Aug 2023 14:49:23 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "Thanks for having a look at this.\n\nOn Thu, 3 Aug 2023 at 18:49, Andy Fan <[email protected]> wrote:\n> 1. ORDER BY or PARTITION BY\n>\n> select *, count(two) over (order by unique1) from tenk1 limit 1;\n> DEBUG: startup_tuples = 1\n> DEBUG: startup_tuples = 1\n>\n> select *, count(two) over (partition by unique1) from tenk1 limit 1;\n> DEBUG: startup_tuples = 1\n> DEBUG: startup_tuples = 1\n>\n> Due to the Executor of nodeWindowAgg, we have to fetch the next tuple\n> until it mismatches with the current one, then we can calculate the\n> WindowAgg function. In the current patch, we didn't count the\n> mismatched tuple. I verified my thought with 'break at IndexNext'\n> function and see IndexNext is called twice, so in the above case the\n> startup_tuples should be 2?\n\nYou're probably right here. I'd considered that it wasn't that\ncritical and aimed to attempt to keep the estimate close to the number\nof rows that'll be aggregated. I think that's probably not the best\nthing to do as if you consider the EXCLUDE options, those just exclude\ntuples from aggregation, it does not mean we read fewer tuples from\nthe subnode. I've updated the patch accordingly.\n\n> 2. ORDER BY and PARTITION BY\n>\n> select two, hundred,\n> count(two) over (partition by ten order by hundred)\n> from tenk1 limit 1;\n>\n> DEBUG: startup_tuples = 10\n> two | hundred | count\n> -----+---------+-------\n> 0 | 0 | 100\n>\n> If we consider the mismatched tuples, it should be 101?\n\nI don't really see how we could do better with the current level of\nstatistics. The stats don't know that there are only 10 distinct\n\"hundred\" values for rows which have ten=1. All we have is n_distinct\non tenk1.hundred, which is 100.\n\n> 3. As we can see the log for startup_tuples is logged twice sometimes,\n> the reason is because it is used in cost_windowagg, so it is calculated\n> for every create_one_window_path. I think the startup_tuples should be\n> independent with the physical path, maybe we can cache it somewhere to\n> save some planning cycles?\n\nI wondered about that too but I ended up writing off the idea of\ncaching because the input_tuple count comes from the Path and the\nextra calls are coming from other Paths, which could well have some\ncompletely different value for input_tuples.\n\nDavid",
"msg_date": "Thu, 3 Aug 2023 23:29:05 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Thu, Aug 3, 2023 at 7:29 PM David Rowley <[email protected]> wrote:\n\n> Thanks for having a look at this.\n>\n> On Thu, 3 Aug 2023 at 18:49, Andy Fan <[email protected]> wrote:\n> > 1. ORDER BY or PARTITION BY\n> >\n> > select *, count(two) over (order by unique1) from tenk1 limit 1;\n> > DEBUG: startup_tuples = 1\n> > DEBUG: startup_tuples = 1\n> >\n> > select *, count(two) over (partition by unique1) from tenk1 limit 1;\n> > DEBUG: startup_tuples = 1\n> > DEBUG: startup_tuples = 1\n> >\n> > Due to the Executor of nodeWindowAgg, we have to fetch the next tuple\n> > until it mismatches with the current one, then we can calculate the\n> > WindowAgg function. In the current patch, we didn't count the\n> > mismatched tuple. I verified my thought with 'break at IndexNext'\n> > function and see IndexNext is called twice, so in the above case the\n> > startup_tuples should be 2?\n>\n> You're probably right here. I'd considered that it wasn't that\n> critical and aimed to attempt to keep the estimate close to the number\n> of rows that'll be aggregated. I think that's probably not the best\n> thing to do as if you consider the EXCLUDE options, those just exclude\n> tuples from aggregation, it does not mean we read fewer tuples from\n> the subnode. I've updated the patch accordingly.\n>\n\nThanks.\n\n\n>\n> > 2. ORDER BY and PARTITION BY\n> >\n> > select two, hundred,\n> > count(two) over (partition by ten order by hundred)\n> > from tenk1 limit 1;\n> >\n> > DEBUG: startup_tuples = 10\n> > two | hundred | count\n> > -----+---------+-------\n> > 0 | 0 | 100\n> >\n> > If we consider the mismatched tuples, it should be 101?\n>\n> I don't really see how we could do better with the current level of\n> statistics. The stats don't know that there are only 10 distinct\n> \"hundred\" values for rows which have ten=1. All we have is n_distinct\n> on tenk1.hundred, which is 100.\n\n\nYes, actually I didn't figure it out before / after my posting.\n\n>\n\n\n> > 3. As we can see the log for startup_tuples is logged twice sometimes,\n> > the reason is because it is used in cost_windowagg, so it is calculated\n> > for every create_one_window_path. I think the startup_tuples should be\n> > independent with the physical path, maybe we can cache it somewhere to\n> > save some planning cycles?\n>\n> I wondered about that too but I ended up writing off the idea of\n> caching because the input_tuple count comes from the Path and the\n> extra calls are coming from other Paths, which could well have some\n> completely different value for input_tuples.\n>\n>\nLooks reasonable.\n\nI have checked the updated patch and LGTM.\n\n-- \nBest Regards\nAndy Fan\n\nOn Thu, Aug 3, 2023 at 7:29 PM David Rowley <[email protected]> wrote:Thanks for having a look at this.\n\nOn Thu, 3 Aug 2023 at 18:49, Andy Fan <[email protected]> wrote:\n> 1. ORDER BY or PARTITION BY\n>\n> select *, count(two) over (order by unique1) from tenk1 limit 1;\n> DEBUG: startup_tuples = 1\n> DEBUG: startup_tuples = 1\n>\n> select *, count(two) over (partition by unique1) from tenk1 limit 1;\n> DEBUG: startup_tuples = 1\n> DEBUG: startup_tuples = 1\n>\n> Due to the Executor of nodeWindowAgg, we have to fetch the next tuple\n> until it mismatches with the current one, then we can calculate the\n> WindowAgg function. In the current patch, we didn't count the\n> mismatched tuple. 
I verified my thought with 'break at IndexNext'\n> function and see IndexNext is called twice, so in the above case the\n> startup_tuples should be 2?\n\nYou're probably right here. I'd considered that it wasn't that\ncritical and aimed to attempt to keep the estimate close to the number\nof rows that'll be aggregated. I think that's probably not the best\nthing to do as if you consider the EXCLUDE options, those just exclude\ntuples from aggregation, it does not mean we read fewer tuples from\nthe subnode. I've updated the patch accordingly.\n\nThanks.\n\n\n>\n> > 2. ORDER BY and PARTITION BY\n> >\n> > select two, hundred,\n> > count(two) over (partition by ten order by hundred)\n> > from tenk1 limit 1;\n> >\n> > DEBUG: startup_tuples = 10\n> > two | hundred | count\n> > -----+---------+-------\n> > 0 | 0 | 100\n> >\n> > If we consider the mismatched tuples, it should be 101?\n>\n> I don't really see how we could do better with the current level of\n> statistics. The stats don't know that there are only 10 distinct\n> \"hundred\" values for rows which have ten=1. All we have is n_distinct\n> on tenk1.hundred, which is 100.\n\n\nYes, actually I didn't figure it out before / after my posting.\n\n>\n\n\n> > 3. As we can see the log for startup_tuples is logged twice sometimes,\n> > the reason is because it is used in cost_windowagg, so it is calculated\n> > for every create_one_window_path. I think the startup_tuples should be\n> > independent with the physical path, maybe we can cache it somewhere to\n> > save some planning cycles?\n>\n> I wondered about that too but I ended up writing off the idea of\n> caching because the input_tuple count comes from the Path and the\n> extra calls are coming from other Paths, which could well have some\n> completely different value for input_tuples.\n>\n>\nLooks reasonable.\n\nI have checked the updated patch and LGTM.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Thu, 3 Aug 2023 22:02:10 +0800",
"msg_from": "Andy Fan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Thu, 3 Aug 2023 at 05:50, David Rowley <[email protected]> wrote:\n\n> I'm currently thinking it's\n> a bad idea to backpatch this but I'd consider it more if someone else\n> thought it was a good idea or if more people came along complaining\n> about poor plan choice in plans containing WindowAggs. Currently, it\n> seems better not to destabilise plans in the back branches. (CC'd Tim,\n> who reported #17862, as he may have an opinion on this)\n>\n\nI agree it's better not to destabilise plans in the back branches.\n\nTim\n\nOn Thu, 3 Aug 2023 at 05:50, David Rowley <[email protected]> wrote:I'm currently thinking it's\na bad idea to backpatch this but I'd consider it more if someone else\nthought it was a good idea or if more people came along complaining\nabout poor plan choice in plans containing WindowAggs. Currently, it\nseems better not to destabilise plans in the back branches. (CC'd Tim,\nwho reported #17862, as he may have an opinion on this)I agree it's better not to destabilise plans in the back branches.Tim",
"msg_date": "Thu, 3 Aug 2023 19:08:28 +0100",
"msg_from": "Tim Palmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Fri, 4 Aug 2023 at 02:02, Andy Fan <[email protected]> wrote:\n> I have checked the updated patch and LGTM.\n\nThank you for reviewing. I've pushed the patch to master only.\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Aug 2023 09:28:51 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Fri, Aug 04, 2023 at 09:28:51AM +1200, David Rowley wrote:\n> Thank you for reviewing. I've pushed the patch to master only.\n\nI'm seeing some reliable test failures for 32-bit builds on cfbot [0]. At\na glance, it looks like the relations are swapped in the plan.\n\n[0] https://api.cirrus-ci.com/v1/artifact/task/5728127981191168/testrun/build-32/testrun/regress/regress/regression.diffs\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 3 Aug 2023 16:54:03 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Aug 04, 2023 at 09:28:51AM +1200, David Rowley wrote:\n>> Thank you for reviewing. I've pushed the patch to master only.\n\n> I'm seeing some reliable test failures for 32-bit builds on cfbot [0]. At\n> a glance, it looks like the relations are swapped in the plan.\n\nYeah, I got the same result in a 32-bit FreeBSD VM. Probably, the two\nplans are of effectively-identical estimated cost, and there's some\nroundoff effect in those estimates that differs between machines with\n4-byte and 8-byte MAXALIGN.\n\nYou could likely stabilize the plan choice by joining two tables that\naren't of identical size -- maybe add an additional WHERE constraint\non one of the tables?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Aug 2023 20:46:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "On Fri, 4 Aug 2023 at 11:54, Nathan Bossart <[email protected]> wrote:\n> I'm seeing some reliable test failures for 32-bit builds on cfbot [0]. At\n> a glance, it looks like the relations are swapped in the plan.\n\nThank you for the report. I've just pushed a patch which I'm hoping will fix it.\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Aug 2023 13:28:42 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> Thank you for the report. I've just pushed a patch which I'm hoping will fix it.\n\nPasses now on my VM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Aug 2023 22:23:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix incorrect start up costs for WindowAgg paths (bug #17862)"
}
] |
[
{
"msg_contents": "I came across $subject and reduced the repro query as below.\n\ncreate table a (i int);\ncreate table b (i int);\ninsert into a values (1);\ninsert into b values (2);\nupdate b set i = 2;\n\nset min_parallel_table_scan_size to 0;\nset parallel_tuple_cost to 0;\nset parallel_setup_cost to 0;\n\n# explain (costs off) select * from a full join b on a.i = b.i;\n QUERY PLAN\n------------------------------------------\n Gather\n Workers Planned: 2\n -> Parallel Hash Full Join\n Hash Cond: (a.i = b.i)\n -> Parallel Seq Scan on a\n -> Parallel Hash\n -> Parallel Seq Scan on b\n(7 rows)\n\n# select * from a full join b on a.i = b.i;\n i | i\n---+---\n 1 |\n(1 row)\n\nTuple (NULL, 2) is missing from the results.\n\nThanks\nRichard\n\nI came across $subject and reduced the repro query as below.create table a (i int);create table b (i int);insert into a values (1);insert into b values (2);update b set i = 2;set min_parallel_table_scan_size to 0;set parallel_tuple_cost to 0;set parallel_setup_cost to 0;# explain (costs off) select * from a full join b on a.i = b.i; QUERY PLAN------------------------------------------ Gather Workers Planned: 2 -> Parallel Hash Full Join Hash Cond: (a.i = b.i) -> Parallel Seq Scan on a -> Parallel Hash -> Parallel Seq Scan on b(7 rows)# select * from a full join b on a.i = b.i; i | i---+--- 1 |(1 row)Tuple (NULL, 2) is missing from the results.ThanksRichard",
"msg_date": "Wed, 12 Apr 2023 19:35:53 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 7:36 AM Richard Guo <[email protected]> wrote:\n>\n> I came across $subject and reduced the repro query as below.\n>\n> create table a (i int);\n> create table b (i int);\n> insert into a values (1);\n> insert into b values (2);\n> update b set i = 2;\n>\n> set min_parallel_table_scan_size to 0;\n> set parallel_tuple_cost to 0;\n> set parallel_setup_cost to 0;\n>\n> # explain (costs off) select * from a full join b on a.i = b.i;\n> QUERY PLAN\n> ------------------------------------------\n> Gather\n> Workers Planned: 2\n> -> Parallel Hash Full Join\n> Hash Cond: (a.i = b.i)\n> -> Parallel Seq Scan on a\n> -> Parallel Hash\n> -> Parallel Seq Scan on b\n> (7 rows)\n>\n> # select * from a full join b on a.i = b.i;\n> i | i\n> ---+---\n> 1 |\n> (1 row)\n>\n> Tuple (NULL, 2) is missing from the results.\n\nThanks so much for reporting this, Richard. This is a fantastic minimal\nrepro!\n\nSo, I looked into this, and it seems that, as you can imagine, the tuple\nin b is hot updated, resulting in a heap only tuple.\n\n t_ctid | raw_flags\n--------+----------------------------------------------------------------------\n (0,2) | {HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_HOT_UPDATED}\n (0,2) | {HEAP_XMIN_COMMITTED,HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\n\nIn ExecParallelScanHashTableForUnmatched() we don't emit the\nNULL-extended tuple because HeapTupleHeaderHasMatch() is true for our\ndesired tuple.\n\n while (hashTuple != NULL)\n {\n if (!HeapTupleHeaderHasMatch(HJTUPLE_MINTUPLE(hashTuple)))\n {\n\nHeapTupleHeaderHasMatch() checks if HEAP_TUPLE_HAS_MATCH is set.\n\nIn htup_details.h, you will see that HEAP_TUPLE_HAS_MATCH is defined as\nHEAP_ONLY_TUPLE\n/*\n * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins. It is\n * only used in tuples that are in the hash table, and those don't need\n * any visibility information, so we can overlay it on a visibility flag\n * instead of using up a dedicated bit.\n */\n#define HEAP_TUPLE_HAS_MATCH HEAP_ONLY_TUPLE /* tuple has a join match */\n\nIf you redefine HEAP_TUPLE_HAS_MATCH as something that isn't already\nused, say 0x1800, the query returns correct results.\n\n QUERY PLAN\n------------------------------------------\n Gather\n Workers Planned: 2\n -> Parallel Hash Full Join\n Hash Cond: (a.i = b.i)\n -> Parallel Seq Scan on a\n -> Parallel Hash\n -> Parallel Seq Scan on b\n(7 rows)\n\n i | i\n---+---\n 1 |\n | 2\n(2 rows)\n\nThe question is, why does this only happen for a parallel full hash join?\n\nunpa\npostgres=# explain (costs off) select * from a full join b on a.i = b.i;\n QUERY PLAN\n---------------------------\n Hash Full Join\n Hash Cond: (a.i = b.i)\n -> Seq Scan on a\n -> Hash\n -> Seq Scan on b\n(5 rows)\n\npostgres=# select * from a full join b on a.i = b.i;\n i | i\n---+---\n 1 |\n | 2\n(2 rows)\n\nI imagine it has something to do with what tuples are put in the\nparallel hashtable. I am about to investigate that but just wanted to\nshare what I had so far.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 12 Apr 2023 10:57:17 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
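Since the flag overlay quoted above is the crux of the bug, a tiny standalone illustration may help: a heap-only (HOT) tuple copied into the hash table with its infomask intact looks as though it already found a join match, so a full join never emits it NULL-extended. The program below uses a mock tuple header and an assumed bit value, not the real HeapTupleHeaderData definitions; only the idea of overlaying the match flag on the HOT flag comes from the message above.

/*
 * Mock illustration of the HEAP_TUPLE_HAS_MATCH / HEAP_ONLY_TUPLE overlay.
 * The struct, macro names and the 0x8000 value are simplified stand-ins,
 * not the htup_details.h definitions.
 */
#include <stdint.h>
#include <stdio.h>

#define MOCK_HEAP_ONLY_TUPLE	0x8000					/* set on HOT tuples */
#define MOCK_TUPLE_HAS_MATCH	MOCK_HEAP_ONLY_TUPLE	/* overlaid join-match flag */

typedef struct MockTupleHeader
{
	uint16_t	t_infomask2;
} MockTupleHeader;

#define MockHasMatch(tup)	(((tup)->t_infomask2 & MOCK_TUPLE_HAS_MATCH) != 0)
#define MockClearMatch(tup)	((tup)->t_infomask2 &= ~MOCK_TUPLE_HAS_MATCH)

int
main(void)
{
	/* a heap-only tuple arriving from the scan, flag still set */
	MockTupleHeader hashed = {MOCK_HEAP_ONLY_TUPLE};

	/* copied into the hash table without clearing: already looks "matched" */
	printf("before clear: has match = %d\n", MockHasMatch(&hashed));

	/* the clear that the parallel insert path was missing */
	MockClearMatch(&hashed);
	printf("after clear:  has match = %d\n", MockHasMatch(&hashed));

	return 0;
}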
{
"msg_contents": "Hi,\n\nOn 2023-04-12 10:57:17 -0400, Melanie Plageman wrote:\n> HeapTupleHeaderHasMatch() checks if HEAP_TUPLE_HAS_MATCH is set.\n> \n> In htup_details.h, you will see that HEAP_TUPLE_HAS_MATCH is defined as\n> HEAP_ONLY_TUPLE\n> /*\n> * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins. It is\n> * only used in tuples that are in the hash table, and those don't need\n> * any visibility information, so we can overlay it on a visibility flag\n> * instead of using up a dedicated bit.\n> */\n> #define HEAP_TUPLE_HAS_MATCH HEAP_ONLY_TUPLE /* tuple has a join match */\n> \n> If you redefine HEAP_TUPLE_HAS_MATCH as something that isn't already\n> used, say 0x1800, the query returns correct results.\n> [...]\n> The question is, why does this only happen for a parallel full hash join?\n\nI'd guess that PHJ code is missing a HeapTupleHeaderClearMatch() somewhere,\nbut the non-parallel case isn't.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Apr 2023 11:14:52 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 2:14 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-04-12 10:57:17 -0400, Melanie Plageman wrote:\n> > HeapTupleHeaderHasMatch() checks if HEAP_TUPLE_HAS_MATCH is set.\n> >\n> > In htup_details.h, you will see that HEAP_TUPLE_HAS_MATCH is defined as\n> > HEAP_ONLY_TUPLE\n> > /*\n> > * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins. It is\n> > * only used in tuples that are in the hash table, and those don't need\n> > * any visibility information, so we can overlay it on a visibility flag\n> > * instead of using up a dedicated bit.\n> > */\n> > #define HEAP_TUPLE_HAS_MATCH HEAP_ONLY_TUPLE /* tuple has a join match */\n> >\n> > If you redefine HEAP_TUPLE_HAS_MATCH as something that isn't already\n> > used, say 0x1800, the query returns correct results.\n> > [...]\n> > The question is, why does this only happen for a parallel full hash join?\n>\n> I'd guess that PHJ code is missing a HeapTupleHeaderClearMatch() somewhere,\n> but the non-parallel case isn't.\n\nIndeed. Thanks! This diff fixes the case Richard provided.\n\ndiff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c\nindex a45bd3a315..54c06c5eb3 100644\n--- a/src/backend/executor/nodeHash.c\n+++ b/src/backend/executor/nodeHash.c\n@@ -1724,6 +1724,7 @@ retry:\n /* Store the hash value in the HashJoinTuple header. */\n hashTuple->hashvalue = hashvalue;\n memcpy(HJTUPLE_MINTUPLE(hashTuple), tuple, tuple->t_len);\n+ HeapTupleHeaderClearMatch(HJTUPLE_MINTUPLE(hashTuple));\n\n /* Push it onto the front of the bucket's list */\n ExecParallelHashPushTuple(&hashtable->buckets.shared[bucketno],\n\nI will propose a patch that includes this change and a test.\n\nI just want to convince myself that ExecParallelHashTableInsertCurrentBatch()\ncovers the non-batch 0 cases and we don't need to add something to\nsts_puttuple().\n\n- Melanie\n\n\n",
"msg_date": "Wed, 12 Apr 2023 14:59:11 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 2:59 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Wed, Apr 12, 2023 at 2:14 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-04-12 10:57:17 -0400, Melanie Plageman wrote:\n> > > HeapTupleHeaderHasMatch() checks if HEAP_TUPLE_HAS_MATCH is set.\n> > >\n> > > In htup_details.h, you will see that HEAP_TUPLE_HAS_MATCH is defined as\n> > > HEAP_ONLY_TUPLE\n> > > /*\n> > > * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins. It is\n> > > * only used in tuples that are in the hash table, and those don't need\n> > > * any visibility information, so we can overlay it on a visibility flag\n> > > * instead of using up a dedicated bit.\n> > > */\n> > > #define HEAP_TUPLE_HAS_MATCH HEAP_ONLY_TUPLE /* tuple has a join match */\n> > >\n> > > If you redefine HEAP_TUPLE_HAS_MATCH as something that isn't already\n> > > used, say 0x1800, the query returns correct results.\n> > > [...]\n> > > The question is, why does this only happen for a parallel full hash join?\n> >\n> > I'd guess that PHJ code is missing a HeapTupleHeaderClearMatch() somewhere,\n> > but the non-parallel case isn't.\n>\n> Indeed. Thanks! This diff fixes the case Richard provided.\n>\n> diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c\n> index a45bd3a315..54c06c5eb3 100644\n> --- a/src/backend/executor/nodeHash.c\n> +++ b/src/backend/executor/nodeHash.c\n> @@ -1724,6 +1724,7 @@ retry:\n> /* Store the hash value in the HashJoinTuple header. */\n> hashTuple->hashvalue = hashvalue;\n> memcpy(HJTUPLE_MINTUPLE(hashTuple), tuple, tuple->t_len);\n> + HeapTupleHeaderClearMatch(HJTUPLE_MINTUPLE(hashTuple));\n>\n> /* Push it onto the front of the bucket's list */\n> ExecParallelHashPushTuple(&hashtable->buckets.shared[bucketno],\n>\n> I will propose a patch that includes this change and a test.\n>\n> I just want to convince myself that ExecParallelHashTableInsertCurrentBatch()\n> covers the non-batch 0 cases and we don't need to add something to\n> sts_puttuple().\n\nSo, indeed, tuples in batches after batch 0 already had their match bit\ncleared by ExecParallelHashTableInsertCurrentBatch().\n\nAttached patch includes the fix for ExecParallelHashTableInsert() as\nwell as a test. I toyed with adapting one of the existing parallel full\nhash join tests to cover this case, however, I think Richard's repro is\nmuch more clear. Maybe it is worth throwing in a few updates to the\ntables in the existing queries to provide coverage for the other\nHeapTupleHeaderClearMatch() calls in the code, though.\n\n- Melanie",
"msg_date": "Wed, 12 Apr 2023 17:48:27 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 9:48 AM Melanie Plageman\n<[email protected]> wrote:\n> Attached patch includes the fix for ExecParallelHashTableInsert() as\n> well as a test. I toyed with adapting one of the existing parallel full\n> hash join tests to cover this case, however, I think Richard's repro is\n> much more clear. Maybe it is worth throwing in a few updates to the\n> tables in the existing queries to provide coverage for the other\n> HeapTupleHeaderClearMatch() calls in the code, though.\n\nOof. Analysis and code LGTM.\n\nI thought about the way non-parallel HJ also clears the match bits\nwhen re-using the hash table for rescans. PHJ doesn't keep hash\ntables across rescans. (There's no fundamental reason why it\ncouldn't, but there was some complication and it seemed absurd to have\nNestLoop over Gather over PHJ, forking a new set of workers for every\ntuple, so I didn't implement that in the original PHJ.) But... there\nis something a little odd about the code in\nExecHashTableResetMatchFlags(), or the fact that we appear to be\ncalling it: it's using the unshared union member unconditionally,\nwhich wouldn't actually work for PHJ (there should be a variant of\nthat function with Parallel in its name if we ever want that to work).\nThat's not a bug AFAICT, as in fact we don't actually call it--it\nshould be unreachable because the hash table should be gone when we\nrescan--but it's confusing. I'm wondering if we should put in\nsomething explicit about that, maybe a comment and an assertion in\nExecReScanHashJoin().\n\n+-- Ensure that hash join tuple match bits have been cleared before putting them\n+-- into the hashtable.\n\nCould you mention that the match flags steals a bit from the HOT flag,\nie *why* we're testing a join after an update? And if we're going to\nexercise/test that case, should we do the non-parallel version too?\n\nFor the commit message, I think it's a good idea to use something like\n\"Fix ...\" for the headline of bug fix commits to make that clearer,\nand to add something like \"oversight in commit XYZ\" in the body, just\nto help people connect the dots. (Yeah, I know I failed to reference\nthe delinquent commit in the recent assertion-removal commit, my bad.)\n I think \"Discussion:\" footers are supposed to use\nhttps://postgr.es/m/XXX shortened URLs.\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:49:21 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 6:50 PM Thomas Munro <[email protected]> wrote:\n>\n> On Thu, Apr 13, 2023 at 9:48 AM Melanie Plageman\n> <[email protected]> wrote:\n> > Attached patch includes the fix for ExecParallelHashTableInsert() as\n> > well as a test. I toyed with adapting one of the existing parallel full\n> > hash join tests to cover this case, however, I think Richard's repro is\n> > much more clear. Maybe it is worth throwing in a few updates to the\n> > tables in the existing queries to provide coverage for the other\n> > HeapTupleHeaderClearMatch() calls in the code, though.\n>\n> Oof. Analysis and code LGTM.\n>\n> I thought about the way non-parallel HJ also clears the match bits\n> when re-using the hash table for rescans. PHJ doesn't keep hash\n> tables across rescans. (There's no fundamental reason why it\n> couldn't, but there was some complication and it seemed absurd to have\n> NestLoop over Gather over PHJ, forking a new set of workers for every\n> tuple, so I didn't implement that in the original PHJ.) But... there\n> is something a little odd about the code in\n> ExecHashTableResetMatchFlags(), or the fact that we appear to be\n> calling it: it's using the unshared union member unconditionally,\n> which wouldn't actually work for PHJ (there should be a variant of\n> that function with Parallel in its name if we ever want that to work).\n> That's not a bug AFAICT, as in fact we don't actually call it--it\n> should be unreachable because the hash table should be gone when we\n> rescan--but it's confusing. I'm wondering if we should put in\n> something explicit about that, maybe a comment and an assertion in\n> ExecReScanHashJoin().\n\nAn assert about it not being a parallel hash join? I support this.\n\n> +-- Ensure that hash join tuple match bits have been cleared before putting them\n> +-- into the hashtable.\n>\n> Could you mention that the match flags steals a bit from the HOT flag,\n> ie *why* we're testing a join after an update?\n\nv2 attached has some wordsmithing along these lines.\n\n> And if we're going to\n> exercise/test that case, should we do the non-parallel version too?\n\nI've added this. I thought if we were adding the serial case, we might\nas well add the multi-batch case as well. However, that proved a bit\nmore challenging. We can get a HOT tuple in one of the existing tables\nwith no issues. Doing this and then deleting the reset match bit code\ndoesn't cause any of the tests to fail, however, because we use this\nexpression as the join condition when we want to emit NULL-extended\nunmatched tuples.\n\nselect count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\n\nI don't think we want to add yet another time-consuming test to this\ntest file. So, I was trying to decide if it was worth changing these\nexisting tests so that they would fail when the match bit wasn't reset.\nI'm not sure.\n\n> For the commit message, I think it's a good idea to use something like\n> \"Fix ...\" for the headline of bug fix commits to make that clearer,\n> and to add something like \"oversight in commit XYZ\" in the body, just\n> to help people connect the dots. (Yeah, I know I failed to reference\n> the delinquent commit in the recent assertion-removal commit, my bad.)\n\nI've made these edits and tried to improve the commit message clarity in\ngeneral.\n\n> I think \"Discussion:\" footers are supposed to use\n> https://postgr.es/m/XXX shortened URLs.\n\nHmm. Is the problem with mine that I included \"flat\"? 
Because I did use\npostgr.es/m format. The message id is unfortunately long, but I believe\nthat is on google and not me.\n\n- Melanie",
"msg_date": "Wed, 12 Apr 2023 20:31:26 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 12:31 PM Melanie Plageman\n<[email protected]> wrote:\n> On Wed, Apr 12, 2023 at 6:50 PM Thomas Munro <[email protected]> wrote:\n> > I think \"Discussion:\" footers are supposed to use\n> > https://postgr.es/m/XXX shortened URLs.\n>\n> Hmm. Is the problem with mine that I included \"flat\"? Because I did use\n> postgr.es/m format. The message id is unfortunately long, but I believe\n> that is on google and not me.\n\nFor some reason I thought we weren't supposed to use the flat thing,\nbut it looks like I'm just wrong and people do that all the time so I\ntake that back.\n\nPushed. Thanks Richard and Melanie.\n\n\n",
"msg_date": "Fri, 14 Apr 2023 11:05:41 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 7:06 PM Thomas Munro <[email protected]> wrote:\n> For some reason I thought we weren't supposed to use the flat thing,\n> but it looks like I'm just wrong and people do that all the time so I\n> take that back.\n>\n> Pushed. Thanks Richard and Melanie.\n\nI tend to use http://postgr.es/m/ or https://postgr.es/m/ just to keep\nthe URL a bit shorter, and also because I like to point anyone reading\nthe commit log to the particular message that I think is most relevant\nrather than to the thread as a whole. But I don't think there's any\nhard-and-fast rule that committers have to do it one way rather than\nanother.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Apr 2023 08:37:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 08:31:26PM -0400, Melanie Plageman wrote:\n> On Wed, Apr 12, 2023 at 6:50 PM Thomas Munro <[email protected]> wrote:\n> > And if we're going to\n> > exercise/test that case, should we do the non-parallel version too?\n> \n> I've added this. I thought if we were adding the serial case, we might\n> as well add the multi-batch case as well. However, that proved a bit\n> more challenging. We can get a HOT tuple in one of the existing tables\n> with no issues. Doing this and then deleting the reset match bit code\n> doesn't cause any of the tests to fail, however, because we use this\n> expression as the join condition when we want to emit NULL-extended\n> unmatched tuples.\n> \n> select count(*) from simple r full outer join simple s on (r.id = 0 - s.id);\n> \n> I don't think we want to add yet another time-consuming test to this\n> test file. So, I was trying to decide if it was worth changing these\n> existing tests so that they would fail when the match bit wasn't reset.\n> I'm not sure.\n\nI couldn't stop thinking about how my explanation for why this test\ndidn't fail sounded wrong.\n\nAfter some further investigation, I found that the real reason that the\nHOT bit is already cleared in the tuples inserted into the hashtable for\nthis query is that the tuple descriptor for the relation \"simple\" and\nthe target list for the scan node are not identical (because we only\nneed to retain a single column from simple in order to eventually do\ncount(*)), so we make a new virtual tuple and build projection info for\nthe scan node. The virtual tuple doesn't have the HOT bit set anymore\n(the buffer heap tuple would have). So we couldn't fail a test of the\ncode clearing the match bit.\n\nUltimately this is probably fine. If we wanted to modify one of the\nexisting tests to cover the multi-batch case, changing the select\ncount(*) to a select * would do the trick. I imagine we wouldn't want to\ndo this because of the excessive output this would produce. I wondered\nif there was a pattern in the tests for getting around this. But,\nperhaps we don't care enough to cover this code.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 19 Apr 2023 11:17:04 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 11:17:04AM -0400, Melanie Plageman wrote:\n> Ultimately this is probably fine. If we wanted to modify one of the\n> existing tests to cover the multi-batch case, changing the select\n> count(*) to a select * would do the trick. I imagine we wouldn't want to\n> do this because of the excessive output this would produce. I wondered\n> if there was a pattern in the tests for getting around this.\n\nYou could use explain (ANALYZE). But the output is machine-dependant in\nvarious ways (which is why the tests use \"explain analyze so rarely).\n\nSo you'd have to filter its output with a function (like the functions\nthat exist in a few places for similar purpose).\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 19 Apr 2023 12:16:24 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-19 12:16:24 -0500, Justin Pryzby wrote:\n> On Wed, Apr 19, 2023 at 11:17:04AM -0400, Melanie Plageman wrote:\n> > Ultimately this is probably fine. If we wanted to modify one of the\n> > existing tests to cover the multi-batch case, changing the select\n> > count(*) to a select * would do the trick. I imagine we wouldn't want to\n> > do this because of the excessive output this would produce. I wondered\n> > if there was a pattern in the tests for getting around this.\n> \n> You could use explain (ANALYZE). But the output is machine-dependant in\n> various ways (which is why the tests use \"explain analyze so rarely).\n\nI think with sufficient options it's not machine specific. We have a bunch of\n EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) ..\nin our tests.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Apr 2023 12:20:51 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 12:20:51PM -0700, Andres Freund wrote:\n> On 2023-04-19 12:16:24 -0500, Justin Pryzby wrote:\n> > On Wed, Apr 19, 2023 at 11:17:04AM -0400, Melanie Plageman wrote:\n> > > Ultimately this is probably fine. If we wanted to modify one of the\n> > > existing tests to cover the multi-batch case, changing the select\n> > > count(*) to a select * would do the trick. I imagine we wouldn't want to\n> > > do this because of the excessive output this would produce. I wondered\n> > > if there was a pattern in the tests for getting around this.\n> > \n> > You could use explain (ANALYZE). But the output is machine-dependant in\n> > various ways (which is why the tests use \"explain analyze so rarely).\n> \n> I think with sufficient options it's not machine specific.\n\nIt *can* be machine specific depending on the node type..\n\nIn particular, for parallel workers, it shows \"Workers Launched: ..\",\nwhich can vary even across executions on the same machine. And don't\nforget about \"loops=\".\n\nPlus:\nsrc/backend/commands/explain.c: \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n\n> We have a bunch of\n> EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) ..\n> in our tests.\n\nThere's 81 uses of \"timing off\", out of a total of ~1600 explains. Most\nof them are in partition_prune.sql. explain analyze is barely used.\n\nI sent a patch to elide the machine-specific parts, which would make it\neasier to use. But there was no interest.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 19 Apr 2023 19:41:35 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 3:20 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-04-19 12:16:24 -0500, Justin Pryzby wrote:\n> > On Wed, Apr 19, 2023 at 11:17:04AM -0400, Melanie Plageman wrote:\n> > > Ultimately this is probably fine. If we wanted to modify one of the\n> > > existing tests to cover the multi-batch case, changing the select\n> > > count(*) to a select * would do the trick. I imagine we wouldn't want\n> to\n> > > do this because of the excessive output this would produce. I wondered\n> > > if there was a pattern in the tests for getting around this.\n> >\n> > You could use explain (ANALYZE). But the output is machine-dependant in\n> > various ways (which is why the tests use \"explain analyze so rarely).\n>\n> I think with sufficient options it's not machine specific. We have a bunch\n> of\n> EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) ..\n> in our tests.\n>\n\nCool. Yea, so ultimately these options are almost enough but memory\nusage changes from execution to execution. There are some tests which do\nregexp_replace() on the memory usage part of the EXPLAIN ANALYZE output\nto allow us to still compare the plans. However, I figured if I was\nalready going to go to the trouble of using regexp_replace(), I might as\nwell write a function that returns the \"Actual Rows\" field from the\nEXPLAIN ANALYZE output.\n\nThe attached patch does that. I admittedly mostly copy-pasted the\nplpgsql function from similar examples in other tests, and I suspect it\nmay be overkill and also poorly written.\n\nThe nice thing about this approach is that we can modify some of the\nexisting tests in join_hash.sql to use this function and cover the code\nto reset the matchbit for serial hashjoin, single batch parallel\nhashjoin, and all batches of parallel multi-batch hashjoin without any\nadditional queries. (I'll leave testing match bit resetting with the\nskew hashtable and match bit resetting in case of a rescan for another\nday.)\n\nI was able to delete the tests added in 558c9d75fe, as they became\nredundant.\n\nI wonder if any other tests are in need of an EXPLAIN (ANALYZE,\nMEMORY_USAGE OFF) option? Perhaps it is quite unusual to only require a\ndeterministic field like 'Actual Rows'. If we had that option we could\nalso remove the extra EXPLAIN invocations before the actual query\nexecutions.\n\n- Melanie",
"msg_date": "Wed, 19 Apr 2023 20:43:15 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 8:41 PM Justin Pryzby <[email protected]> wrote:\n>\n> On Wed, Apr 19, 2023 at 12:20:51PM -0700, Andres Freund wrote:\n> > On 2023-04-19 12:16:24 -0500, Justin Pryzby wrote:\n> > > On Wed, Apr 19, 2023 at 11:17:04AM -0400, Melanie Plageman wrote:\n> > > > Ultimately this is probably fine. If we wanted to modify one of the\n> > > > existing tests to cover the multi-batch case, changing the select\n> > > > count(*) to a select * would do the trick. I imagine we wouldn't want to\n> > > > do this because of the excessive output this would produce. I wondered\n> > > > if there was a pattern in the tests for getting around this.\n> > >\n> > > You could use explain (ANALYZE). But the output is machine-dependant in\n> > > various ways (which is why the tests use \"explain analyze so rarely).\n> >\n> > I think with sufficient options it's not machine specific.\n>\n> It *can* be machine specific depending on the node type..\n>\n> In particular, for parallel workers, it shows \"Workers Launched: ..\",\n> which can vary even across executions on the same machine. And don't\n> forget about \"loops=\".\n>\n> Plus:\n> src/backend/commands/explain.c: \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n>\n> > We have a bunch of\n> > EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) ..\n> > in our tests.\n>\n> There's 81 uses of \"timing off\", out of a total of ~1600 explains. Most\n> of them are in partition_prune.sql. explain analyze is barely used.\n>\n> I sent a patch to elide the machine-specific parts, which would make it\n> easier to use. But there was no interest.\n\nWhile I don't know about other use cases, I would have used that here.\nDo you still have that patch laying around? I'd be interested to at\nleast review it.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 19 Apr 2023 20:47:07 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 8:43 PM Melanie Plageman\n<[email protected]> wrote:\n> On Wed, Apr 19, 2023 at 3:20 PM Andres Freund <[email protected]> wrote:\n>> On 2023-04-19 12:16:24 -0500, Justin Pryzby wrote:\n>> > On Wed, Apr 19, 2023 at 11:17:04AM -0400, Melanie Plageman wrote:\n>> > > Ultimately this is probably fine. If we wanted to modify one of the\n>> > > existing tests to cover the multi-batch case, changing the select\n>> > > count(*) to a select * would do the trick. I imagine we wouldn't want to\n>> > > do this because of the excessive output this would produce. I wondered\n>> > > if there was a pattern in the tests for getting around this.\n>> >\n>> > You could use explain (ANALYZE). But the output is machine-dependant in\n>> > various ways (which is why the tests use \"explain analyze so rarely).\n>>\n>> I think with sufficient options it's not machine specific. We have a bunch of\n>> EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) ..\n>> in our tests.\n>\n>\n> Cool. Yea, so ultimately these options are almost enough but memory\n> usage changes from execution to execution. There are some tests which do\n> regexp_replace() on the memory usage part of the EXPLAIN ANALYZE output\n> to allow us to still compare the plans. However, I figured if I was\n> already going to go to the trouble of using regexp_replace(), I might as\n> well write a function that returns the \"Actual Rows\" field from the\n> EXPLAIN ANALYZE output.\n>\n> The attached patch does that. I admittedly mostly copy-pasted the\n> plpgsql function from similar examples in other tests, and I suspect it\n> may be overkill and also poorly written.\n\nI renamed the function to join_hash_actual_rows to avoid potentially\naffecting other tests. Nothing about the function is specific to a hash\njoin plan, so I think it is more clear to prefix the function with the\ntest file name. v2 attached.\n\n- Melanie",
"msg_date": "Thu, 20 Apr 2023 11:49:49 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 08:47:07PM -0400, Melanie Plageman wrote:\n> On Wed, Apr 19, 2023 at 8:41 PM Justin Pryzby <[email protected]> wrote:\n> >\n> > On Wed, Apr 19, 2023 at 12:20:51PM -0700, Andres Freund wrote:\n> > > On 2023-04-19 12:16:24 -0500, Justin Pryzby wrote:\n> > > > On Wed, Apr 19, 2023 at 11:17:04AM -0400, Melanie Plageman wrote:\n> > > > > Ultimately this is probably fine. If we wanted to modify one of the\n> > > > > existing tests to cover the multi-batch case, changing the select\n> > > > > count(*) to a select * would do the trick. I imagine we wouldn't want to\n> > > > > do this because of the excessive output this would produce. I wondered\n> > > > > if there was a pattern in the tests for getting around this.\n> > > >\n> > > > You could use explain (ANALYZE). But the output is machine-dependant in\n> > > > various ways (which is why the tests use \"explain analyze so rarely).\n> > >\n> > > I think with sufficient options it's not machine specific.\n> >\n> > It *can* be machine specific depending on the node type..\n> >\n> > In particular, for parallel workers, it shows \"Workers Launched: ..\",\n> > which can vary even across executions on the same machine. And don't\n> > forget about \"loops=\".\n> >\n> > Plus:\n> > src/backend/commands/explain.c: \"Buckets: %d Batches: %d Memory Usage: %ldkB\\n\",\n> >\n> > > We have a bunch of\n> > > EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF) ..\n> > > in our tests.\n> >\n> > There's 81 uses of \"timing off\", out of a total of ~1600 explains. Most\n> > of them are in partition_prune.sql. explain analyze is barely used.\n> >\n> > I sent a patch to elide the machine-specific parts, which would make it\n> > easier to use. But there was no interest.\n> \n> While I don't know about other use cases, I would have used that here.\n> Do you still have that patch laying around? I'd be interested to at\n> least review it.\n\nhttps://commitfest.postgresql.org/41/3409/\n\n\n",
"msg_date": "Thu, 20 Apr 2023 10:50:45 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "I noticed that BF animal conchuela has several times fallen over on the\ntest case added by 558c9d75f:\n\ndiff -U3 /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/join_hash.out /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/join_hash.out\n--- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/join_hash.out\t2023-04-19 10:20:26.159840000 +0200\n+++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/join_hash.out\t2023-04-19 10:21:47.971900000 +0200\n@@ -974,8 +974,8 @@\n SELECT * FROM hjtest_matchbits_t1 t1 FULL JOIN hjtest_matchbits_t2 t2 ON t1.id = t2.id;\n id | id \n ----+----\n- 1 | \n | 2\n+ 1 | \n (2 rows)\n \n -- Test serial full hash join.\n\nConsidering that this is a parallel plan, I don't think there's any\nmystery about why an ORDER-BY-less query might have unstable output\norder; the only mystery is why more of the buildfarm hasn't failed.\nCan we just add \"ORDER BY t1.id\" to this query? It looks like you\nget the same PHJ plan, although now underneath Sort/Gather Merge.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2023-04-19%2008%3A20%3A56\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2023-05-03%2006%3A21%3A03\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2023-05-19%2022%3A21%3A04\n\n\n",
"msg_date": "Fri, 19 May 2023 20:04:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Fri, May 19, 2023 at 8:05 PM Tom Lane <[email protected]> wrote:\n>\n> I noticed that BF animal conchuela has several times fallen over on the\n> test case added by 558c9d75f:\n>\n> diff -U3 /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/join_hash.out /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/join_hash.out\n> --- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/join_hash.out 2023-04-19 10:20:26.159840000 +0200\n> +++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/join_hash.out 2023-04-19 10:21:47.971900000 +0200\n> @@ -974,8 +974,8 @@\n> SELECT * FROM hjtest_matchbits_t1 t1 FULL JOIN hjtest_matchbits_t2 t2 ON t1.id = t2.id;\n> id | id\n> ----+----\n> - 1 |\n> | 2\n> + 1 |\n> (2 rows)\n>\n> -- Test serial full hash join.\n>\n> Considering that this is a parallel plan, I don't think there's any\n> mystery about why an ORDER-BY-less query might have unstable output\n> order; the only mystery is why more of the buildfarm hasn't failed.\n> Can we just add \"ORDER BY t1.id\" to this query? It looks like you\n> get the same PHJ plan, although now underneath Sort/Gather Merge.\n\nYes, this was an oversight on my part. Attached is the patch that does\njust what you suggested.\n\nI can't help but take this opportunity to bump my un-reviewed patch\nfurther upthread which adds additional test coverage for match bit\nclearing for multi-batch hash joins [1]. It happens to also remove the\ntest that failed on the buildfarm, which is why I thought to bring it\nup.\n\n-- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_bdwDN_aHVctHcc9VoDP9av7LUMeuLbch1fHD2ESouw1g%40mail.gmail.com",
"msg_date": "Wed, 7 Jun 2023 17:16:12 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Wed, Jun 07, 2023 at 05:16:12PM -0400, Melanie Plageman wrote:\n> On Fri, May 19, 2023 at 8:05 PM Tom Lane <[email protected]> wrote:\n>> Considering that this is a parallel plan, I don't think there's any\n>> mystery about why an ORDER-BY-less query might have unstable output\n>> order; the only mystery is why more of the buildfarm hasn't failed.\n>> Can we just add \"ORDER BY t1.id\" to this query? It looks like you\n>> get the same PHJ plan, although now underneath Sort/Gather Merge.\n> \n> Yes, this was an oversight on my part. Attached is the patch that does\n> just what you suggested.\n\nConfirmed that adding an ORDER BY adds a Sort node between a Gather\nMerge and a Parallel Hash Full Join, not removing coverage.\n\nThis has fallen through the cracks and conchuela has failed again\ntoday, so I went ahead and applied the fix on HEAD. Thanks!\n--\nMichael",
"msg_date": "Mon, 12 Jun 2023 12:24:24 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> This has fallen through the cracks and conchuela has failed again\n> today, so I went ahead and applied the fix on HEAD. Thanks!\n\nThanks! I'd intended to push that but it didn't get to the\ntop of the to-do queue yet. (I'm still kind of wondering why\nonly conchuela has failed to date.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Jun 2023 23:30:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
},
{
"msg_contents": "On Sun, Jun 11, 2023 at 11:24 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jun 07, 2023 at 05:16:12PM -0400, Melanie Plageman wrote:\n> > On Fri, May 19, 2023 at 8:05 PM Tom Lane <[email protected]> wrote:\n> >> Considering that this is a parallel plan, I don't think there's any\n> >> mystery about why an ORDER-BY-less query might have unstable output\n> >> order; the only mystery is why more of the buildfarm hasn't failed.\n> >> Can we just add \"ORDER BY t1.id\" to this query? It looks like you\n> >> get the same PHJ plan, although now underneath Sort/Gather Merge.\n> >\n> > Yes, this was an oversight on my part. Attached is the patch that does\n> > just what you suggested.\n>\n> Confirmed that adding an ORDER BY adds a Sort node between a Gather\n> Merge and a Parallel Hash Full Join, not removing coverage.\n>\n> This has fallen through the cracks and conchuela has failed again\n> today, so I went ahead and applied the fix on HEAD. Thanks!\n\nThanks!\n\n\n",
"msg_date": "Mon, 12 Jun 2023 10:09:20 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong results from Parallel Hash Full Join"
}
] |
[
{
"msg_contents": "Hi,\n\nIMO I think that commit 31966b1\n<https://github.com/postgres/postgres/commit/31966b151e6ab7a6284deab6e8fe5faddaf2ae4c>\nhas an oversight.\n\nAll the logic of the changes are based on the \"extend_by\" variable, which\nis a uint32, but in some places it is using \"int\", which can lead to an\noverflow at some point.\n\nI also take the opportunity to correct another oversight, regarding the\ncommit dad50f6\n<https://github.com/postgres/postgres/commit/dad50f677c42de207168a3f08982ba23c9fc6720>\n,\nfor possible duplicate assignment.\nGetLocalBufferDescriptor was called twice.\n\nTaking advantage of this, I promoted a scope reduction for some variables,\nwhich I thought was opportune.\n\nPatch attached.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 12 Apr 2023 09:36:14 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bufmgr possible overflow"
},
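The overflow concern is about mixing a plain int with the uint32 extend_by counter. The short self-contained program below only demonstrates the general narrowing and signed/unsigned comparison behaviour being worried about here; the variable names are made up and none of this is taken from bufmgr.c or the patch.

/*
 * Demonstration of what can go wrong when a uint32 count is mixed with int.
 * Purely illustrative; not PostgreSQL code.
 */
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t	extend_by = (uint32_t) INT_MAX + 1;	/* 2^31: valid as uint32 */
	int			narrowed = (int) extend_by;			/* implementation-defined result */

	printf("as uint32: %u\n", (unsigned) extend_by);
	printf("narrowed to int: %d\n", narrowed);		/* typically -2147483648 */

	/*
	 * In a condition such as "i < extend_by" with int i, the usual arithmetic
	 * conversions promote i to unsigned, so a negative i would compare as a
	 * huge positive value.  Keeping the counter and the loop variables all
	 * uint32 sidesteps that reasoning, which appears to be what the patch
	 * aims for.
	 */
	return 0;
}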
{
"msg_contents": "Perhaps it's a good idea to seprate the patch for each issue.\n\nAt Wed, 12 Apr 2023 09:36:14 -0300, Ranier Vilela <[email protected]> wrote in> IMO I think that commit 31966b1\n> <https://github.com/postgres/postgres/commit/31966b151e6ab7a6284deab6e8fe5faddaf2ae4c>\n> has an oversight.\n> \n> All the logic of the changes are based on the \"extend_by\" variable, which\n> is a uint32, but in some places it is using \"int\", which can lead to an\n> overflow at some point.\n\nint is nowadays is at least 32 bits, so using int in a loop that\niterates up to a uint32 value won't cause an overflow. However, the\nfix iteself looks good because it unifies the loop variable types in\nsimilar loops.\n\nOn the other hand, I'm not a fan of changing the signature of\nsmgr_zeroextend to use uint32. I don't think it improves things and\nthe other reason is that I don't like using unnatural integer types\nunnecessarily in API parameter types. ASnyway, the patch causes a type\ninconsistency between smgr_zserextend and mdzeroextend.\n\n> I also take the opportunity to correct another oversight, regarding the\n> commit dad50f6\n> <https://github.com/postgres/postgres/commit/dad50f677c42de207168a3f08982ba23c9fc6720>\n> ,\n> for possible duplicate assignment.\n> GetLocalBufferDescriptor was called twice.\n> \n> Taking advantage of this, I promoted a scope reduction for some variables,\n> which I thought was opportune.\n\nI like the scope reductions.\n\nRegarding the duplicate assignment to existing_hdr, I prefer assigning\nit in the definition line, but I don't have a strong opinion on this\nmatter.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:29:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bufmgr possible overflow"
},
{
"msg_contents": "Em qua., 12 de abr. de 2023 às 22:29, Kyotaro Horiguchi <\[email protected]> escreveu:\n\n> Perhaps it's a good idea to seprate the patch for each issue.\n>\n> Thanks Kyotaro for taking a look.\n\n\n> At Wed, 12 Apr 2023 09:36:14 -0300, Ranier Vilela <[email protected]>\n> wrote in> IMO I think that commit 31966b1\n> > <\n> https://github.com/postgres/postgres/commit/31966b151e6ab7a6284deab6e8fe5faddaf2ae4c\n> >\n> > has an oversight.\n> >\n> > All the logic of the changes are based on the \"extend_by\" variable, which\n> > is a uint32, but in some places it is using \"int\", which can lead to an\n> > overflow at some point.\n>\n> int is nowadays is at least 32 bits, so using int in a loop that\n> iterates up to a uint32 value won't cause an overflow.\n\nIt's never good to mix data types.\nint is signed integer type and can carry only half of the positive numbers\nthat \"unsigned int\" can.\n\nfrom c.h:\n#ifndef HAVE_UINT8\ntypedef unsigned char uint8; /* == 8 bits */\ntypedef unsigned short uint16; /* == 16 bits */\ntypedef unsigned int uint32; /* == 32 bits */\n#endif /* not HAVE_UINT8 */\n\nHowever, the\n> fix iteself looks good because it unifies the loop variable types in\n> similar loops.\n>\nYeah.\n\n\n>\n> On the other hand, I'm not a fan of changing the signature of\n> smgr_zeroextend to use uint32. I don't think it improves things and\n> the other reason is that I don't like using unnatural integer types\n> unnecessarily in API parameter types.\n\nBut ExtendBufferedRelBy calls smgr_zeroextend and carries a uint32 value to\nint param.\nsmgr_zeroextend signature must be changed to work with any values from\nuint32.\n\nASnyway, the patch causes a type\n> inconsistency between smgr_zserextend and mdzeroextend.\n>\nYeah, have more inconsistency.\nextern void smgrwriteback(SMgrRelation reln, ForkNumber forknum,\n BlockNumber blocknum, BlockNumber nblocks);\n\nBlockNumber is what integer data type?\n\n\n> > I also take the opportunity to correct another oversight, regarding the\n> > commit dad50f6\n> > <\n> https://github.com/postgres/postgres/commit/dad50f677c42de207168a3f08982ba23c9fc6720\n> >\n> > ,\n> > for possible duplicate assignment.\n> > GetLocalBufferDescriptor was called twice.\n> >\n> > Taking advantage of this, I promoted a scope reduction for some\n> variables,\n> > which I thought was opportune.\n>\n> I like the scope reductions.\n>\nYeah.\n\n\n>\n> Regarding the duplicate assignment to existing_hdr, I prefer assigning\n> it in the definition line, but I don't have a strong opinion on this\n> matter.\n>\nCloser to where the variable is used is preferable if the assignment is not\ncheap.\n\nregards,\nRanier Vilela\n\nEm qua., 12 de abr. de 2023 às 22:29, Kyotaro Horiguchi <[email protected]> escreveu:Perhaps it's a good idea to seprate the patch for each issue.\nThanks Kyotaro for taking a look. \nAt Wed, 12 Apr 2023 09:36:14 -0300, Ranier Vilela <[email protected]> wrote in> IMO I think that commit 31966b1\n> <https://github.com/postgres/postgres/commit/31966b151e6ab7a6284deab6e8fe5faddaf2ae4c>\n> has an oversight.\n> \n> All the logic of the changes are based on the \"extend_by\" variable, which\n> is a uint32, but in some places it is using \"int\", which can lead to an\n> overflow at some point.\n\nint is nowadays is at least 32 bits, so using int in a loop that\niterates up to a uint32 value won't cause an overflow. 
It's never good to mix data types.int is signed integer type and can carry only half of the positive numbers that \"unsigned int\" can.from c.h:#ifndef HAVE_UINT8typedef unsigned char uint8;\t/* == 8 bits */typedef unsigned short uint16;\t/* == 16 bits */typedef unsigned int uint32;\t/* == 32 bits */#endif\t\t\t\t\t\t\t/* not HAVE_UINT8 */However, the\nfix iteself looks good because it unifies the loop variable types in\nsimilar loops.Yeah. \n\nOn the other hand, I'm not a fan of changing the signature of\nsmgr_zeroextend to use uint32. I don't think it improves things and\nthe other reason is that I don't like using unnatural integer types\nunnecessarily in API parameter types. But ExtendBufferedRelBy calls smgr_zeroextend and carries a uint32 value to int param.\nsmgr_zeroextend signature must be changed to work with any values from uint32.ASnyway, the patch causes a type\ninconsistency between smgr_zserextend and mdzeroextend.Yeah, have more inconsistency.extern void smgrwriteback(SMgrRelation reln, ForkNumber forknum,\t\t\t\t\t\t BlockNumber blocknum, BlockNumber nblocks);BlockNumber is what integer data type?\n\n> I also take the opportunity to correct another oversight, regarding the\n> commit dad50f6\n> <https://github.com/postgres/postgres/commit/dad50f677c42de207168a3f08982ba23c9fc6720>\n> ,\n> for possible duplicate assignment.\n> GetLocalBufferDescriptor was called twice.\n> \n> Taking advantage of this, I promoted a scope reduction for some variables,\n> which I thought was opportune.\n\nI like the scope reductions.Yeah. \n\nRegarding the duplicate assignment to existing_hdr, I prefer assigning\nit in the definition line, but I don't have a strong opinion on this\nmatter.Closer to where the variable is used is preferable if the assignment is not cheap. regards,Ranier Vilela",
"msg_date": "Thu, 13 Apr 2023 08:42:46 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bufmgr possible overflow"
}
] |
[
{
"msg_contents": "The SQL Language part of the docs has a brief page on unique indexes.\nIt doesn't mention the new NULLS NOT DISTINCT functionality on unique\nindexes and this is a good place to mention it, as it already warns\nthe user about the old/default behavior.\n\n-- \nDavid Gilman\n:DG<",
"msg_date": "Wed, 12 Apr 2023 10:40:28 -0400",
"msg_from": "David Gilman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Note new NULLS NOT DISTINCT on unique index tutorial page"
},
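A short sketch of the behaviour the requested note would document (table and index names are made up for illustration):

```sql
-- Default behaviour: NULLs are considered distinct, so a unique index
-- accepts any number of NULLs in the indexed column.
CREATE TABLE t (x int);
CREATE UNIQUE INDEX t_x_nulls_distinct ON t (x);
INSERT INTO t VALUES (NULL), (NULL);   -- both rows are accepted

-- With the NULLS NOT DISTINCT clause, NULLs are treated as equal,
-- so only one NULL fits under the index.
TRUNCATE t;
DROP INDEX t_x_nulls_distinct;
CREATE UNIQUE INDEX t_x_one_null ON t (x) NULLS NOT DISTINCT;
INSERT INTO t VALUES (NULL);           -- ok
INSERT INTO t VALUES (NULL);           -- ERROR: duplicate key value
```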
{
"msg_contents": "On Wed, Apr 12, 2023 at 10:40 AM David Gilman <[email protected]>\nwrote:\n\n> The SQL Language part of the docs has a brief page on unique indexes.\n> It doesn't mention the new NULLS NOT DISTINCT functionality on unique\n> indexes and this is a good place to mention it, as it already warns\n> the user about the old/default behavior.\n>\n>\nI'm ok with the wording as-is, but perhaps we can phrase it as \"distinct\"\nvs \"not equal\", thus leaning into the syntax a bit:\n\nBy default, null values in a unique column are considered distinct,\nallowing multiple nulls in the column.\n\n\nor maybe\n\nBy default, null values in a unique column are considered\n<literal>DISTINCT</literal>, allowing multiple nulls in the column.\n\nOn Wed, Apr 12, 2023 at 10:40 AM David Gilman <[email protected]> wrote:The SQL Language part of the docs has a brief page on unique indexes.\nIt doesn't mention the new NULLS NOT DISTINCT functionality on unique\nindexes and this is a good place to mention it, as it already warns\nthe user about the old/default behavior.I'm ok with the wording as-is, but perhaps we can phrase it as \"distinct\" vs \"not equal\", thus leaning into the syntax a bit:By default, null values in a unique column are considered distinct, allowing multiple nulls in the column.or maybeBy default, null values in a unique column are considered <literal>DISTINCT</literal>, allowing multiple nulls in the column.",
"msg_date": "Mon, 17 Apr 2023 13:01:37 -0400",
"msg_from": "Corey Huinker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Note new NULLS NOT DISTINCT on unique index tutorial page"
},
{
"msg_contents": "On Thu, 13 Apr 2023 at 02:40, David Gilman <[email protected]> wrote:\n> The SQL Language part of the docs has a brief page on unique indexes.\n> It doesn't mention the new NULLS NOT DISTINCT functionality on unique\n> indexes and this is a good place to mention it, as it already warns\n> the user about the old/default behavior.\n\nI think we should do this and apply it to v15 too.\n\nIt seems like a good idea to include the [NULLS [NOT] DISTINCT] in the\nsyntax synopsis too. Otherwise, the reader of that page is just left\nguessing where they'll put NULLS NOT DISTINCT to get the behaviour\nyou've added the text for.\n\nI've attached an updated patch with that plus 2 very small wording\ntweaks to your proposed text.\n\nDavid",
"msg_date": "Tue, 18 Apr 2023 15:15:19 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Note new NULLS NOT DISTINCT on unique index tutorial page"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 05:01, Corey Huinker <[email protected]> wrote:\n> I'm ok with the wording as-is, but perhaps we can phrase it as \"distinct\" vs \"not equal\", thus leaning into the syntax a bit:\n>\n> By default, null values in a unique column are considered distinct, allowing multiple nulls in the column.\n>\n>\n> or maybe\n>\n> By default, null values in a unique column are considered <literal>DISTINCT</literal>, allowing multiple nulls in the column.>\n\nI acknowledge your input, but I didn't think either of these was an\nimprovement over what David suggested. I understand that many people\nwill know that \"SELECT DISTINCT\" and \"WHERE x IS NOT DISTINCT FROM y\"\nmeans treat NULLs equally, but I don't think we should expect the\nreader here to know that's what we're talking about. In any case,\nwe're talking about existing wording here, not something David is\nadding.\n\nDavid\n\n\n",
"msg_date": "Tue, 18 Apr 2023 15:22:36 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Note new NULLS NOT DISTINCT on unique index tutorial page"
},
{
"msg_contents": "The revised patch is good. Please go ahead and commit whatever\nphrasing you or the other committers find acceptable. I don't really\nhave any preferences in how this is exactly phrased, I just think it\nshould be mentioned in the docs.\n\nOn Mon, Apr 17, 2023 at 11:15 PM David Rowley <[email protected]> wrote:\n>\n> On Thu, 13 Apr 2023 at 02:40, David Gilman <[email protected]> wrote:\n> > The SQL Language part of the docs has a brief page on unique indexes.\n> > It doesn't mention the new NULLS NOT DISTINCT functionality on unique\n> > indexes and this is a good place to mention it, as it already warns\n> > the user about the old/default behavior.\n>\n> I think we should do this and apply it to v15 too.\n>\n> It seems like a good idea to include the [NULLS [NOT] DISTINCT] in the\n> syntax synopsis too. Otherwise, the reader of that page is just left\n> guessing where they'll put NULLS NOT DISTINCT to get the behaviour\n> you've added the text for.\n>\n> I've attached an updated patch with that plus 2 very small wording\n> tweaks to your proposed text.\n>\n> David\n\n\n\n-- \nDavid Gilman\n:DG<\n\n\n",
"msg_date": "Wed, 19 Apr 2023 20:03:46 -0400",
"msg_from": "David Gilman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Note new NULLS NOT DISTINCT on unique index tutorial page"
},
{
"msg_contents": "On Thu, 20 Apr 2023 at 12:04, David Gilman <[email protected]> wrote:\n> The revised patch is good. Please go ahead and commit whatever\n> phrasing you or the other committers find acceptable. I don't really\n> have any preferences in how this is exactly phrased, I just think it\n> should be mentioned in the docs.\n\nThanks. With that, I admit to further adjusting the wording before I\npushed the result.\n\nDavid\n\n\n",
"msg_date": "Thu, 20 Apr 2023 23:56:35 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Note new NULLS NOT DISTINCT on unique index tutorial page"
}
] |
[
{
"msg_contents": "hi hackers,\n\nIn the logical decoding on standby thread [1], Andres proposed 2 new tests (that I did\nnot find the time to complete before the finish line):\n\n- Test that we can subscribe to the standby (with the publication created on the primary)\n- Verify that invalidated logical slots do not lead to retaining WAL\n\nPlease find those 2 missing tests in the patch proposal attached.\n\nA few words about them:\n\n1) Regarding the subscription test:\n\nIt modifies wait_for_catchup() to take into account the case where the requesting\nnode is in recovery mode. Indeed, without that change, wait_for_subscription_sync() was\nfailing with:\n\n\"\nerror running SQL: 'psql:<stdin>:1: ERROR: recovery is in progress\nHINT: WAL control functions cannot be executed during recovery.'\nwhile running 'psql -XAtq -d port=61441 host=/tmp/45dt3wqs2p dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'SELECT pg_current_wal_lsn()'\n\"\n\n2) Regarding the WAL file not retained test:\n\nAs it's not possible to execute pg_switch_wal() and friends on a standby, this is\ndone on the primary. Also checking that the WAL file (linked to a restart_lsn of an invalidate\nslot) has been removed is done directly at the os/directory level.\n\nThe attached patch also removes:\n\n\"\n-log_min_messages = 'debug2'\n-log_error_verbosity = verbose\n\"\n\nas also discussed in [1].\n\nI'm not sure if adding those 2 tests should be considered as an open item. I can add this open item\nif we think that makes sense. I'd be happy to do so but it looks like I don't have the privileges\nto edit https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n[1]: https://www.postgresql.org/message-id/6d801661-e21b-7326-be1b-f90d904da66a%40gmail.com",
"msg_date": "Wed, 12 Apr 2023 18:15:10 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add two missing tests in 035_standby_logical_decoding.pl"
},
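As a minimal SQL-level sketch of the distinction the wait_for_catchup() change has to handle:

```sql
-- On a primary, the current write position can be queried directly:
SELECT pg_current_wal_lsn();

-- On a standby that call fails with "WAL control functions cannot be
-- executed during recovery", so the replayed position is used instead:
SELECT pg_is_in_recovery();        -- returns true on a standby
SELECT pg_last_wal_replay_lsn();   -- a usable catch-up target there
```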
{
"msg_contents": "On 2023-Apr-12, Drouvot, Bertrand wrote:\n\n> I'm not sure if adding those 2 tests should be considered as an open\n> item. I can add this open item if we think that makes sense. I'd be\n> happy to do so but it looks like I don't have the privileges to edit\n> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n\nI think adding extra tests for new code can definitely be considered an\nopen item, since those tests might help to discover issues in said new\ncode.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 17 Apr 2023 11:55:28 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/17/23 11:55 AM, Alvaro Herrera wrote:\n> On 2023-Apr-12, Drouvot, Bertrand wrote:\n> \n>> I'm not sure if adding those 2 tests should be considered as an open\n>> item. I can add this open item if we think that makes sense. I'd be\n>> happy to do so but it looks like I don't have the privileges to edit\n>> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n> \n> I think adding extra tests for new code can definitely be considered an\n> open item, since those tests might help to discover issues in said new\n> code.\n> \n\nThanks for the feedback! Added as an open item.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 17 Apr 2023 12:21:20 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Wed, 12 Apr 2023 at 21:45, Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> hi hackers,\n>\n> In the logical decoding on standby thread [1], Andres proposed 2 new tests (that I did\n> not find the time to complete before the finish line):\n>\n> - Test that we can subscribe to the standby (with the publication created on the primary)\n> - Verify that invalidated logical slots do not lead to retaining WAL\n>\n> Please find those 2 missing tests in the patch proposal attached.\n\nFew comments:\n1) Should this change be committed as a separate patch instead of\nmixing it with the new test addition patch? I feel it would be better\nto split it into 0001 and 0002 patches.\n # Name for the physical slot on primary\n@@ -235,8 +241,6 @@ $node_primary->append_conf('postgresql.conf', q{\n wal_level = 'logical'\n max_replication_slots = 4\n max_wal_senders = 4\n-log_min_messages = 'debug2'\n-log_error_verbosity = verbose\n });\n $node_primary->dump_info;\n $node_primary->start;\n\n2) We could add a commitfest entry for this, which will help in\nchecking cfbot results across platforms.\n\n3) Should the comment say subscription instead of subscriber here?\n+# We do not need the subscriber anymore\n+$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION tap_sub\");\n+$node_subscriber->stop;\n\n4) we could add a commit message for the patch\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 24 Apr 2023 09:34:54 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/24/23 6:04 AM, vignesh C wrote:\n> On Wed, 12 Apr 2023 at 21:45, Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> hi hackers,\n>>\n>> In the logical decoding on standby thread [1], Andres proposed 2 new tests (that I did\n>> not find the time to complete before the finish line):\n>>\n>> - Test that we can subscribe to the standby (with the publication created on the primary)\n>> - Verify that invalidated logical slots do not lead to retaining WAL\n>>\n>> Please find those 2 missing tests in the patch proposal attached.\n> \n> Few comments:\n\nThanks for looking at it!\n\n> 1) Should this change be committed as a separate patch instead of\n> mixing it with the new test addition patch? I feel it would be better\n> to split it into 0001 and 0002 patches.\n\nAgree, done in V2 attached.\n \n> 2) We could add a commitfest entry for this, which will help in\n> checking cfbot results across platforms.\n\nGood point, done in [1].\n\n> 3) Should the comment say subscription instead of subscriber here?\n> +# We do not need the subscriber anymore\n> +$node_subscriber->safe_psql('postgres', \"DROP SUBSCRIPTION tap_sub\");\n> +$node_subscriber->stop;\n\nComment was due to the node_subscriber being stopped. Changed to\n \"We do not need the subscription and the subscriber anymore\"\nin V2.\n\n> \n> 4) we could add a commit message for the patch\n> \n\nGood point, done.\n\n[1]: https://commitfest.postgresql.org/43/4295/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 24 Apr 2023 07:52:52 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 11:24 AM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n\nFew comments:\n============\n1.\n+$node_subscriber->init(allows_streaming => 'logical');\n+$node_subscriber->append_conf('postgresql.conf', 'max_replication_slots = 4');\n\nWhy do we need slots on the subscriber?\n\n2.\n+# Speed up the subscription creation\n+$node_primary->safe_psql('postgres', \"SELECT pg_log_standby_snapshot()\");\n+\n+# Explicitly shut down psql instance gracefully - to avoid hangs\n+# or worse on windows\n+$psql_subscriber{subscriber_stdin} .= \"\\\\q\\n\";\n+$psql_subscriber{run}->finish;\n+\n+# Insert some rows on the primary\n+$node_primary->safe_psql('postgres',\n+ qq[INSERT INTO tab_rep select generate_series(1,10);]);\n+\n+$node_primary->wait_for_replay_catchup($node_standby);\n+\n+# To speed up the wait_for_subscription_sync\n+$node_primary->safe_psql('postgres', \"SELECT pg_log_standby_snapshot()\");\n+$node_subscriber->wait_for_subscription_sync($node_standby, 'tap_sub');\n\nIt is not clear to me why you need to do pg_log_standby_snapshot() twice.\n\n3. Why do you need $psql_subscriber to be used in a different way\ninstead of using safe_psql as is used for node_primary?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 11:54:08 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 9:45 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n>\n> The attached patch also removes:\n>\n> \"\n> -log_min_messages = 'debug2'\n> -log_error_verbosity = verbose\n> \"\n>\n> as also discussed in [1].\n>\n\nI agree that we should reduce the log level here. It is discussed in\nan email [1]. I'll push this part tomorrow unless Andres or someone\nelse thinks that we still need this.\n\n[1] - https://www.postgresql.org/message-id/523315.1681245505%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:08:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 11:54 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Apr 24, 2023 at 11:24 AM Drouvot, Bertrand\n> <[email protected]> wrote:\n> >\n>\n> Few comments:\n> ============\n>\n\n+# We can not test if the WAL file still exists immediately.\n+# We need to let some time to the standby to actually \"remove\" it.\n+my $i = 0;\n+while (1)\n+{\n+ last if !-f $standby_walfile;\n+ if ($i++ == 10 * $default_timeout)\n+ {\n+ die\n+ \"could not determine if WAL file has been retained or not, can't continue\";\n+ }\n+ usleep(100_000);\n+}\n\nIs this adhoc wait required because we can't guarantee that the\ncheckpoint is complete on standby even after using wait_for_catchup?\nIs there a guarantee that it can never fail on some slower machines?\n\nBTW, for the second test is it necessary that we first ensure that the\nWAL file has not been retained on the primary?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 15:15:38 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/24/23 8:24 AM, Amit Kapila wrote:\n> On Mon, Apr 24, 2023 at 11:24 AM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n> \n> Few comments:\n> ============\n\nThanks for looking at it!\n\n> 1.\n> +$node_subscriber->init(allows_streaming => 'logical');\n> +$node_subscriber->append_conf('postgresql.conf', 'max_replication_slots = 4');\n> \n> Why do we need slots on the subscriber?\n> \n\nGood point, it's not needed. I guess it has been missed during my initial patch clean up.\n\nFixed in V3 attached.\n\n> 2.\n> +# Speed up the subscription creation\n> +$node_primary->safe_psql('postgres', \"SELECT pg_log_standby_snapshot()\");\n> +\n> +# Explicitly shut down psql instance gracefully - to avoid hangs\n> +# or worse on windows\n> +$psql_subscriber{subscriber_stdin} .= \"\\\\q\\n\";\n> +$psql_subscriber{run}->finish;\n> +\n> +# Insert some rows on the primary\n> +$node_primary->safe_psql('postgres',\n> + qq[INSERT INTO tab_rep select generate_series(1,10);]);\n> +\n> +$node_primary->wait_for_replay_catchup($node_standby);\n> +\n> +# To speed up the wait_for_subscription_sync\n> +$node_primary->safe_psql('postgres', \"SELECT pg_log_standby_snapshot()\");\n> +$node_subscriber->wait_for_subscription_sync($node_standby, 'tap_sub');\n> \n> It is not clear to me why you need to do pg_log_standby_snapshot() twice.\n\nThat's because there is 2 logical slot creations that have the be done on the standby.\n\nThe one for the subscription:\n\n\"\nCREATE_REPLICATION_SLOT \"tap_sub\" LOGICAL pgoutput (SNAPSHOT 'nothing')\n\"\n\nAnd the one for the data sync:\n\n\"\nCREATE_REPLICATION_SLOT \"pg_16389_sync_16384_7225540800768250444\" LOGICAL pgoutput (SNAPSHOT 'use')\n\"\n\nWithout the second \"pg_log_standby_snapshot()\" then wait_for_subscription_sync() would be waiting\nsome time on the poll for \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\"\n\nAdding a comment in V3 to explain the need for the second pg_log_standby_snapshot().\n\n> \n> 3. Why do you need $psql_subscriber to be used in a different way\n> instead of using safe_psql as is used for node_primary?\n> \n\nBecause safe_psql() would wait for activity on the primary without being able to launch\npg_log_standby_snapshot() on the primary while waiting. psql_subscriber() allows\nto not wait synchronously.\n\nAlso adding a comment in V3 to explain why safe_psql() is not being used here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 24 Apr 2023 12:06:53 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
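A rough SQL-level sketch of what is being sped up; the TAP test drives this through the replication protocol, and the slot name is the one quoted above:

```sql
-- On the standby: creating a logical slot waits until a running-xacts
-- record from the primary lets it build its initial snapshot.
SELECT pg_create_logical_replication_slot('tap_sub', 'pgoutput');  -- may block

-- On the primary: emit such a record immediately instead of waiting
-- for the next background cycle.
SELECT pg_log_standby_snapshot();
```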
{
"msg_contents": "Hi,\n\nOn 4/24/23 11:45 AM, Amit Kapila wrote:\n> On Mon, Apr 24, 2023 at 11:54 AM Amit Kapila <[email protected]> wrote:\n>>\n>> On Mon, Apr 24, 2023 at 11:24 AM Drouvot, Bertrand\n>> <[email protected]> wrote:\n>>>\n>>\n>> Few comments:\n>> ============\n>>\n> \n> +# We can not test if the WAL file still exists immediately.\n> +# We need to let some time to the standby to actually \"remove\" it.\n> +my $i = 0;\n> +while (1)\n> +{\n> + last if !-f $standby_walfile;\n> + if ($i++ == 10 * $default_timeout)\n> + {\n> + die\n> + \"could not determine if WAL file has been retained or not, can't continue\";\n> + }\n> + usleep(100_000);\n> +}\n> \n> Is this adhoc wait required because we can't guarantee that the\n> checkpoint is complete on standby even after using wait_for_catchup?\n\nYes, the restart point on the standby is not necessary completed even after wait_for_catchup is done.\n\n> Is there a guarantee that it can never fail on some slower machines?\n> \n\nWe are waiting here at a maximum for 10 * $default_timeout (means 3 minutes) before\nwe time out. Would you prefer to wait more than 3 minutes at a maximum?\n\n> BTW, for the second test is it necessary that we first ensure that the\n> WAL file has not been retained on the primary?\n> \n\nI was not sure it's worth it too. Idea was more: it's useless to verify it is removed on\nthe standby if we are not 100% sure it has been removed on the primary first. But yeah, we can get\nrid of this test if you prefer.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:06:59 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 3:36 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 4/24/23 8:24 AM, Amit Kapila wrote:\n>\n> > 2.\n> > +# Speed up the subscription creation\n> > +$node_primary->safe_psql('postgres', \"SELECT pg_log_standby_snapshot()\");\n> > +\n> > +# Explicitly shut down psql instance gracefully - to avoid hangs\n> > +# or worse on windows\n> > +$psql_subscriber{subscriber_stdin} .= \"\\\\q\\n\";\n> > +$psql_subscriber{run}->finish;\n> > +\n> > +# Insert some rows on the primary\n> > +$node_primary->safe_psql('postgres',\n> > + qq[INSERT INTO tab_rep select generate_series(1,10);]);\n> > +\n> > +$node_primary->wait_for_replay_catchup($node_standby);\n> > +\n> > +# To speed up the wait_for_subscription_sync\n> > +$node_primary->safe_psql('postgres', \"SELECT pg_log_standby_snapshot()\");\n> > +$node_subscriber->wait_for_subscription_sync($node_standby, 'tap_sub');\n> >\n> > It is not clear to me why you need to do pg_log_standby_snapshot() twice.\n>\n> That's because there is 2 logical slot creations that have the be done on the standby.\n>\n> The one for the subscription:\n>\n> \"\n> CREATE_REPLICATION_SLOT \"tap_sub\" LOGICAL pgoutput (SNAPSHOT 'nothing')\n> \"\n>\n> And the one for the data sync:\n>\n> \"\n> CREATE_REPLICATION_SLOT \"pg_16389_sync_16384_7225540800768250444\" LOGICAL pgoutput (SNAPSHOT 'use')\n> \"\n>\n> Without the second \"pg_log_standby_snapshot()\" then wait_for_subscription_sync() would be waiting\n> some time on the poll for \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\"\n>\n> Adding a comment in V3 to explain the need for the second pg_log_standby_snapshot().\n>\n\nWon't this still be unpredictable because it is possible that the\ntablesync worker may take more time to get launched or create a\nreplication slot? If that happens after your second\npg_log_standby_snapshot() then wait_for_subscription_sync() will be\nhanging. Wouldn't it be better to create a subscription with\n(copy_data = false) to make it predictable and then we won't need\npg_log_standby_snapshot() to be performed twice?\n\nIf you agree with the above suggestion then you probably need to move\nwait_for_subscription_sync() before Insert.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Apr 2023 09:53:25 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
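A sketch of the suggested setup; the subscription name matches the thread, while the connection string and publication name are placeholders:

```sql
-- Run on the subscriber, pointing at the standby. With copy_data = false
-- no tablesync worker (and no extra tablesync slot) is created, so a
-- single pg_log_standby_snapshot() on the primary is enough.
CREATE SUBSCRIPTION tap_sub
    CONNECTION 'host=standby.example port=5432 dbname=postgres'
    PUBLICATION tap_pub
    WITH (copy_data = false);
```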
{
"msg_contents": "On Mon, Apr 24, 2023 at 5:38 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 4/24/23 11:45 AM, Amit Kapila wrote:\n> > On Mon, Apr 24, 2023 at 11:54 AM Amit Kapila <[email protected]> wrote:\n> >>\n> >> On Mon, Apr 24, 2023 at 11:24 AM Drouvot, Bertrand\n> >> <[email protected]> wrote:\n> >>>\n> >>\n> >> Few comments:\n> >> ============\n> >>\n> >\n> > +# We can not test if the WAL file still exists immediately.\n> > +# We need to let some time to the standby to actually \"remove\" it.\n> > +my $i = 0;\n> > +while (1)\n> > +{\n> > + last if !-f $standby_walfile;\n> > + if ($i++ == 10 * $default_timeout)\n> > + {\n> > + die\n> > + \"could not determine if WAL file has been retained or not, can't continue\";\n> > + }\n> > + usleep(100_000);\n> > +}\n> >\n> > Is this adhoc wait required because we can't guarantee that the\n> > checkpoint is complete on standby even after using wait_for_catchup?\n>\n> Yes, the restart point on the standby is not necessary completed even after wait_for_catchup is done.\n>\n> > Is there a guarantee that it can never fail on some slower machines?\n> >\n>\n> We are waiting here at a maximum for 10 * $default_timeout (means 3 minutes) before\n> we time out. Would you prefer to wait more than 3 minutes at a maximum?\n>\n\nNo, because I don't know what would be a suitable timeout here. At\nthis stage, I don't have a good idea on how to implement this test in\na better way. Can we split this into a separate patch as the first\ntest is a bit straightforward, we can push that one and then\nbrainstorm on if there is a better way to test this functionality.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Apr 2023 10:13:48 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/25/23 6:23 AM, Amit Kapila wrote:\n> On Mon, Apr 24, 2023 at 3:36 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> Without the second \"pg_log_standby_snapshot()\" then wait_for_subscription_sync() would be waiting\n>> some time on the poll for \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\"\n>>\n>> Adding a comment in V3 to explain the need for the second pg_log_standby_snapshot().\n>>\n> \n> Won't this still be unpredictable because it is possible that the\n> tablesync worker may take more time to get launched or create a\n> replication slot? If that happens after your second\n> pg_log_standby_snapshot() then wait_for_subscription_sync() will be\n> hanging. \n\nOh right, that looks like a possible scenario.\n\n> Wouldn't it be better to create a subscription with\n> (copy_data = false) to make it predictable and then we won't need\n> pg_log_standby_snapshot() to be performed twice?\n> \n> If you agree with the above suggestion then you probably need to move\n> wait_for_subscription_sync() before Insert.\n> \n\nI like that idea, thanks! Done in V4 attached.\n\nNot related to the above corner case, but while re-reading the patch I also added:\n\n\"\n$node_primary->wait_for_replay_catchup($node_standby);\n\"\n\nbetween the publication creation on the primary and the subscription to the standby\n(to ensure the publication gets replicated before we request for the subscription creation).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 25 Apr 2023 09:19:43 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/25/23 6:43 AM, Amit Kapila wrote:\n> On Mon, Apr 24, 2023 at 5:38 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> We are waiting here at a maximum for 10 * $default_timeout (means 3 minutes) before\n>> we time out. Would you prefer to wait more than 3 minutes at a maximum?\n>>\n> \n> No, because I don't know what would be a suitable timeout here.\n\nYeah, I understand that. On the other hand, there is other places that\nrely on a timeout, for example:\n\n- wait_for_catchup(), wait_for_slot_catchup(),\nwait_for_subscription_sync() by making use of poll_query_until.\n- wait_for_log() by setting a max_attempts.\n\nCouldn't we have the same concern for those ones? (aka be suitable on\nslower machines).\n\n> At\n> this stage, I don't have a good idea on how to implement this test in\n> a better way. Can we split this into a separate patch as the first\n> test is a bit straightforward, we can push that one and then\n> brainstorm on if there is a better way to test this functionality.\n> \n\nI created a dedicated v4-0002-Add-retained-WAL-test-in-035_standby_logical_deco.patch\njust shared up-thread.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 25 Apr 2023 09:25:39 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Tue, 25 Apr 2023 at 12:51, Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On 4/25/23 6:23 AM, Amit Kapila wrote:\n> > On Mon, Apr 24, 2023 at 3:36 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n> >>\n> >> Without the second \"pg_log_standby_snapshot()\" then wait_for_subscription_sync() would be waiting\n> >> some time on the poll for \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\"\n> >>\n> >> Adding a comment in V3 to explain the need for the second pg_log_standby_snapshot().\n> >>\n> >\n> > Won't this still be unpredictable because it is possible that the\n> > tablesync worker may take more time to get launched or create a\n> > replication slot? If that happens after your second\n> > pg_log_standby_snapshot() then wait_for_subscription_sync() will be\n> > hanging.\n>\n> Oh right, that looks like a possible scenario.\n>\n> > Wouldn't it be better to create a subscription with\n> > (copy_data = false) to make it predictable and then we won't need\n> > pg_log_standby_snapshot() to be performed twice?\n> >\n> > If you agree with the above suggestion then you probably need to move\n> > wait_for_subscription_sync() before Insert.\n> >\n>\n> I like that idea, thanks! Done in V4 attached.\n>\n> Not related to the above corner case, but while re-reading the patch I also added:\n>\n> \"\n> $node_primary->wait_for_replay_catchup($node_standby);\n> \"\n>\n> between the publication creation on the primary and the subscription to the standby\n> (to ensure the publication gets replicated before we request for the subscription creation).\n\nThanks for the updated patch.\nFew comments:\n1) subscriber_stdout and subscriber_stderr are not required for this\ntest case, we could remove it, I was able to remove those variables\nand run the test successfully:\n+$node_subscriber->start;\n+\n+my %psql_subscriber = (\n+ 'subscriber_stdin' => '',\n+ 'subscriber_stdout' => '',\n+ 'subscriber_stderr' => '');\n+$psql_subscriber{run} = IPC::Run::start(\n+ [ 'psql', '-XA', '-f', '-', '-d',\n$node_subscriber->connstr('postgres') ],\n+ '<',\n+ \\$psql_subscriber{subscriber_stdin},\n+ '>',\n+ \\$psql_subscriber{subscriber_stdout},\n+ '2>',\n+ \\$psql_subscriber{subscriber_stderr},\n+ $psql_timeout);\n\nI ran it like:\nmy %psql_subscriber = (\n'subscriber_stdin' => '');\n$psql_subscriber{run} = IPC::Run::start(\n[ 'psql', '-XA', '-f', '-', '-d', $node_subscriber->connstr('postgres') ],\n'<',\n\\$psql_subscriber{subscriber_stdin},\n$psql_timeout);\n\n2) Also we have changed the default timeout here, why is this change required:\n my $node_cascading_standby =\nPostgreSQL::Test::Cluster->new('cascading_standby');\n+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;\n+my $psql_timeout = IPC::Run::timer(2 * $default_timeout);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 26 Apr 2023 09:36:09 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/26/23 6:06 AM, vignesh C wrote:\n> On Tue, 25 Apr 2023 at 12:51, Drouvot, Bertrand\n> <[email protected]> wrote:\n> Thanks for the updated patch.\n> Few comments:\n\nThanks for looking at it!\n\n> 1) subscriber_stdout and subscriber_stderr are not required for this\n> test case, we could remove it, I was able to remove those variables\n> and run the test successfully:\n> +$node_subscriber->start;\n> +\n> +my %psql_subscriber = (\n> + 'subscriber_stdin' => '',\n> + 'subscriber_stdout' => '',\n> + 'subscriber_stderr' => '');\n> +$psql_subscriber{run} = IPC::Run::start(\n> + [ 'psql', '-XA', '-f', '-', '-d',\n> $node_subscriber->connstr('postgres') ],\n> + '<',\n> + \\$psql_subscriber{subscriber_stdin},\n> + '>',\n> + \\$psql_subscriber{subscriber_stdout},\n> + '2>',\n> + \\$psql_subscriber{subscriber_stderr},\n> + $psql_timeout);\n> \n> I ran it like:\n> my %psql_subscriber = (\n> 'subscriber_stdin' => '');\n> $psql_subscriber{run} = IPC::Run::start(\n> [ 'psql', '-XA', '-f', '-', '-d', $node_subscriber->connstr('postgres') ],\n> '<',\n> \\$psql_subscriber{subscriber_stdin},\n> $psql_timeout);\n> \n\nNot using the 3 std* is also the case for example in 021_row_visibility.pl and 032_relfilenode_reuse.pl\nwhere the \"stderr\" is set but does not seem to be used.\n\nI don't think that's a problem to keep them all and I think it's better to have\nthem re-directed to dedicated places.\n\n> 2) Also we have changed the default timeout here, why is this change required:\n> my $node_cascading_standby =\n> PostgreSQL::Test::Cluster->new('cascading_standby');\n> +my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n> my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;\n> +my $psql_timeout = IPC::Run::timer(2 * $default_timeout);\n\nI think I used 021_row_visibility.pl as an example. But agree there is\nothers .pl that are using the timeout_default as the psql_timeout and that\nthe default is enough in our case. So, using the default in V5 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 26 Apr 2023 10:14:18 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Wed, 26 Apr 2023 at 13:45, Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On 4/26/23 6:06 AM, vignesh C wrote:\n> > On Tue, 25 Apr 2023 at 12:51, Drouvot, Bertrand\n> > <[email protected]> wrote:\n> > Thanks for the updated patch.\n> > Few comments:\n>\n> Thanks for looking at it!\n>\n> > 1) subscriber_stdout and subscriber_stderr are not required for this\n> > test case, we could remove it, I was able to remove those variables\n> > and run the test successfully:\n> > +$node_subscriber->start;\n> > +\n> > +my %psql_subscriber = (\n> > + 'subscriber_stdin' => '',\n> > + 'subscriber_stdout' => '',\n> > + 'subscriber_stderr' => '');\n> > +$psql_subscriber{run} = IPC::Run::start(\n> > + [ 'psql', '-XA', '-f', '-', '-d',\n> > $node_subscriber->connstr('postgres') ],\n> > + '<',\n> > + \\$psql_subscriber{subscriber_stdin},\n> > + '>',\n> > + \\$psql_subscriber{subscriber_stdout},\n> > + '2>',\n> > + \\$psql_subscriber{subscriber_stderr},\n> > + $psql_timeout);\n> >\n> > I ran it like:\n> > my %psql_subscriber = (\n> > 'subscriber_stdin' => '');\n> > $psql_subscriber{run} = IPC::Run::start(\n> > [ 'psql', '-XA', '-f', '-', '-d', $node_subscriber->connstr('postgres') ],\n> > '<',\n> > \\$psql_subscriber{subscriber_stdin},\n> > $psql_timeout);\n> >\n>\n> Not using the 3 std* is also the case for example in 021_row_visibility.pl and 032_relfilenode_reuse.pl\n> where the \"stderr\" is set but does not seem to be used.\n>\n> I don't think that's a problem to keep them all and I think it's better to have\n> them re-directed to dedicated places.\n\nok, that way it will be consistent across others too.\n\n> > 2) Also we have changed the default timeout here, why is this change required:\n> > my $node_cascading_standby =\n> > PostgreSQL::Test::Cluster->new('cascading_standby');\n> > +my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n> > my $default_timeout = $PostgreSQL::Test::Utils::timeout_default;\n> > +my $psql_timeout = IPC::Run::timer(2 * $default_timeout);\n>\n> I think I used 021_row_visibility.pl as an example. But agree there is\n> others .pl that are using the timeout_default as the psql_timeout and that\n> the default is enough in our case. So, using the default in V5 attached.\n>\n\nThanks for fixing this.\n\nThere was one typo in the commit message, subscribtion should be\nsubscription, the rest of the changes looks good to me:\nSubject: [PATCH v5] Add subscribtion to the standby test in\n 035_standby_logical_decoding.pl\n\nAdding one test, to verify that subscribtion to the standby is possible.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 26 Apr 2023 14:42:06 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Mon, Apr 24, 2023 8:07 PM Drouvot, Bertrand <[email protected]> wrote:\r\n> \r\n> On 4/24/23 11:45 AM, Amit Kapila wrote:\r\n> > On Mon, Apr 24, 2023 at 11:54 AM Amit Kapila <[email protected]>\r\n> wrote:\r\n> >>\r\n> >> On Mon, Apr 24, 2023 at 11:24 AM Drouvot, Bertrand\r\n> >> <[email protected]> wrote:\r\n> >>>\r\n> >>\r\n> >> Few comments:\r\n> >> ============\r\n> >>\r\n> >\r\n> > +# We can not test if the WAL file still exists immediately.\r\n> > +# We need to let some time to the standby to actually \"remove\" it.\r\n> > +my $i = 0;\r\n> > +while (1)\r\n> > +{\r\n> > + last if !-f $standby_walfile;\r\n> > + if ($i++ == 10 * $default_timeout)\r\n> > + {\r\n> > + die\r\n> > + \"could not determine if WAL file has been retained or not, can't continue\";\r\n> > + }\r\n> > + usleep(100_000);\r\n> > +}\r\n> >\r\n> > Is this adhoc wait required because we can't guarantee that the\r\n> > checkpoint is complete on standby even after using wait_for_catchup?\r\n> \r\n> Yes, the restart point on the standby is not necessary completed even after\r\n> wait_for_catchup is done.\r\n> \r\n\r\nI think that's because when replaying a checkpoint record, the startup process\r\nof standby only saves the information of the checkpoint, and we need to wait for\r\nthe checkpointer to perform a restartpoint (see RecoveryRestartPoint), right? If\r\nso, could we force a checkpoint on standby? After this, the standby should have\r\ncompleted the restartpoint and we don't need to wait.\r\n\r\nBesides, would it be better to wait for the cascading standby? If the wal log\r\nfile needed for cascading standby is removed on the standby, the subsequent test\r\nwill fail. Do we need to consider this scenario? I saw the following error\r\nmessage after setting recovery_min_apply_delay to 5s on the cascading standby,\r\nand the test failed due to a timeout while waiting for cascading standby.\r\n\r\nLog of cascading standby node:\r\nFATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000003 has already been removed\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Wed, 26 Apr 2023 09:58:04 +0000",
"msg_from": "\"Yu Shi (Fujitsu)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RE: Add two missing tests in 035_standby_logical_decoding.pl"
},
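In SQL terms, the suggestion boils down to running this on the standby (a sketch; the forced checkpoint completes the pending restartpoint):

```sql
-- On the standby: CHECKPOINT during recovery forces a restartpoint, after
-- which WAL segments retained only for now-invalidated slots can be
-- recycled without waiting for the checkpointer's own schedule.
CHECKPOINT;

-- The invalidated slots themselves remain visible and flagged:
SELECT slot_name, conflicting FROM pg_replication_slots;
```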
{
"msg_contents": "> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index 6f7f4e5de4..819667d42a 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> @@ -2644,7 +2644,16 @@ sub wait_for_catchup\n> \t}\n> \tif (!defined($target_lsn))\n> \t{\n> -\t\t$target_lsn = $self->lsn('write');\n> +\t\tmy $isrecovery = $self->safe_psql('postgres', \"SELECT pg_is_in_recovery()\");\n> +\t\tchomp($isrecovery);\n> +\t\tif ($isrecovery eq 't')\n> +\t\t{\n> +\t\t\t$target_lsn = $self->lsn('replay');\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\t$target_lsn = $self->lsn('write');\n> +\t\t}\n\nPlease modify the function's documentation to account for this code change.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Porque Kim no hacía nada, pero, eso sí,\ncon extraordinario éxito\" (\"Kim\", Kipling)\n\n\n",
"msg_date": "Wed, 26 Apr 2023 12:27:51 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/26/23 12:27 PM, Alvaro Herrera wrote:\n>> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n>> index 6f7f4e5de4..819667d42a 100644\n>> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n>> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n>> @@ -2644,7 +2644,16 @@ sub wait_for_catchup\n>> \t}\n>> \tif (!defined($target_lsn))\n>> \t{\n>> -\t\t$target_lsn = $self->lsn('write');\n>> +\t\tmy $isrecovery = $self->safe_psql('postgres', \"SELECT pg_is_in_recovery()\");\n>> +\t\tchomp($isrecovery);\n>> +\t\tif ($isrecovery eq 't')\n>> +\t\t{\n>> +\t\t\t$target_lsn = $self->lsn('replay');\n>> +\t\t}\n>> +\t\telse\n>> +\t\t{\n>> +\t\t\t$target_lsn = $self->lsn('write');\n>> +\t\t}\n> \n> Please modify the function's documentation to account for this code change.\n> \n\nGood point, thanks! Done in V6 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 26 Apr 2023 13:10:10 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/26/23 11:12 AM, vignesh C wrote:\n> On Wed, 26 Apr 2023 at 13:45, Drouvot, Bertrand\n> \n> There was one typo in the commit message, subscribtion should be\n> subscription, the rest of the changes looks good to me:\n> Subject: [PATCH v5] Add subscribtion to the standby test in\n> 035_standby_logical_decoding.pl\n> \n> Adding one test, to verify that subscribtion to the standby is possible.\n> \n\nOops, at least I repeated it twice ;-)\nFixed in V6 that I just shared up-thread.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 26 Apr 2023 13:13:59 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/26/23 11:58 AM, Yu Shi (Fujitsu) wrote:\n> On Mon, Apr 24, 2023 8:07 PM Drouvot, Bertrand <[email protected]> wrote:\n\n> I think that's because when replaying a checkpoint record, the startup process\n> of standby only saves the information of the checkpoint, and we need to wait for\n> the checkpointer to perform a restartpoint (see RecoveryRestartPoint), right? If\n> so, could we force a checkpoint on standby? After this, the standby should have\n> completed the restartpoint and we don't need to wait.\n> \n\nThanks for looking at it!\n\nOh right, that looks like good a good way to ensure the WAL file is removed on the standby\nso that we don't need to wait.\n\nImplemented that way in V6 attached and that works fine.\n\n> Besides, would it be better to wait for the cascading standby? If the wal log\n> file needed for cascading standby is removed on the standby, the subsequent test\n> will fail. \n\nGood catch! I agree that we have to wait on the cascading standby before removing\nthe WAL files. It's done in V6 (and the test is not failing anymore if we set a\nrecovery_min_apply_delay to 5s on the cascading standby).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 26 Apr 2023 16:23:06 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Wed, Apr 26, 2023 at 4:41 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 4/26/23 12:27 PM, Alvaro Herrera wrote:\n> >> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> >> index 6f7f4e5de4..819667d42a 100644\n> >> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> >> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> >> @@ -2644,7 +2644,16 @@ sub wait_for_catchup\n> >> }\n> >> if (!defined($target_lsn))\n> >> {\n> >> - $target_lsn = $self->lsn('write');\n> >> + my $isrecovery = $self->safe_psql('postgres', \"SELECT pg_is_in_recovery()\");\n> >> + chomp($isrecovery);\n> >> + if ($isrecovery eq 't')\n> >> + {\n> >> + $target_lsn = $self->lsn('replay');\n> >> + }\n> >> + else\n> >> + {\n> >> + $target_lsn = $self->lsn('write');\n> >> + }\n> >\n> > Please modify the function's documentation to account for this code change.\n> >\n>\n> Good point, thanks! Done in V6 attached.\n>\n\n+When in recovery, the default value of target_lsn is $node->lsn('replay')\n+instead. This is needed when the publisher passed to\nwait_for_subscription_sync()\n+is a standby node.\n\nI think this will be useful whenever wait_for_catchup has been called\nfor a standby node (where self is a standby node). I have tried even\nby commenting wait_for_subscription_sync in the new test then it fails\nfor $node_standby->wait_for_catchup('tap_sub');. So instead, how about\na comment like: \"When in recovery, the default value of target_lsn is\n$node->lsn('replay') instead which ensures that the cascaded standby\nhas caught up to what has been replayed on the standby.\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 27 Apr 2023 09:07:51 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/27/23 5:37 AM, Amit Kapila wrote:\n> On Wed, Apr 26, 2023 at 4:41 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n> \n> +When in recovery, the default value of target_lsn is $node->lsn('replay')\n> +instead. This is needed when the publisher passed to\n> wait_for_subscription_sync()\n> +is a standby node.\n> \n> I think this will be useful whenever wait_for_catchup has been called\n> for a standby node (where self is a standby node). I have tried even\n> by commenting wait_for_subscription_sync in the new test then it fails\n> for $node_standby->wait_for_catchup('tap_sub');. So instead, how about\n> a comment like: \"When in recovery, the default value of target_lsn is\n> $node->lsn('replay') instead which ensures that the cascaded standby\n> has caught up to what has been replayed on the standby.\"?\n> \n\nI did it that way because wait_for_subscription_sync() was the first case I had\nto work on but I do agree that your wording better describe the intend of the new\ncode.\n\nChanged in V7 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 27 Apr 2023 09:35:03 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 1:05 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 4/27/23 5:37 AM, Amit Kapila wrote:\n> > On Wed, Apr 26, 2023 at 4:41 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n> >\n> > +When in recovery, the default value of target_lsn is $node->lsn('replay')\n> > +instead. This is needed when the publisher passed to\n> > wait_for_subscription_sync()\n> > +is a standby node.\n> >\n> > I think this will be useful whenever wait_for_catchup has been called\n> > for a standby node (where self is a standby node). I have tried even\n> > by commenting wait_for_subscription_sync in the new test then it fails\n> > for $node_standby->wait_for_catchup('tap_sub');. So instead, how about\n> > a comment like: \"When in recovery, the default value of target_lsn is\n> > $node->lsn('replay') instead which ensures that the cascaded standby\n> > has caught up to what has been replayed on the standby.\"?\n> >\n>\n> I did it that way because wait_for_subscription_sync() was the first case I had\n> to work on but I do agree that your wording better describe the intend of the new\n> code.\n>\n> Changed in V7 attached.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 27 Apr 2023 15:24:43 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "\n\nOn 4/27/23 11:54 AM, Amit Kapila wrote:\n> On Thu, Apr 27, 2023 at 1:05 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> On 4/27/23 5:37 AM, Amit Kapila wrote:\n>>> On Wed, Apr 26, 2023 at 4:41 PM Drouvot, Bertrand\n>>> <[email protected]> wrote:\n>>>\n>>> +When in recovery, the default value of target_lsn is $node->lsn('replay')\n>>> +instead. This is needed when the publisher passed to\n>>> wait_for_subscription_sync()\n>>> +is a standby node.\n>>>\n>>> I think this will be useful whenever wait_for_catchup has been called\n>>> for a standby node (where self is a standby node). I have tried even\n>>> by commenting wait_for_subscription_sync in the new test then it fails\n>>> for $node_standby->wait_for_catchup('tap_sub');. So instead, how about\n>>> a comment like: \"When in recovery, the default value of target_lsn is\n>>> $node->lsn('replay') instead which ensures that the cascaded standby\n>>> has caught up to what has been replayed on the standby.\"?\n>>>\n>>\n>> I did it that way because wait_for_subscription_sync() was the first case I had\n>> to work on but I do agree that your wording better describe the intend of the new\n>> code.\n>>\n>> Changed in V7 attached.\n>>\n> \n> Pushed.\n> \n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 27 Apr 2023 12:54:30 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Wed, Apr 26, 2023 at 7:53 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> > Besides, would it be better to wait for the cascading standby? If the wal log\n> > file needed for cascading standby is removed on the standby, the subsequent test\n> > will fail.\n>\n> Good catch! I agree that we have to wait on the cascading standby before removing\n> the WAL files. It's done in V6 (and the test is not failing anymore if we set a\n> recovery_min_apply_delay to 5s on the cascading standby).\n>\n\n+# Get the restart_lsn from an invalidated slot\n+my $restart_lsn = $node_standby->safe_psql('postgres',\n+ \"SELECT restart_lsn from pg_replication_slots WHERE slot_name =\n'vacuum_full_activeslot' and conflicting is true;\"\n+);\n+\n+chomp($restart_lsn);\n+\n+# Get the WAL file name associated to this lsn on the primary\n+my $walfile_name = $node_primary->safe_psql('postgres',\n+ \"SELECT pg_walfile_name('$restart_lsn')\");\n+\n+chomp($walfile_name);\n+\n+# Check the WAL file is still on the primary\n+ok(-f $node_primary->data_dir . '/pg_wal/' . $walfile_name,\n+ \"WAL file still on the primary\");\n\nHow is it guaranteed that the WAL file corresponding to the\ninvalidated slot on standby will still be present on primary? Can you\nplease explain the logic behind this test a bit more like how the WAL\nfile switch helps you to achieve the purpose?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 28 Apr 2023 09:25:16 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 4/28/23 5:55 AM, Amit Kapila wrote:\n> On Wed, Apr 26, 2023 at 7:53 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n> \n> +# Get the restart_lsn from an invalidated slot\n> +my $restart_lsn = $node_standby->safe_psql('postgres',\n> + \"SELECT restart_lsn from pg_replication_slots WHERE slot_name =\n> 'vacuum_full_activeslot' and conflicting is true;\"\n> +);\n> +\n> +chomp($restart_lsn);\n> +\n> +# Get the WAL file name associated to this lsn on the primary\n> +my $walfile_name = $node_primary->safe_psql('postgres',\n> + \"SELECT pg_walfile_name('$restart_lsn')\");\n> +\n> +chomp($walfile_name);\n> +\n> +# Check the WAL file is still on the primary\n> +ok(-f $node_primary->data_dir . '/pg_wal/' . $walfile_name,\n> + \"WAL file still on the primary\");\n> \n> How is it guaranteed that the WAL file corresponding to the\n> invalidated slot on standby will still be present on primary?\n\nThe slot(s) have been invalidated by the \"vacuum full\" test just above\nthis one. So I think the WAL we are looking for is the last one being used\nby the primary. As no activity happened on it since the vacuum full it looks to\nme that it should still be present.\n\nBut I may have missed something and maybe that's not guarantee that this WAL is still there in all the cases.\nIn that case I think it's better to remove this test (it does not provide added value here).\n\nTest removed in V7 attached.\n\n> Can you\n> please explain the logic behind this test a bit more like how the WAL\n> file switch helps you to achieve the purpose?\n> \n\nThe idea was to generate enough \"wal switch\" on the primary to ensure\nthe WAL file has been removed.\n\nI gave another thought on it and I think we can skip the test that the WAL is\nnot on the primary any more. That way, one \"wal switch\" seems to be enough\nto see it removed on the standby.\n\nIt's done in V7.\n\nV7 is not doing \"extra tests\" than necessary and I think it's probably better like this.\n\nI can see V7 failing on \"Cirrus CI / macOS - Ventura - Meson\" only (other machines are not complaining).\n\nIt does fail on \"invalidated logical slots do not lead to retaining WAL\", see https://cirrus-ci.com/task/4518083541336064\n\nI'm not sure why it is failing, any idea?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 28 Apr 2023 10:54:00 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, Apr 28, 2023 at 2:24 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> > Can you\n> > please explain the logic behind this test a bit more like how the WAL\n> > file switch helps you to achieve the purpose?\n> >\n>\n> The idea was to generate enough \"wal switch\" on the primary to ensure\n> the WAL file has been removed.\n>\n> I gave another thought on it and I think we can skip the test that the WAL is\n> not on the primary any more. That way, one \"wal switch\" seems to be enough\n> to see it removed on the standby.\n>\n> It's done in V7.\n>\n> V7 is not doing \"extra tests\" than necessary and I think it's probably better like this.\n>\n> I can see V7 failing on \"Cirrus CI / macOS - Ventura - Meson\" only (other machines are not complaining).\n>\n> It does fail on \"invalidated logical slots do not lead to retaining WAL\", see https://cirrus-ci.com/task/4518083541336064\n>\n> I'm not sure why it is failing, any idea?\n>\n\nI think the reason for the failure is that on standby, the test is not\nable to remove the file corresponding to the invalid slot. You are\nusing pg_switch_wal() to generate a switch record and I think you need\none more WAL-generating statement after that to achieve your purpose\nwhich is that during checkpoint, the tes removes the WAL file\ncorresponding to an invalid slot. Just doing checkpoint on primary may\nnot serve the need as that doesn't lead to any new insertion of WAL on\nstandby. Is your v6 failing in the same environment? If not, then it\nis probably due to the reason that the test is doing insert after\npg_switch_wal() in that version. Why did you change the order of\ninsert in v7?\n\nBTW, you can confirm the failure by changing the DEBUG2 message in\nRemoveOldXlogFiles() to LOG. In the case, where the test fails, it may\nnot remove the WAL file corresponding to an invalid slot whereas it\nwill remove the WAL file when the test succeeds.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 May 2023 11:58:06 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/2/23 8:28 AM, Amit Kapila wrote:\n> On Fri, Apr 28, 2023 at 2:24 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> I can see V7 failing on \"Cirrus CI / macOS - Ventura - Meson\" only (other machines are not complaining).\n>>\n>> It does fail on \"invalidated logical slots do not lead to retaining WAL\", see https://cirrus-ci.com/task/4518083541336064\n>>\n>> I'm not sure why it is failing, any idea?\n>>\n> \n> I think the reason for the failure is that on standby, the test is not\n> able to remove the file corresponding to the invalid slot. You are\n> using pg_switch_wal() to generate a switch record and I think you need\n> one more WAL-generating statement after that to achieve your purpose\n> which is that during checkpoint, the tes removes the WAL file\n> corresponding to an invalid slot. Just doing checkpoint on primary may\n> not serve the need as that doesn't lead to any new insertion of WAL on\n> standby. Is your v6 failing in the same environment?\n\nThanks for the feedback!\n\nNo V6 was working fine.\n\n> If not, then it\n> is probably due to the reason that the test is doing insert after\n> pg_switch_wal() in that version. Why did you change the order of\n> insert in v7?\n> \n\nI thought doing the insert before the switch was ok and as my local test\nwas running fine I did not re-consider the ordering.\n\n> BTW, you can confirm the failure by changing the DEBUG2 message in\n> RemoveOldXlogFiles() to LOG. In the case, where the test fails, it may\n> not remove the WAL file corresponding to an invalid slot whereas it\n> will remove the WAL file when the test succeeds.\n\nYeah, I added more debug information and what I can see is that the WAL file\nwe want to see removed is \"000000010000000000000003\" while the standby emits:\n\n\"\n2023-05-02 10:03:28.351 UTC [16971][checkpointer] LOG: attempting to remove WAL segments older than log file 000000000000000000000002\n2023-05-02 10:03:28.351 UTC [16971][checkpointer] LOG: recycled write-ahead log file \"000000010000000000000002\"\n\"\n\nAs per your suggestion, changing the insert ordering (like in V8 attached) makes it now work on the failing environment too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 2 May 2023 13:22:26 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Tue, May 2, 2023 at 4:52 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n>\n> As per your suggestion, changing the insert ordering (like in V8 attached) makes it now work on the failing environment too.\n>\n\nI think it is better to use wait_for_replay_catchup() to wait for\nstandby to catch up. I have changed that and a comment in the\nattached. I'll push this tomorrow unless there are further comments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 3 May 2023 15:59:25 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/3/23 12:29 PM, Amit Kapila wrote:\n> On Tue, May 2, 2023 at 4:52 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>>\n>> As per your suggestion, changing the insert ordering (like in V8 attached) makes it now work on the failing environment too.\n>>\n> \n> I think it is better to use wait_for_replay_catchup() to wait for\n> standby to catch up.\n\nOh right, that's a discussion we already had in [1], I should have thought about it.\n\n> I have changed that and a comment in the\n> attached. I'll push this tomorrow unless there are further comments.\n> \n\nLGTM, thanks!\n\n[1]: https://www.postgresql.org/message-id/acbac69e-9ae8-c546-3216-8ecb38e7a93d%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 3 May 2023 14:16:54 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Wed, 3 May 2023 at 15:59, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, May 2, 2023 at 4:52 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n> >\n> >\n> > As per your suggestion, changing the insert ordering (like in V8 attached) makes it now work on the failing environment too.\n> >\n>\n> I think it is better to use wait_for_replay_catchup() to wait for\n> standby to catch up. I have changed that and a comment in the\n> attached. I'll push this tomorrow unless there are further comments.\n\nThanks for posting the updated patch, I had run this test in a loop of\n100 times to verify that there was no failure because of race\nconditions. The 100 times execution passed successfully.\n\nOne suggestion:\n\"wal file\" should be changed to \"WAL file\":\n+# Request a checkpoint on the standby to trigger the WAL file(s) removal\n+$node_standby->safe_psql('postgres', 'checkpoint;');\n+\n+# Verify that the wal file has not been retained on the standby\n+my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 4 May 2023 08:37:19 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Thu, May 4, 2023 at 8:37 AM vignesh C <[email protected]> wrote:\n>\n> Thanks for posting the updated patch, I had run this test in a loop of\n> 100 times to verify that there was no failure because of race\n> conditions. The 100 times execution passed successfully.\n>\n> One suggestion:\n> \"wal file\" should be changed to \"WAL file\":\n> +# Request a checkpoint on the standby to trigger the WAL file(s) removal\n> +$node_standby->safe_psql('postgres', 'checkpoint;');\n> +\n> +# Verify that the wal file has not been retained on the standby\n> +my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n>\n\nThanks for the verification. I have pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 4 May 2023 10:13:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/4/23 6:43 AM, Amit Kapila wrote:\n> On Thu, May 4, 2023 at 8:37 AM vignesh C <[email protected]> wrote:\n>>\n>> Thanks for posting the updated patch, I had run this test in a loop of\n>> 100 times to verify that there was no failure because of race\n>> conditions. The 100 times execution passed successfully.\n>>\n>> One suggestion:\n>> \"wal file\" should be changed to \"WAL file\":\n>> +# Request a checkpoint on the standby to trigger the WAL file(s) removal\n>> +$node_standby->safe_psql('postgres', 'checkpoint;');\n>> +\n>> +# Verify that the wal file has not been retained on the standby\n>> +my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n>>\n> \n> Thanks for the verification. I have pushed the patch.\n> \n\nThanks!\n\nI've marked the CF entry as Committed and moved the associated PostgreSQL 16 Open Item\nto \"resolved before 16beta1\".\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 4 May 2023 07:46:04 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/4/23 6:43 AM, Amit Kapila wrote:\n> On Thu, May 4, 2023 at 8:37 AM vignesh C <[email protected]> wrote:\n>>\n>> Thanks for posting the updated patch, I had run this test in a loop of\n>> 100 times to verify that there was no failure because of race\n>> conditions. The 100 times execution passed successfully.\n>>\n>> One suggestion:\n>> \"wal file\" should be changed to \"WAL file\":\n>> +# Request a checkpoint on the standby to trigger the WAL file(s) removal\n>> +$node_standby->safe_psql('postgres', 'checkpoint;');\n>> +\n>> +# Verify that the wal file has not been retained on the standby\n>> +my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n>>\n> \n> Thanks for the verification. I have pushed the patch.\n> \n\nIt looks like there is still something wrong with this test as there\nare a bunch of cfbot errors on this new test (mainly on macOS - Ventura - Meson).\n\nI'll try to reproduce with more debug infos.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 May 2023 07:38:55 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, May 5, 2023 at 11:08 AM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 5/4/23 6:43 AM, Amit Kapila wrote:\n> > On Thu, May 4, 2023 at 8:37 AM vignesh C <[email protected]> wrote:\n> >>\n> >> Thanks for posting the updated patch, I had run this test in a loop of\n> >> 100 times to verify that there was no failure because of race\n> >> conditions. The 100 times execution passed successfully.\n> >>\n> >> One suggestion:\n> >> \"wal file\" should be changed to \"WAL file\":\n> >> +# Request a checkpoint on the standby to trigger the WAL file(s) removal\n> >> +$node_standby->safe_psql('postgres', 'checkpoint;');\n> >> +\n> >> +# Verify that the wal file has not been retained on the standby\n> >> +my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n> >>\n> >\n> > Thanks for the verification. I have pushed the patch.\n> >\n>\n> It looks like there is still something wrong with this test as there\n> are a bunch of cfbot errors on this new test (mainly on macOS - Ventura - Meson).\n>\n\nIs it possible for you to point me to those failures?\n\n> I'll try to reproduce with more debug infos.\n>\n\nOkay, thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 May 2023 12:34:17 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, 5 May 2023 at 12:34, Amit Kapila <[email protected]> wrote:\n>\n> On Fri, May 5, 2023 at 11:08 AM Drouvot, Bertrand\n> <[email protected]> wrote:\n> >\n> > On 5/4/23 6:43 AM, Amit Kapila wrote:\n> > > On Thu, May 4, 2023 at 8:37 AM vignesh C <[email protected]> wrote:\n> > >>\n> > >> Thanks for posting the updated patch, I had run this test in a loop of\n> > >> 100 times to verify that there was no failure because of race\n> > >> conditions. The 100 times execution passed successfully.\n> > >>\n> > >> One suggestion:\n> > >> \"wal file\" should be changed to \"WAL file\":\n> > >> +# Request a checkpoint on the standby to trigger the WAL file(s) removal\n> > >> +$node_standby->safe_psql('postgres', 'checkpoint;');\n> > >> +\n> > >> +# Verify that the wal file has not been retained on the standby\n> > >> +my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n> > >>\n> > >\n> > > Thanks for the verification. I have pushed the patch.\n> > >\n> >\n> > It looks like there is still something wrong with this test as there\n> > are a bunch of cfbot errors on this new test (mainly on macOS - Ventura - Meson).\n> >\n>\n> Is it possible for you to point me to those failures?\n\nI think these failures are occuring in CFBOT, once such instance is at:\nhttps://cirrus-ci.com/task/6642271152504832?logs=test_world#L39\nhttps://api.cirrus-ci.com/v1/artifact/task/6642271152504832/testrun/build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 5 May 2023 12:41:05 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/5/23 9:11 AM, vignesh C wrote:\n> On Fri, 5 May 2023 at 12:34, Amit Kapila <[email protected]> wrote:\n>>\n>> On Fri, May 5, 2023 at 11:08 AM Drouvot, Bertrand\n>> <[email protected]> wrote:\n>>>\n>>> On 5/4/23 6:43 AM, Amit Kapila wrote:\n>>>> On Thu, May 4, 2023 at 8:37 AM vignesh C <[email protected]> wrote:\n>>>>>\n>>>>> Thanks for posting the updated patch, I had run this test in a loop of\n>>>>> 100 times to verify that there was no failure because of race\n>>>>> conditions. The 100 times execution passed successfully.\n>>>>>\n>>>>> One suggestion:\n>>>>> \"wal file\" should be changed to \"WAL file\":\n>>>>> +# Request a checkpoint on the standby to trigger the WAL file(s) removal\n>>>>> +$node_standby->safe_psql('postgres', 'checkpoint;');\n>>>>> +\n>>>>> +# Verify that the wal file has not been retained on the standby\n>>>>> +my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n>>>>>\n>>>>\n>>>> Thanks for the verification. I have pushed the patch.\n>>>>\n>>>\n>>> It looks like there is still something wrong with this test as there\n>>> are a bunch of cfbot errors on this new test (mainly on macOS - Ventura - Meson).\n>>>\n>>\n>> Is it possible for you to point me to those failures?\n> \n> I think these failures are occuring in CFBOT, once such instance is at:\n> https://cirrus-ci.com/task/6642271152504832?logs=test_world#L39\n> https://api.cirrus-ci.com/v1/artifact/task/6642271152504832/testrun/build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n> \n\nYeah, thanks, that's one of them.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 May 2023 09:18:04 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "\n\nOn 5/5/23 9:04 AM, Amit Kapila wrote:\n> On Fri, May 5, 2023 at 11:08 AM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> On 5/4/23 6:43 AM, Amit Kapila wrote:\n>>> On Thu, May 4, 2023 at 8:37 AM vignesh C <[email protected]> wrote:\n>>>>\n>>>> Thanks for posting the updated patch, I had run this test in a loop of\n>>>> 100 times to verify that there was no failure because of race\n>>>> conditions. The 100 times execution passed successfully.\n>>>>\n>>>> One suggestion:\n>>>> \"wal file\" should be changed to \"WAL file\":\n>>>> +# Request a checkpoint on the standby to trigger the WAL file(s) removal\n>>>> +$node_standby->safe_psql('postgres', 'checkpoint;');\n>>>> +\n>>>> +# Verify that the wal file has not been retained on the standby\n>>>> +my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n>>>>\n>>>\n>>> Thanks for the verification. I have pushed the patch.\n>>>\n>>\n>> It looks like there is still something wrong with this test as there\n>> are a bunch of cfbot errors on this new test (mainly on macOS - Ventura - Meson).\n>>\n> \n> Is it possible for you to point me to those failures?\n> \n>> I'll try to reproduce with more debug infos.\n>>\n> \n> Okay, thanks!\n> \n\nAfter multiple attempts, I got one failing one.\n\nIssue is that we expect this file to be removed:\n\n[07:24:27.261](0.899s) #WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n\nBut the standby emits:\n\n2023-05-05 07:24:27.216 UTC [17909][client backend] [035_standby_logical_decoding.pl][3/6:0] LOG: statement: checkpoint;\n2023-05-05 07:24:27.216 UTC [17745][checkpointer] LOG: restartpoint starting: immediate wait\n2023-05-05 07:24:27.259 UTC [17745][checkpointer] LOG: attempting to remove WAL segments older than log file 000000000000000000000002\n\nSo it seems the test is not right (missing activity??), not sure why yet.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 May 2023 09:46:41 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, May 5, 2023 at 1:16 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n>\n> After multiple attempts, I got one failing one.\n>\n> Issue is that we expect this file to be removed:\n>\n> [07:24:27.261](0.899s) #WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n>\n> But the standby emits:\n>\n> 2023-05-05 07:24:27.216 UTC [17909][client backend] [035_standby_logical_decoding.pl][3/6:0] LOG: statement: checkpoint;\n> 2023-05-05 07:24:27.216 UTC [17745][checkpointer] LOG: restartpoint starting: immediate wait\n> 2023-05-05 07:24:27.259 UTC [17745][checkpointer] LOG: attempting to remove WAL segments older than log file 000000000000000000000002\n>\n> So it seems the test is not right (missing activity??), not sure why yet.\n>\n\nCan you try to print the value returned by\nXLogGetReplicationSlotMinimumLSN() in KeepLogSeg() on standby? Also,\nplease try to print \"attempting to remove WAL segments ...\" on the\nprimary. We can see, if by any chance some slot is holding us to\nremove the required WAL file.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 May 2023 14:59:06 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, May 5, 2023 at 2:59 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, May 5, 2023 at 1:16 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n> >\n> >\n> > After multiple attempts, I got one failing one.\n> >\n> > Issue is that we expect this file to be removed:\n> >\n> > [07:24:27.261](0.899s) #WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n> >\n> > But the standby emits:\n> >\n> > 2023-05-05 07:24:27.216 UTC [17909][client backend] [035_standby_logical_decoding.pl][3/6:0] LOG: statement: checkpoint;\n> > 2023-05-05 07:24:27.216 UTC [17745][checkpointer] LOG: restartpoint starting: immediate wait\n> > 2023-05-05 07:24:27.259 UTC [17745][checkpointer] LOG: attempting to remove WAL segments older than log file 000000000000000000000002\n> >\n> > So it seems the test is not right (missing activity??), not sure why yet.\n> >\n>\n> Can you try to print the value returned by\n> XLogGetReplicationSlotMinimumLSN() in KeepLogSeg() on standby? Also,\n> please try to print \"attempting to remove WAL segments ...\" on the\n> primary. We can see, if by any chance some slot is holding us to\n> remove the required WAL file.\n>\n\nWe can also probably check the values of 'endptr', 'receivePtr', and\nreplayPtr on standby in the below code:\n\nCreateRestartPoint()\n{\n...\n/*\n* Retreat _logSegNo using the current end of xlog replayed or received,\n* whichever is later.\n*/\nreceivePtr = GetWalRcvFlushRecPtr(NULL, NULL);\nreplayPtr = GetXLogReplayRecPtr(&replayTLI);\nendptr = (receivePtr < replayPtr) ? replayPtr : receivePtr;\nKeepLogSeg(endptr, &_logSegNo);\n...\n}\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 May 2023 15:30:59 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "\n\nOn 5/5/23 11:29 AM, Amit Kapila wrote:\n> On Fri, May 5, 2023 at 1:16 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>>\n>> After multiple attempts, I got one failing one.\n>>\n>> Issue is that we expect this file to be removed:\n>>\n>> [07:24:27.261](0.899s) #WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n>>\n>> But the standby emits:\n>>\n>> 2023-05-05 07:24:27.216 UTC [17909][client backend] [035_standby_logical_decoding.pl][3/6:0] LOG: statement: checkpoint;\n>> 2023-05-05 07:24:27.216 UTC [17745][checkpointer] LOG: restartpoint starting: immediate wait\n>> 2023-05-05 07:24:27.259 UTC [17745][checkpointer] LOG: attempting to remove WAL segments older than log file 000000000000000000000002\n>>\n>> So it seems the test is not right (missing activity??), not sure why yet.\n>>\n> \n> Can you try to print the value returned by\n> XLogGetReplicationSlotMinimumLSN() in KeepLogSeg() on standby? Also,\n> please try to print \"attempting to remove WAL segments ...\" on the\n> primary. We can see, if by any chance some slot is holding us to\n> remove the required WAL file.\n> \n\nI turned DEBUG2 on. We can also see on the primary:\n\n2023-05-05 08:23:30.843 UTC [16833][checkpointer] LOCATION: CheckPointReplicationSlots, slot.c:1576\n2023-05-05 08:23:30.844 UTC [16833][checkpointer] DEBUG: 00000: snapshot of 0+0 running transaction ids (lsn 0/40000D0 oldest xid 746 latest complete 745 next xid 746)\n2023-05-05 08:23:30.844 UTC [16833][checkpointer] LOCATION: LogCurrentRunningXacts, standby.c:1377\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOG: 00000: BDT1 about to call RemoveOldXlogFiles in CreateCheckPoint\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: CreateCheckPoint, xlog.c:6835\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOG: 00000: attempting to remove WAL segments older than log file 000000000000000000000002\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: RemoveOldXlogFiles, xlog.c:3560\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000001\"\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: RemoveXlogFile, xlog.c:3708\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000002\"\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: RemoveXlogFile, xlog.c:3708\n2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG: 00000: SlruScanDirectory invoking callback on pg_subtrans/0000\n\nSo, 000000010000000000000003 is not removed on the primary.\n\nIt has been recycled on:\n\n2023-05-05 08:23:38.605 UTC [16833][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000003\"\n\nWhich is later than the test:\n\n[08:23:31.931](0.000s) not ok 19 - invalidated logical slots do not lead to retaining WAL\n\nFWIW, the failing test with DEBUG2 can be found there: https://cirrus-ci.com/task/5615316688961536\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 May 2023 12:32:33 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, May 5, 2023 at 4:02 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 5/5/23 11:29 AM, Amit Kapila wrote:\n> > On Fri, May 5, 2023 at 1:16 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n> >>\n> >>\n> >> After multiple attempts, I got one failing one.\n> >>\n> >> Issue is that we expect this file to be removed:\n> >>\n> >> [07:24:27.261](0.899s) #WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n> >>\n> >> But the standby emits:\n> >>\n> >> 2023-05-05 07:24:27.216 UTC [17909][client backend] [035_standby_logical_decoding.pl][3/6:0] LOG: statement: checkpoint;\n> >> 2023-05-05 07:24:27.216 UTC [17745][checkpointer] LOG: restartpoint starting: immediate wait\n> >> 2023-05-05 07:24:27.259 UTC [17745][checkpointer] LOG: attempting to remove WAL segments older than log file 000000000000000000000002\n> >>\n> >> So it seems the test is not right (missing activity??), not sure why yet.\n> >>\n> >\n> > Can you try to print the value returned by\n> > XLogGetReplicationSlotMinimumLSN() in KeepLogSeg() on standby? Also,\n> > please try to print \"attempting to remove WAL segments ...\" on the\n> > primary. We can see, if by any chance some slot is holding us to\n> > remove the required WAL file.\n> >\n>\n> I turned DEBUG2 on. We can also see on the primary:\n>\n> 2023-05-05 08:23:30.843 UTC [16833][checkpointer] LOCATION: CheckPointReplicationSlots, slot.c:1576\n> 2023-05-05 08:23:30.844 UTC [16833][checkpointer] DEBUG: 00000: snapshot of 0+0 running transaction ids (lsn 0/40000D0 oldest xid 746 latest complete 745 next xid 746)\n> 2023-05-05 08:23:30.844 UTC [16833][checkpointer] LOCATION: LogCurrentRunningXacts, standby.c:1377\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOG: 00000: BDT1 about to call RemoveOldXlogFiles in CreateCheckPoint\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: CreateCheckPoint, xlog.c:6835\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOG: 00000: attempting to remove WAL segments older than log file 000000000000000000000002\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: RemoveOldXlogFiles, xlog.c:3560\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000001\"\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: RemoveXlogFile, xlog.c:3708\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000002\"\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] LOCATION: RemoveXlogFile, xlog.c:3708\n> 2023-05-05 08:23:30.845 UTC [16833][checkpointer] DEBUG: 00000: SlruScanDirectory invoking callback on pg_subtrans/0000\n>\n> So, 000000010000000000000003 is not removed on the primary.\n>\n\nHow did you concluded that 000000010000000000000003 is the file the\ntest is expecting to be removed?\n\n\n--\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 May 2023 16:28:14 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "\n\nOn 5/5/23 12:58 PM, Amit Kapila wrote:\n> On Fri, May 5, 2023 at 4:02 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n\n> How did you concluded that 000000010000000000000003 is the file the\n> test is expecting to be removed?\n> \nbecause I added a note in the test that way:\n\n\"\n@@ -535,6 +539,7 @@ $node_standby->safe_psql('postgres', 'checkpoint;');\n\n # Verify that the WAL file has not been retained on the standby\n my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n+note \"BDT WAL file is $standby_walfile\";\n ok(!-f \"$standby_walfile\",\n \"invalidated logical slots do not lead to retaining WAL\");\n\"\n\nso that I can check in the test log file:\n\ngrep \"WAL file is\" ./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n[08:23:31.931](2.217s) # BDT WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 May 2023 14:06:41 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, May 5, 2023 at 5:36 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 5/5/23 12:58 PM, Amit Kapila wrote:\n> > On Fri, May 5, 2023 at 4:02 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n>\n> > How did you concluded that 000000010000000000000003 is the file the\n> > test is expecting to be removed?\n> >\n> because I added a note in the test that way:\n>\n> \"\n> @@ -535,6 +539,7 @@ $node_standby->safe_psql('postgres', 'checkpoint;');\n>\n> # Verify that the WAL file has not been retained on the standby\n> my $standby_walfile = $node_standby->data_dir . '/pg_wal/' . $walfile_name;\n> +note \"BDT WAL file is $standby_walfile\";\n> ok(!-f \"$standby_walfile\",\n> \"invalidated logical slots do not lead to retaining WAL\");\n> \"\n>\n> so that I can check in the test log file:\n>\n> grep \"WAL file is\" ./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n> [08:23:31.931](2.217s) # BDT WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n>\n\nIt seems due to some reason the current wal file is not switched due\nto some reason. I think we need to add more DEBUG info to find that\nout. Can you please try to print 'RedoRecPtr', '_logSegNo', and\nrecptr?\n\n/*\n* Delete old log files, those no longer needed for last checkpoint to\n* prevent the disk holding the xlog from growing full.\n*/\nXLByteToSeg(RedoRecPtr, _logSegNo, wal_segment_size);\nKeepLogSeg(recptr, &_logSegNo);\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 5 May 2023 17:58:18 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "\n\nOn 5/5/23 2:28 PM, Amit Kapila wrote:\n> On Fri, May 5, 2023 at 5:36 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n> \n> It seems due to some reason the current wal file is not switched due\n> to some reason.\n\nOh wait, here is a NON failing one: https://cirrus-ci.com/task/5086849685782528 (I modified the\n.cirrus.yml so that we can download the \"testrun.zip\" file even if the test is not failing).\n\nSo, in this testrun.zip we can see, that the test is ok:\n\n$ grep -i retain ./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n[10:06:08.789](0.000s) ok 19 - invalidated logical slots do not lead to retaining WAL\n\nand that the WAL file we expect to be removed is:\n\n$ grep \"WAL file is\" ./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n[10:06:08.789](0.925s) # BDT WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n\nThis WAL file has been removed by the standby:\n\n$ grep -i 000000010000000000000003 ./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log | grep -i recy\n2023-05-05 10:06:08.787 UTC [17521][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000003\"\n\nBut on the primary, it has been recycled way after that time:\n\n$ grep -i 000000010000000000000003 ./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log | grep -i recy\n2023-05-05 10:06:13.370 UTC [16785][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000003\"\n\nAs, the checkpoint on the primary after the WAL file switch only recycled (001 and 002):\n\n$ grep -i recycled ./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log\n2023-05-05 10:05:57.196 UTC [16785][checkpointer] LOG: 00000: checkpoint complete: wrote 4 buffers (3.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.027 s; sync files=0, longest=0.000 s, average=0.000 s; distance=11219 kB, estimate=11219 kB; lsn=0/2000060, redo lsn=0/2000028\n2023-05-05 10:06:08.138 UTC [16785][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000001\"\n2023-05-05 10:06:08.138 UTC [16785][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000002\"\n2023-05-05 10:06:08.138 UTC [16785][checkpointer] LOG: 00000: checkpoint complete: wrote 20 buffers (15.6%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.001 s, sync=0.001 s, total=0.003 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32768 kB, estimate=32768 kB; lsn=0/40000D0, redo lsn=0/4000098\n\n\nSo, even on a successful test, we can see that the WAL file we expect to be removed on the standby has not been recycled on the primary before the test.\n\n> I think we need to add more DEBUG info to find that\n> out. Can you please try to print 'RedoRecPtr', '_logSegNo', and\n> recptr?\n>\n\nYes, will do.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 5 May 2023 16:23:31 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Fri, May 5, 2023 at 7:53 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n>\n> On 5/5/23 2:28 PM, Amit Kapila wrote:\n> > On Fri, May 5, 2023 at 5:36 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n> >\n> > It seems due to some reason the current wal file is not switched due\n> > to some reason.\n>\n> Oh wait, here is a NON failing one: https://cirrus-ci.com/task/5086849685782528 (I modified the\n> .cirrus.yml so that we can download the \"testrun.zip\" file even if the test is not failing).\n>\n> So, in this testrun.zip we can see, that the test is ok:\n>\n> $ grep -i retain ./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n> [10:06:08.789](0.000s) ok 19 - invalidated logical slots do not lead to retaining WAL\n>\n> and that the WAL file we expect to be removed is:\n>\n> $ grep \"WAL file is\" ./build/testrun/recovery/035_standby_logical_decoding/log/regress_log_035_standby_logical_decoding\n> [10:06:08.789](0.925s) # BDT WAL file is /Users/admin/pgsql/build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata/pg_wal/000000010000000000000003\n>\n> This WAL file has been removed by the standby:\n>\n> $ grep -i 000000010000000000000003 ./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log | grep -i recy\n> 2023-05-05 10:06:08.787 UTC [17521][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000003\"\n>\n> But on the primary, it has been recycled way after that time:\n>\n> $ grep -i 000000010000000000000003 ./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log | grep -i recy\n> 2023-05-05 10:06:13.370 UTC [16785][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000003\"\n>\n> As, the checkpoint on the primary after the WAL file switch only recycled (001 and 002):\n>\n> $ grep -i recycled ./build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_primary.log\n> 2023-05-05 10:05:57.196 UTC [16785][checkpointer] LOG: 00000: checkpoint complete: wrote 4 buffers (3.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.027 s; sync files=0, longest=0.000 s, average=0.000 s; distance=11219 kB, estimate=11219 kB; lsn=0/2000060, redo lsn=0/2000028\n> 2023-05-05 10:06:08.138 UTC [16785][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000001\"\n> 2023-05-05 10:06:08.138 UTC [16785][checkpointer] DEBUG: 00000: recycled write-ahead log file \"000000010000000000000002\"\n> 2023-05-05 10:06:08.138 UTC [16785][checkpointer] LOG: 00000: checkpoint complete: wrote 20 buffers (15.6%); 0 WAL file(s) added, 0 removed, 2 recycled; write=0.001 s, sync=0.001 s, total=0.003 s; sync files=0, longest=0.000 s, average=0.000 s; distance=32768 kB, estimate=32768 kB; lsn=0/40000D0, redo lsn=0/4000098\n>\n>\n> So, even on a successful test, we can see that the WAL file we expect to be removed on the standby has not been recycled on the primary before the test.\n>\n\nOkay, one possibility of not removing on primary is that at the time\nof checkpoint (when we compute RedoRecPtr), the wal_swtich and insert\nis not yet performed because in that case it will compute the\nRedoRecPtr as a location before those operations which would be 0000*3\nfile. 
However, it is not clear how that would be possible other than via a\nbackground checkpoint happening at that point; yet, from the LOGs, it\nappears that the checkpoint triggered by the test did recycle the WAL\nfiles.\n\n> > I think we need to add more DEBUG info to find that\n> > out. Can you please try to print 'RedoRecPtr', '_logSegNo', and\n> > recptr?\n> >\n>\n> Yes, will do.\n>\n\nOkay, thanks, please also try to print similar locations on the standby in\nCreateRestartPoint().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 6 May 2023 07:40:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/6/23 4:10 AM, Amit Kapila wrote:\n> On Fri, May 5, 2023 at 7:53 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> On 5/5/23 2:28 PM, Amit Kapila wrote:\n>>> On Fri, May 5, 2023 at 5:36 PM Drouvot, Bertrand\n>>\n>> So, even on a successful test, we can see that the WAL file we expect to be removed on the standby has not been recycled on the primary before the test.\n>>\n> \n> Okay, one possibility of not removing on primary is that at the time\n> of checkpoint (when we compute RedoRecPtr), the wal_swtich and insert\n> is not yet performed because in that case it will compute the\n> RedoRecPtr as a location before those operations which would be 0000*3\n> file. However, it is not clear how is that possible except from a\n> background checkpoint happening at that point but from LOGs, it\n> appears that the checkpoint triggered by test has recycled the wal\n> files.\n> \n>>> I think we need to add more DEBUG info to find that\n>>> out. Can you please try to print 'RedoRecPtr', '_logSegNo', and\n>>> recptr?\n>>>\n>>\n>> Yes, will do.\n>>\n> \n> Okay, thanks, please try to print similar locations on standby in\n> CreateRestartPoint().\n> \n\nThe extra information is displayed that way:\n\nhttps://github.com/bdrouvot/postgres/commit/a3d6d58d105b379c04a17a1129bfb709302588ca#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bR6822-R6830\nhttps://github.com/bdrouvot/postgres/commit/a3d6d58d105b379c04a17a1129bfb709302588ca#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bR7269-R7271\nhttps://github.com/bdrouvot/postgres/commit/a3d6d58d105b379c04a17a1129bfb709302588ca#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bR7281-R7284\n\nThere is 2 runs with this extra info in place:\n\nA successful one: https://cirrus-ci.com/task/6528745436086272\nA failed one: https://cirrus-ci.com/task/4558139312308224\n\nFor both the testrun.zip is available in the Artifacts section.\n\nSharing this now in case you want to have a look (I'll have a look at them early next week on my side).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 6 May 2023 10:22:10 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Sat, May 6, 2023 at 1:52 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> There is 2 runs with this extra info in place:\n>\n> A successful one: https://cirrus-ci.com/task/6528745436086272\n> A failed one: https://cirrus-ci.com/task/4558139312308224\n>\n\nThanks, I think I got some clue as to why this test is failing\nrandomly. Following is the comparison of successful and failed run\nlogs for standby:\n\nSuccess case\n============\n2023-05-06 07:23:05.496 UTC [17617][walsender]\n[cascading_standby][3/0:0] DEBUG: 00000: write 0/4000148 flush\n0/4000000 apply 0/4000000 reply_time 2023-05-06 07:23:05.496365+00\n2023-05-06 07:23:05.496 UTC [17617][walsender]\n[cascading_standby][3/0:0] LOCATION: ProcessStandbyReplyMessage,\nwalsender.c:2101\n2023-05-06 07:23:05.496 UTC [17617][walsender]\n[cascading_standby][3/0:0] DEBUG: 00000: write 0/4000148 flush\n0/4000148 apply 0/4000000 reply_time 2023-05-06 07:23:05.4964+00\n2023-05-06 07:23:05.496 UTC [17617][walsender]\n[cascading_standby][3/0:0] LOCATION: ProcessStandbyReplyMessage,\nwalsender.c:2101\n2023-05-06 07:23:05.496 UTC [17617][walsender]\n[cascading_standby][3/0:0] DEBUG: 00000: write 0/4000148 flush\n0/4000148 apply 0/4000148 reply_time 2023-05-06 07:23:05.496531+00\n2023-05-06 07:23:05.496 UTC [17617][walsender]\n[cascading_standby][3/0:0] LOCATION: ProcessStandbyReplyMessage,\nwalsender.c:2101\n2023-05-06 07:23:05.500 UTC [17706][client backend]\n[035_standby_logical_decoding.pl][2/12:0] LOG: 00000: statement:\ncheckpoint;\n2023-05-06 07:23:05.500 UTC [17706][client backend]\n[035_standby_logical_decoding.pl][2/12:0] LOCATION:\nexec_simple_query, postgres.c:1074\n2023-05-06 07:23:05.500 UTC [17550][checkpointer] LOG: 00000:\nrestartpoint starting: immediate wait\n...\n...\n2023-05-06 07:23:05.500 UTC [17550][checkpointer] LOCATION:\nCheckPointReplicationSlots, slot.c:1576\n2023-05-06 07:23:05.501 UTC [17550][checkpointer] DEBUG: 00000:\nupdated min recovery point to 0/4000148 on timeline 1\n2023-05-06 07:23:05.501 UTC [17550][checkpointer] LOCATION:\nUpdateMinRecoveryPoint, xlog.c:2500\n2023-05-06 07:23:05.515 UTC [17550][checkpointer] LOG: 00000:\nCreateRestartPoint: After XLByteToSeg RedoRecPtr is 0/4000098,\n_logSegNo is 4\n2023-05-06 07:23:05.515 UTC [17550][checkpointer] LOCATION:\nCreateRestartPoint, xlog.c:7271\n2023-05-06 07:23:05.515 UTC [17550][checkpointer] LOG: 00000:\nCreateRestartPoint: After KeepLogSeg RedoRecPtr is 0/4000098, endptr\nis 0/4000148, _logSegNo is 4\n\nFailed case:\n==========\n2023-05-06 07:53:19.657 UTC [17914][walsender]\n[cascading_standby][3/0:0] DEBUG: 00000: write 0/3D1A000 flush\n0/3CFA000 apply 0/4000000 reply_time 2023-05-06 07:53:19.65207+00\n2023-05-06 07:53:19.657 UTC [17914][walsender]\n[cascading_standby][3/0:0] LOCATION: ProcessStandbyReplyMessage,\nwalsender.c:2101\n2023-05-06 07:53:19.657 UTC [17914][walsender]\n[cascading_standby][3/0:0] DEBUG: 00000: write 0/3D1A000 flush\n0/3D1A000 apply 0/4000000 reply_time 2023-05-06 07:53:19.656471+00\n2023-05-06 07:53:19.657 UTC [17914][walsender]\n[cascading_standby][3/0:0] LOCATION: ProcessStandbyReplyMessage,\nwalsender.c:2101\n...\n...\n2023-05-06 07:53:19.686 UTC [17881][checkpointer] DEBUG: 00000:\nupdated min recovery point to 0/4000148 on timeline 1\n2023-05-06 07:53:19.686 UTC [17881][checkpointer] LOCATION:\nUpdateMinRecoveryPoint, xlog.c:2500\n2023-05-06 07:53:19.707 UTC [17881][checkpointer] LOG: 00000:\nCreateRestartPoint: After XLByteToSeg RedoRecPtr is 0/4000098,\n_logSegNo is 4\n2023-05-06 07:53:19.707 UTC 
[17881][checkpointer] LOCATION:\nCreateRestartPoint, xlog.c:7271\n2023-05-06 07:53:19.707 UTC [17881][checkpointer] LOG: 00000:\nCreateRestartPoint: After KeepLogSeg RedoRecPtr is 0/4000098, endptr\nis 0/4000148, _logSegNo is 3\n\nObservations:\n============\n1. In the failed run, the KeepLogSeg(), reduced the _logSegNo to 3\nwhich is the reason for the failure because now the standby won't be\nable to remove/recycle the WAL file corresponding to segment number 3\nwhich the test was expecting.\n2. We didn't expect the KeepLogSeg() to reduce the _logSegNo because\nall logical slots were invalidated. However, I think we forgot that\nboth standby and primary have physical slots which might also\ninfluence the XLogGetReplicationSlotMinimumLSN() calculation in\nKeepLogSeg().\n3. Now, the reason for its success in some of the runs is that\nrestart_lsn of physical slots also moved ahead by the time checkpoint\nhappens. You can see the difference of LSNs for\nProcessStandbyReplyMessage in failed and successful cases.\n\nNext steps:\n=========\n1. The first thing is we should verify this theory by adding some LOG\nin KeepLogSeg() to see if the _logSegNo is reduced due to the value\nreturned by XLogGetReplicationSlotMinimumLSN().\n2. The reason for the required file not being removed in the primary\nis also that it has a physical slot which prevents the file removal.\n3. If the above theory is correct then I see a few possibilities to\nfix this test (a) somehow ensure that restart_lsn of the physical slot\non standby is advanced up to the point where we can safely remove the\nrequired files; (b) just create a separate test case by initializing a\nfresh node for primary and standby where we only have logical slots on\nstandby. This will be a bit costly but probably less risky. (c) any\nbetter ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 6 May 2023 18:58:42 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
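[Editor's note] One way to confirm observation 2 directly from the test environment, without touching the C code, is to dump every slot's restart_lsn on the standby and see which segment each one still pins. This is only a diagnostic sketch (the LSN-to-name mapping is done on the primary because pg_walfile_name() is not allowed during recovery):

my $slots = $node_standby->safe_psql('postgres',
    "SELECT slot_name, slot_type, restart_lsn
     FROM pg_replication_slots ORDER BY restart_lsn");
foreach my $row (split /\n/, $slots)
{
    my ($name, $type, $lsn) = split /\|/, $row;
    next unless $lsn;    # skip slots with no restart_lsn
    my $seg = $node_primary->safe_psql('postgres',
        "SELECT pg_walfile_name('$lsn')");
    note "slot $name ($type) still pins $seg";
}

If the standby's physical slot reports a restart_lsn that still maps into 000000010000000000000003, that is exactly what makes KeepLogSeg() step _logSegNo back from 4 to 3.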
{
"msg_contents": "Hi,\n\nOn 5/6/23 3:28 PM, Amit Kapila wrote:\n> On Sat, May 6, 2023 at 1:52 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> There is 2 runs with this extra info in place:\n>>\n>> A successful one: https://cirrus-ci.com/task/6528745436086272\n>> A failed one: https://cirrus-ci.com/task/4558139312308224\n>>\n> \n> Thanks, I think I got some clue as to why this test is failing\n> randomly. \n\nGreat, thanks!\n\n> Observations:\n> ============\n> 1. In the failed run, the KeepLogSeg(), reduced the _logSegNo to 3\n> which is the reason for the failure because now the standby won't be\n> able to remove/recycle the WAL file corresponding to segment number 3\n> which the test was expecting.\n\nAgree.\n\n> 2. We didn't expect the KeepLogSeg() to reduce the _logSegNo because\n> all logical slots were invalidated. However, I think we forgot that\n> both standby and primary have physical slots which might also\n> influence the XLogGetReplicationSlotMinimumLSN() calculation in\n> KeepLogSeg().\n\nOh right...\n\n> Next steps:\n> =========\n> 1. The first thing is we should verify this theory by adding some LOG\n> in KeepLogSeg() to see if the _logSegNo is reduced due to the value\n> returned by XLogGetReplicationSlotMinimumLSN().\n\nYeah, will do that early next week.\n\n> 2. The reason for the required file not being removed in the primary\n> is also that it has a physical slot which prevents the file removal.\n\nYeah, agree. But this one is not an issue as we are not\nchecking for the WAL file removal on the primary, do you agree?\n\n> 3. If the above theory is correct then I see a few possibilities to\n> fix this test (a) somehow ensure that restart_lsn of the physical slot\n> on standby is advanced up to the point where we can safely remove the\n> required files; (b) just create a separate test case by initializing a\n> fresh node for primary and standby where we only have logical slots on\n> standby. This will be a bit costly but probably less risky. (c) any\n> better ideas?\n> \n\n(c): Since, I think, the physical slot on the primary is not a concern for\nthe reason mentioned above, then instead of (b):\n\nWhat about postponing the physical slot creation on the standby and the\ncascading standby node initialization after this test?\n\nThat way, this test would be done without a physical slot on the standby and\nwe could also get rid of the \"Wait for the cascading standby to catchup before\nremoving the WAL file(s)\" part.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 6 May 2023 18:02:10 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Sat, May 6, 2023 at 9:33 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 5/6/23 3:28 PM, Amit Kapila wrote:\n> > On Sat, May 6, 2023 at 1:52 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n>\n> > Next steps:\n> > =========\n> > 1. The first thing is we should verify this theory by adding some LOG\n> > in KeepLogSeg() to see if the _logSegNo is reduced due to the value\n> > returned by XLogGetReplicationSlotMinimumLSN().\n>\n> Yeah, will do that early next week.\n>\n> > 2. The reason for the required file not being removed in the primary\n> > is also that it has a physical slot which prevents the file removal.\n>\n> Yeah, agree. But this one is not an issue as we are not\n> checking for the WAL file removal on the primary, do you agree?\n>\n\nAgreed.\n\n> > 3. If the above theory is correct then I see a few possibilities to\n> > fix this test (a) somehow ensure that restart_lsn of the physical slot\n> > on standby is advanced up to the point where we can safely remove the\n> > required files; (b) just create a separate test case by initializing a\n> > fresh node for primary and standby where we only have logical slots on\n> > standby. This will be a bit costly but probably less risky. (c) any\n> > better ideas?\n> >\n>\n> (c): Since, I think, the physical slot on the primary is not a concern for\n> the reason mentioned above, then instead of (b):\n>\n> What about postponing the physical slot creation on the standby and the\n> cascading standby node initialization after this test?\n>\n\nYeah, that is also possible. But, I have a few questions regarding\nthat: (a) There doesn't seem to be a physical slot on cascading\nstandby, if I am missing something, can you please point me to the\nrelevant part of the test? (b) Which test is currently dependent on\nthe physical slot on standby?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 8 May 2023 08:12:12 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/8/23 4:42 AM, Amit Kapila wrote:\n> On Sat, May 6, 2023 at 9:33 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n>>\n>> On 5/6/23 3:28 PM, Amit Kapila wrote:\n>>> On Sat, May 6, 2023 at 1:52 PM Drouvot, Bertrand\n>>> <[email protected]> wrote:\n>>\n>>> Next steps:\n>>> =========\n>>> 1. The first thing is we should verify this theory by adding some LOG\n>>> in KeepLogSeg() to see if the _logSegNo is reduced due to the value\n>>> returned by XLogGetReplicationSlotMinimumLSN().\n>>\n>> Yeah, will do that early next week.\n\nIt's done with the following changes:\n\nhttps://github.com/bdrouvot/postgres/commit/79e1bd9ab429a22f876b9364eb8a0da2dacaaef7#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bL7454-R7514\n\nWith that in place, there is one failing test here: https://cirrus-ci.com/task/5173216310722560\n\nWhere we can see:\n\n2023-05-08 07:42:56.301 UTC [18038][checkpointer] LOCATION: UpdateMinRecoveryPoint, xlog.c:2500\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: CreateRestartPoint: After XLByteToSeg RedoRecPtr is 0/4000098, _logSegNo is 4\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: CreateRestartPoint, xlog.c:7271\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: KeepLogSeg: segno changed to 4 due to XLByteToSeg\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: KeepLogSeg, xlog.c:7473\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: KeepLogSeg: segno changed to 3 due to XLogGetReplicationSlotMinimumLSN()\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: KeepLogSeg, xlog.c:7483\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: CreateRestartPoint: After KeepLogSeg RedoRecPtr is 0/4000098, endptr is 0/4000148, _logSegNo is 3\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: CreateRestartPoint, xlog.c:7284\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: BDT1 about to call RemoveOldXlogFiles in CreateRestartPoint\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: CreateRestartPoint, xlog.c:7313\n2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: attempting to remove WAL segments older than log file 000000000000000000000002\n\nSo the suspicion about XLogGetReplicationSlotMinimumLSN() was correct (_logSegNo moved from\n4 to 3 due to XLogGetReplicationSlotMinimumLSN()).\n\n>> What about postponing the physical slot creation on the standby and the\n>> cascading standby node initialization after this test?\n>>\n> \n> Yeah, that is also possible. But, I have a few questions regarding\n> that: (a) There doesn't seem to be a physical slot on cascading\n> standby, if I am missing something, can you please point me to the\n> relevant part of the test?\n\nThat's right. 
There is a physical slot only on the primary and on the standby.\n\nWhat I meant up-thread is to also postpone the cascading standby node initialization\nafter this test (once the physical slot on the standby is created).\n\nPlease find attached a proposal doing so.\n\n> (b) Which test is currently dependent on\n> the physical slot on standby?\n\nNot a test but the cascading standby initialization with the \"primary_slot_name\" parameter.\n\nAlso, now I think that's better to have the physical slot on the standby + hsf set to on on the\ncascading standby (coming from the standby backup).\n\nIdea is to avoid any risk of logical slot invalidation on the cascading standby in the\nstandby promotion test.\n\nThat was not the case before the attached proposal though (hsf was off on the cascading standby).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 8 May 2023 10:14:37 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Mon, May 8, 2023 at 1:45 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 5/8/23 4:42 AM, Amit Kapila wrote:\n> > On Sat, May 6, 2023 at 9:33 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n> >>\n> >> On 5/6/23 3:28 PM, Amit Kapila wrote:\n> >>> On Sat, May 6, 2023 at 1:52 PM Drouvot, Bertrand\n> >>> <[email protected]> wrote:\n> >>\n> >>> Next steps:\n> >>> =========\n> >>> 1. The first thing is we should verify this theory by adding some LOG\n> >>> in KeepLogSeg() to see if the _logSegNo is reduced due to the value\n> >>> returned by XLogGetReplicationSlotMinimumLSN().\n> >>\n> >> Yeah, will do that early next week.\n>\n> It's done with the following changes:\n>\n> https://github.com/bdrouvot/postgres/commit/79e1bd9ab429a22f876b9364eb8a0da2dacaaef7#diff-c1cb3ab2a19606390c1a7ed00ffe5a45531702ca5faf999d401c548f8951c65bL7454-R7514\n>\n> With that in place, there is one failing test here: https://cirrus-ci.com/task/5173216310722560\n>\n> Where we can see:\n>\n> 2023-05-08 07:42:56.301 UTC [18038][checkpointer] LOCATION: UpdateMinRecoveryPoint, xlog.c:2500\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: CreateRestartPoint: After XLByteToSeg RedoRecPtr is 0/4000098, _logSegNo is 4\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: CreateRestartPoint, xlog.c:7271\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: KeepLogSeg: segno changed to 4 due to XLByteToSeg\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: KeepLogSeg, xlog.c:7473\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: KeepLogSeg: segno changed to 3 due to XLogGetReplicationSlotMinimumLSN()\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: KeepLogSeg, xlog.c:7483\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: CreateRestartPoint: After KeepLogSeg RedoRecPtr is 0/4000098, endptr is 0/4000148, _logSegNo is 3\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: CreateRestartPoint, xlog.c:7284\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: BDT1 about to call RemoveOldXlogFiles in CreateRestartPoint\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOCATION: CreateRestartPoint, xlog.c:7313\n> 2023-05-08 07:42:56.302 UTC [18038][checkpointer] LOG: 00000: attempting to remove WAL segments older than log file 000000000000000000000002\n>\n> So the suspicion about XLogGetReplicationSlotMinimumLSN() was correct (_logSegNo moved from\n> 4 to 3 due to XLogGetReplicationSlotMinimumLSN()).\n>\n> >> What about postponing the physical slot creation on the standby and the\n> >> cascading standby node initialization after this test?\n> >>\n> >\n> > Yeah, that is also possible. But, I have a few questions regarding\n> > that: (a) There doesn't seem to be a physical slot on cascading\n> > standby, if I am missing something, can you please point me to the\n> > relevant part of the test?\n>\n> That's right. 
There is a physical slot only on the primary and on the standby.\n>\n> What I meant up-thread is to also postpone the cascading standby node initialization\n> after this test (once the physical slot on the standby is created).\n>\n> Please find attached a proposal doing so.\n>\n> > (b) Which test is currently dependent on\n> > the physical slot on standby?\n>\n> Not a test but the cascading standby initialization with the \"primary_slot_name\" parameter.\n>\n> Also, now I think that's better to have the physical slot on the standby + hsf set to on on the\n> cascading standby (coming from the standby backup).\n>\n> Idea is to avoid any risk of logical slot invalidation on the cascading standby in the\n> standby promotion test.\n>\n\nWhy not initialize the cascading standby node just before the standby\npromotion test: \"Test standby promotion and logical decoding behavior\nafter the standby gets promoted.\"? That way we will avoid any unknown\nside-effects of cascading standby and it will anyway look more logical\nto initialize it where the test needs it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 May 2023 11:32:09 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "Hi,\n\nOn 5/9/23 8:02 AM, Amit Kapila wrote:\n> On Mon, May 8, 2023 at 1:45 PM Drouvot, Bertrand\n> <[email protected]> wrote:\n\n> \n> Why not initialize the cascading standby node just before the standby\n> promotion test: \"Test standby promotion and logical decoding behavior\n> after the standby gets promoted.\"? That way we will avoid any unknown\n> side-effects of cascading standby and it will anyway look more logical\n> to initialize it where the test needs it.\n> \n\nYeah, that's even better. Moved the physical slot creation on the standby\nand the cascading standby initialization where \"strictly\" needed in V2\nattached.\n\nAlso ensuring that hsf is set to on on the cascading standby to be on the\nsafe side of thing.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 9 May 2023 09:14:09 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "On Tue, May 9, 2023 at 12:44 PM Drouvot, Bertrand\n<[email protected]> wrote:\n>\n> On 5/9/23 8:02 AM, Amit Kapila wrote:\n> > On Mon, May 8, 2023 at 1:45 PM Drouvot, Bertrand\n> > <[email protected]> wrote:\n>\n> >\n> > Why not initialize the cascading standby node just before the standby\n> > promotion test: \"Test standby promotion and logical decoding behavior\n> > after the standby gets promoted.\"? That way we will avoid any unknown\n> > side-effects of cascading standby and it will anyway look more logical\n> > to initialize it where the test needs it.\n> >\n>\n> Yeah, that's even better. Moved the physical slot creation on the standby\n> and the cascading standby initialization where \"strictly\" needed in V2\n> attached.\n>\n> Also ensuring that hsf is set to on on the cascading standby to be on the\n> safe side of thing.\n>\n\nPushed this yesterday.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 May 2023 16:11:37 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
},
{
"msg_contents": "\n\nOn 5/10/23 12:41 PM, Amit Kapila wrote:\n> On Tue, May 9, 2023 at 12:44 PM Drouvot, Bertrand\n\n> \n> Pushed this yesterday.\n> \n\nThanks!\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 11 May 2023 09:09:53 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Add two missing tests in 035_standby_logical_decoding.pl"
}
] |
[
{
"msg_contents": "Pursuant to the discussion at [1], here's a patch that removes our\nold restriction that a plan node having initPlans can't be marked\nparallel-safe (dating to commit ab77a5a45). That was really a special\ncase of the fact that we couldn't transmit subplans to parallel\nworkers at all. We fixed that in commit 5e6d8d2bb and follow-ons,\nbut this case never got addressed.\n\nAlong the way, this also takes care of some sloppiness about updating\npath costs to match when we move initplans from one place to another\nduring createplan.c and setrefs.c. Since all the planning decisions are\nalready made by that point, this is just cosmetic; but it seems good\nto keep EXPLAIN output consistent with where the initplans are.\n\nThe diff in query_planner() might be worth remarking on. I found\nthat one because after fixing things to allow parallel-safe initplans,\none partition_prune test case changed plans (as shown in the patch)\n--- but only when debug_parallel_query was active. The reason\nproved to be that we only bothered to mark Result nodes as potentially\nparallel-safe when debug_parallel_query is on. This neglects the\nfact that parallel-safety may be of interest for a sub-query even\nthough the Result itself doesn't parallelize.\n\nThere's only one existing test case that visibly changes plan with\nthese changes. The new plan is clearly saner-looking than before,\nand testing with some data loaded into the table confirms that it\nis faster. I'm not sure if it's worth devising more test cases.\n\nI'll park this in the July commitfest.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/ZDVt6MaNWkRDO1LQ%40telsasoft.com",
"msg_date": "Wed, 12 Apr 2023 12:43:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Allowing parallel-safe initplans"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 12:44 PM Tom Lane <[email protected]> wrote:\n> Pursuant to the discussion at [1], here's a patch that removes our\n> old restriction that a plan node having initPlans can't be marked\n> parallel-safe (dating to commit ab77a5a45). That was really a special\n> case of the fact that we couldn't transmit subplans to parallel\n> workers at all. We fixed that in commit 5e6d8d2bb and follow-ons,\n> but this case never got addressed.\n\nNice.\n\n> Along the way, this also takes care of some sloppiness about updating\n> path costs to match when we move initplans from one place to another\n> during createplan.c and setrefs.c. Since all the planning decisions are\n> already made by that point, this is just cosmetic; but it seems good\n> to keep EXPLAIN output consistent with where the initplans are.\n\nOK. It would be nicer if we had a more principled approach here, but\nthat's a job for another day.\n\n> There's only one existing test case that visibly changes plan with\n> these changes. The new plan is clearly saner-looking than before,\n> and testing with some data loaded into the table confirms that it\n> is faster. I'm not sure if it's worth devising more test cases.\n\nIt seems like it would be nice to see one or two additional scenarios\nwhere these changes bring a benefit, with different kinds of plan\nshapes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Apr 2023 14:06:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 12:43 AM Tom Lane <[email protected]> wrote:\n\n> Pursuant to the discussion at [1], here's a patch that removes our\n> old restriction that a plan node having initPlans can't be marked\n> parallel-safe (dating to commit ab77a5a45). That was really a special\n> case of the fact that we couldn't transmit subplans to parallel\n> workers at all. We fixed that in commit 5e6d8d2bb and follow-ons,\n> but this case never got addressed.\n\n\nThe patch looks good to me. Some comments from me:\n\n* For the diff in standard_planner, I was wondering why not move the\ninitPlans up to the Gather node, just as we did before. So I tried that\nway but did not notice the breakage of regression tests as stated in the\ncomments. Would you please confirm that?\n\n* Not related to this patch. In SS_make_initplan_from_plan, the comment\nsays that the node's parParam and args lists remain empty. I wonder if\nwe need to explicitly set node->parParam and node->args to NIL before\nthat comment, or can we depend on makeNode to initialize them to NIL?\n\n\n> There's only one existing test case that visibly changes plan with\n> these changes. The new plan is clearly saner-looking than before,\n> and testing with some data loaded into the table confirms that it\n> is faster. I'm not sure if it's worth devising more test cases.\n\n\nI also think it's better to have more test cases covering this change.\n\nThanks\nRichard\n\nOn Thu, Apr 13, 2023 at 12:43 AM Tom Lane <[email protected]> wrote:Pursuant to the discussion at [1], here's a patch that removes our\nold restriction that a plan node having initPlans can't be marked\nparallel-safe (dating to commit ab77a5a45). That was really a special\ncase of the fact that we couldn't transmit subplans to parallel\nworkers at all. We fixed that in commit 5e6d8d2bb and follow-ons,\nbut this case never got addressed.The patch looks good to me. Some comments from me:* For the diff in standard_planner, I was wondering why not move theinitPlans up to the Gather node, just as we did before. So I tried thatway but did not notice the breakage of regression tests as stated in thecomments. Would you please confirm that?* Not related to this patch. In SS_make_initplan_from_plan, the commentsays that the node's parParam and args lists remain empty. I wonder ifwe need to explicitly set node->parParam and node->args to NIL beforethat comment, or can we depend on makeNode to initialize them to NIL? \nThere's only one existing test case that visibly changes plan with\nthese changes. The new plan is clearly saner-looking than before,\nand testing with some data loaded into the table confirms that it\nis faster. I'm not sure if it's worth devising more test cases.I also think it's better to have more test cases covering this change.ThanksRichard",
"msg_date": "Thu, 13 Apr 2023 16:23:06 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> * For the diff in standard_planner, I was wondering why not move the\n> initPlans up to the Gather node, just as we did before. So I tried that\n> way but did not notice the breakage of regression tests as stated in the\n> comments. Would you please confirm that?\n\nTry it with debug_parallel_query = regress.\n\n> * Not related to this patch. In SS_make_initplan_from_plan, the comment\n> says that the node's parParam and args lists remain empty. I wonder if\n> we need to explicitly set node->parParam and node->args to NIL before\n> that comment, or can we depend on makeNode to initialize them to NIL?\n\nI'm generally a fan of explicitly initializing fields, but the basic\nargument for that is greppability. That comment serves the purpose,\nso I don't feel a big need to change it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:00:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 10:00 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > * For the diff in standard_planner, I was wondering why not move the\n> > initPlans up to the Gather node, just as we did before. So I tried that\n> > way but did not notice the breakage of regression tests as stated in the\n> > comments. Would you please confirm that?\n>\n> Try it with debug_parallel_query = regress.\n\n\nAh, I see. With DEBUG_PARALLEL_REGRESS the initPlans that move to the\nGather would become invisible along with the Gather node.\n\nAs I tried this, I found that the breakage caused by moving the\ninitPlans to the Gather node might be more than just being cosmetic.\nSometimes it may cause wrong results. As an example, consider\n\ncreate table a (i int, j int);\ninsert into a values (1, 1);\ncreate index on a(i, j);\n\nset enable_seqscan to off;\nset debug_parallel_query to on;\n\n# select min(i) from a;\n min\n-----\n 0\n(1 row)\n\nAs we can see, the result is not correct. And the plan looks like\n\n# explain (verbose, costs off) select min(i) from a;\n QUERY PLAN\n-----------------------------------------------------------\n Gather\n Output: ($0)\n Workers Planned: 1\n Single Copy: true\n InitPlan 1 (returns $0)\n -> Limit\n Output: a.i\n -> Index Only Scan using a_i_j_idx on public.a\n Output: a.i\n Index Cond: (a.i IS NOT NULL)\n -> Result\n Output: $0\n(12 rows)\n\nThe initPlan has been moved from the Result node to the Gather node. As\na result, when doing tuple projection for the Result node, we'd get a\nParamExecData entry with NULL execPlan. So the initPlan does not get\nchance to be executed. And we'd get the output as the default value\nfrom the ParamExecData entry, which is zero as shown.\n\nSo now I begin to wonder if this wrong result issue is possible to exist\nin other places where we move initPlans. But I haven't tried hard to\nverify that.\n\nThanks\nRichard\n\nOn Thu, Apr 13, 2023 at 10:00 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> * For the diff in standard_planner, I was wondering why not move the\n> initPlans up to the Gather node, just as we did before. So I tried that\n> way but did not notice the breakage of regression tests as stated in the\n> comments. Would you please confirm that?\n\nTry it with debug_parallel_query = regress.Ah, I see. With DEBUG_PARALLEL_REGRESS the initPlans that move to theGather would become invisible along with the Gather node.As I tried this, I found that the breakage caused by moving theinitPlans to the Gather node might be more than just being cosmetic.Sometimes it may cause wrong results. As an example, considercreate table a (i int, j int);insert into a values (1, 1);create index on a(i, j);set enable_seqscan to off;set debug_parallel_query to on;# select min(i) from a; min----- 0(1 row)As we can see, the result is not correct. And the plan looks like# explain (verbose, costs off) select min(i) from a; QUERY PLAN----------------------------------------------------------- Gather Output: ($0) Workers Planned: 1 Single Copy: true InitPlan 1 (returns $0) -> Limit Output: a.i -> Index Only Scan using a_i_j_idx on public.a Output: a.i Index Cond: (a.i IS NOT NULL) -> Result Output: $0(12 rows)The initPlan has been moved from the Result node to the Gather node. Asa result, when doing tuple projection for the Result node, we'd get aParamExecData entry with NULL execPlan. So the initPlan does not getchance to be executed. 
And we'd get the output as the default valuefrom the ParamExecData entry, which is zero as shown.So now I begin to wonder if this wrong result issue is possible to existin other places where we move initPlans. But I haven't tried hard toverify that.ThanksRichard",
"msg_date": "Mon, 17 Apr 2023 10:57:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 10:57 AM Richard Guo <[email protected]> wrote:\n\n> The initPlan has been moved from the Result node to the Gather node. As\n> a result, when doing tuple projection for the Result node, we'd get a\n> ParamExecData entry with NULL execPlan. So the initPlan does not get\n> chance to be executed. And we'd get the output as the default value\n> from the ParamExecData entry, which is zero as shown.\n>\n> So now I begin to wonder if this wrong result issue is possible to exist\n> in other places where we move initPlans. But I haven't tried hard to\n> verify that.\n>\n\nI looked further into this issue and I believe other places are good.\nThe problem with this query is that the es/ecxt_param_exec_vals used to\nstore info about the initplan is not the same one as in the Result\nnode's expression context for projection, because we've forked a new\nprocess for the parallel worker and then created and initialized a new\nEState node, and allocated a new es_param_exec_vals array for the new\nEState. When doing projection for the Result node, the current code\njust goes ahead and accesses the new es_param_exec_vals, thus fails to\nretrieve the info about the initplan. Hmm, I doubt this is sensible.\n\nSo now it seems that the breakage of regression tests is more severe\nthan being cosmetic. I wonder if we need to update the comments to\nindicate the potential wrong results issue if we move the initPlans to\nthe Gather node.\n\nThanks\nRichard\n\nOn Mon, Apr 17, 2023 at 10:57 AM Richard Guo <[email protected]> wrote:The initPlan has been moved from the Result node to the Gather node. Asa result, when doing tuple projection for the Result node, we'd get aParamExecData entry with NULL execPlan. So the initPlan does not getchance to be executed. And we'd get the output as the default valuefrom the ParamExecData entry, which is zero as shown.So now I begin to wonder if this wrong result issue is possible to existin other places where we move initPlans. But I haven't tried hard toverify that.I looked further into this issue and I believe other places are good.The problem with this query is that the es/ecxt_param_exec_vals used tostore info about the initplan is not the same one as in the Resultnode's expression context for projection, because we've forked a newprocess for the parallel worker and then created and initialized a newEState node, and allocated a new es_param_exec_vals array for the newEState. When doing projection for the Result node, the current codejust goes ahead and accesses the new es_param_exec_vals, thus fails toretrieve the info about the initplan. Hmm, I doubt this is sensible.So now it seems that the breakage of regression tests is more severethan being cosmetic. I wonder if we need to update the comments toindicate the potential wrong results issue if we move the initPlans tothe Gather node.ThanksRichard",
"msg_date": "Mon, 17 Apr 2023 15:26:46 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> So now it seems that the breakage of regression tests is more severe\n> than being cosmetic. I wonder if we need to update the comments to\n> indicate the potential wrong results issue if we move the initPlans to\n> the Gather node.\n\nI wondered about that too, but how come neither of us saw non-cosmetic\nfailures (ie, actual query output changes not just EXPLAIN changes)\nwhen we tried this? Maybe the case is somehow not exercised, but if\nso I'm more worried about adding regression tests than comments.\n\nI think actually that it does work beyond the EXPLAIN weirdness,\nbecause since e89a71fb4 the Gather machinery knows how to transmit\nthe values of Params listed in Gather.initParam to workers, and that\nis filled in setrefs.c in a way that looks like it'd work regardless\nof whether the Gather appeared organically or was stuck on by the\ndebug_parallel_query hackery. I've not tried to verify that\ndirectly though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 11:04:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 11:04 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > So now it seems that the breakage of regression tests is more severe\n> > than being cosmetic. I wonder if we need to update the comments to\n> > indicate the potential wrong results issue if we move the initPlans to\n> > the Gather node.\n>\n> I wondered about that too, but how come neither of us saw non-cosmetic\n> failures (ie, actual query output changes not just EXPLAIN changes)\n> when we tried this? Maybe the case is somehow not exercised, but if\n> so I'm more worried about adding regression tests than comments.\n\n\nSorry I forgot to mention that I did see query output changes after\nmoving the initPlans to the Gather node. First of all let me make sure\nI was doing it in the right way. On the base of the patch, I was using\nthe diff as below\n\n if (debug_parallel_query != DEBUG_PARALLEL_OFF &&\n- top_plan->parallel_safe && top_plan->initPlan == NIL)\n+ top_plan->parallel_safe)\n {\n Gather *gather = makeNode(Gather);\n\n+ gather->plan.initPlan = top_plan->initPlan;\n+ top_plan->initPlan = NIL;\n+\n gather->plan.targetlist = top_plan->targetlist;\n\nAnd then I changed the default value of debug_parallel_query to\nDEBUG_PARALLEL_REGRESS. And then I just ran 'make installcheck' and saw\nthe query output changes.\n\n\n> I think actually that it does work beyond the EXPLAIN weirdness,\n> because since e89a71fb4 the Gather machinery knows how to transmit\n> the values of Params listed in Gather.initParam to workers, and that\n> is filled in setrefs.c in a way that looks like it'd work regardless\n> of whether the Gather appeared organically or was stuck on by the\n> debug_parallel_query hackery. I've not tried to verify that\n> directly though.\n\n\nIt seems that in this case the top_plan does not have any extParam, so\nthe Gather node that is added atop the top_plan does not have a chance\nto get its initParam filled in set_param_references().\n\nThanks\nRichard\n\nOn Mon, Apr 17, 2023 at 11:04 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> So now it seems that the breakage of regression tests is more severe\n> than being cosmetic. I wonder if we need to update the comments to\n> indicate the potential wrong results issue if we move the initPlans to\n> the Gather node.\n\nI wondered about that too, but how come neither of us saw non-cosmetic\nfailures (ie, actual query output changes not just EXPLAIN changes)\nwhen we tried this? Maybe the case is somehow not exercised, but if\nso I'm more worried about adding regression tests than comments.Sorry I forgot to mention that I did see query output changes aftermoving the initPlans to the Gather node. First of all let me make sureI was doing it in the right way. On the base of the patch, I was usingthe diff as below if (debug_parallel_query != DEBUG_PARALLEL_OFF &&- top_plan->parallel_safe && top_plan->initPlan == NIL)+ top_plan->parallel_safe) { Gather *gather = makeNode(Gather);+ gather->plan.initPlan = top_plan->initPlan;+ top_plan->initPlan = NIL;+ gather->plan.targetlist = top_plan->targetlist;And then I changed the default value of debug_parallel_query toDEBUG_PARALLEL_REGRESS. And then I just ran 'make installcheck' and sawthe query output changes. 
\nI think actually that it does work beyond the EXPLAIN weirdness,\nbecause since e89a71fb4 the Gather machinery knows how to transmit\nthe values of Params listed in Gather.initParam to workers, and that\nis filled in setrefs.c in a way that looks like it'd work regardless\nof whether the Gather appeared organically or was stuck on by the\ndebug_parallel_query hackery. I've not tried to verify that\ndirectly though.It seems that in this case the top_plan does not have any extParam, sothe Gather node that is added atop the top_plan does not have a chanceto get its initParam filled in set_param_references().ThanksRichard",
"msg_date": "Tue, 18 Apr 2023 15:14:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> On Mon, Apr 17, 2023 at 11:04 PM Tom Lane <[email protected]> wrote:\n>> I wondered about that too, but how come neither of us saw non-cosmetic\n>> failures (ie, actual query output changes not just EXPLAIN changes)\n>> when we tried this?\n\n> Sorry I forgot to mention that I did see query output changes after\n> moving the initPlans to the Gather node.\n\nHmm, my memory was just of seeing the EXPLAIN output changes, but\nmaybe those got my attention to the extent of missing the others.\n\n> It seems that in this case the top_plan does not have any extParam, so\n> the Gather node that is added atop the top_plan does not have a chance\n> to get its initParam filled in set_param_references().\n\nOh, so maybe we'd need to copy up extParam as well? But it's largely\nmoot, since I don't see a good way to avoid breaking the EXPLAIN\noutput.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 09:33:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 9:33 PM Tom Lane <[email protected]> wrote:\n\n> Richard Guo <[email protected]> writes:\n> > It seems that in this case the top_plan does not have any extParam, so\n> > the Gather node that is added atop the top_plan does not have a chance\n> > to get its initParam filled in set_param_references().\n>\n> Oh, so maybe we'd need to copy up extParam as well? But it's largely\n> moot, since I don't see a good way to avoid breaking the EXPLAIN\n> output.\n\n\nYeah, seems breaking the EXPLAIN output is inevitable if we move the\ninitPlans to the Gather node. So maybe we need to keep the logic as in\nv1 patch, i.e. avoid adding a Gather node when top_plan has initPlans.\nIf we do so, I wonder if we need to explain the potential wrong results\nissue in the comments.\n\nThanks\nRichard\n\nOn Tue, Apr 18, 2023 at 9:33 PM Tom Lane <[email protected]> wrote:Richard Guo <[email protected]> writes:\n> It seems that in this case the top_plan does not have any extParam, so\n> the Gather node that is added atop the top_plan does not have a chance\n> to get its initParam filled in set_param_references().\n\nOh, so maybe we'd need to copy up extParam as well? But it's largely\nmoot, since I don't see a good way to avoid breaking the EXPLAIN\noutput.Yeah, seems breaking the EXPLAIN output is inevitable if we move theinitPlans to the Gather node. So maybe we need to keep the logic as inv1 patch, i.e. avoid adding a Gather node when top_plan has initPlans.If we do so, I wonder if we need to explain the potential wrong resultsissue in the comments.ThanksRichard",
"msg_date": "Wed, 19 Apr 2023 10:42:08 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "I wrote:\n> Richard Guo <[email protected]> writes:\n>> On Mon, Apr 17, 2023 at 11:04 PM Tom Lane <[email protected]> wrote:\n>>> I wondered about that too, but how come neither of us saw non-cosmetic\n>>> failures (ie, actual query output changes not just EXPLAIN changes)\n>>> when we tried this?\n\n>> Sorry I forgot to mention that I did see query output changes after\n>> moving the initPlans to the Gather node.\n\n> Hmm, my memory was just of seeing the EXPLAIN output changes, but\n> maybe those got my attention to the extent of missing the others.\n\nI got around to trying this, and you are right, there are some wrong\nquery answers as well as EXPLAIN output changes. This mystified me\nfor awhile, because it sure looks like e89a71fb4 should have made it\nwork.\n\n>> It seems that in this case the top_plan does not have any extParam, so\n>> the Gather node that is added atop the top_plan does not have a chance\n>> to get its initParam filled in set_param_references().\n\nEventually I noticed that all the failing cases were instances of\noptimizing MIN()/MAX() aggregates into indexscans, and then I figured\nout what the problem is: we substitute Params for the optimized-away\nAggref nodes in setrefs.c, *after* SS_finalize_plan has been run.\nThat means we fail to account for those Params in extParam/allParam\nsets. We've gotten away with that up to now because such Params\ncould only appear where Aggrefs can appear, which is only in top-level\n(above scans and joins) nodes, which generally don't have any of the\nsorts of rescan optimizations that extParam/allParam bits control.\nBut this patch results in needing to have a correct extParam set for\nthe node just below Gather, and we don't. I am not sure whether there\nare any reachable bugs without this patch; but there might be, or some\nfuture optimization might introduce one.\n\nIt seems like the cleanest fix for this is to replace such optimized\nAggrefs in a separate tree scan before running SS_finalize_plan.\nThat's fairly annoying from a planner-runtime standpoint, although\nwe could skip the extra pass in the typical case where no minmax aggs\nhave been optimized.\n\nI also thought about swapping the order of operations so that we\nrun SS_finalize_plan after setrefs.c. That falls down because of\nset_param_references itself, which requires those bits to be\ncalculated already. But maybe we could integrate that computation\ninto SS_finalize_plan instead? There's certainly nothing very\npretty about the way it's done now.\n\nA band-aid fix that seemed to work is to have set_param_references\nconsult the Gather's own allParam set instead of the extParam set\nof its child. That feels like a kluge though, and it would not\nhelp matters for any future bug involving another usage of those\nbitmapsets.\n\nBTW, there is another way in which setrefs.c can inject PARAM_EXEC\nParams: it can translate PARAM_MULTIEXPR Params into those. So\nthose won't be accounted for either. I think this is probably\nnot a problem, especially not after 87f3667ec got us out of the\nbusiness of treating those like initPlan outputs. But it does\nseem like \"you can't inject PARAM_EXEC Params during setrefs.c\"\nwould not be a workable coding rule; it's too tempting to do\nexactly that.\n\nSo at this point my inclination is to try to move SS_finalize_plan\nto run after setrefs.c, but I've not written any code yet. 
I'm\nnot sure if we'd need to back-patch that, but it at least seems\nlike important future-proofing.\n\nNone of this would lead me to want to move initPlans to\nGather nodes injected by debug_parallel_query, though.\nWe'd have to kluge something to keep the EXPLAIN output\nlooking the same, and that seems like a kluge too many.\nWhat I am wondering is if the issue is reachable for\nGather nodes that are built organically by the regular\nplanner paths. It seems like that might be the case,\neither now or after applying this patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jul 2023 17:36:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "I wrote:\n> Eventually I noticed that all the failing cases were instances of\n> optimizing MIN()/MAX() aggregates into indexscans, and then I figured\n> out what the problem is: we substitute Params for the optimized-away\n> Aggref nodes in setrefs.c, *after* SS_finalize_plan has been run.\n> That means we fail to account for those Params in extParam/allParam\n> sets. We've gotten away with that up to now because such Params\n> could only appear where Aggrefs can appear, which is only in top-level\n> (above scans and joins) nodes, which generally don't have any of the\n> sorts of rescan optimizations that extParam/allParam bits control.\n> But this patch results in needing to have a correct extParam set for\n> the node just below Gather, and we don't. I am not sure whether there\n> are any reachable bugs without this patch; but there might be, or some\n> future optimization might introduce one.\n\n> It seems like the cleanest fix for this is to replace such optimized\n> Aggrefs in a separate tree scan before running SS_finalize_plan.\n> That's fairly annoying from a planner-runtime standpoint, although\n> we could skip the extra pass in the typical case where no minmax aggs\n> have been optimized.\n> I also thought about swapping the order of operations so that we\n> run SS_finalize_plan after setrefs.c. That falls down because of\n> set_param_references itself, which requires those bits to be\n> calculated already. But maybe we could integrate that computation\n> into SS_finalize_plan instead? There's certainly nothing very\n> pretty about the way it's done now.\n\nI tried both of those and concluded they'd be too messy for a patch\nthat we might find ourselves having to back-patch. So 0001 attached\nfixes it by teaching SS_finalize_plan to treat optimized MIN()/MAX()\naggregates as if they were already Params. It's slightly annoying\nto have knowledge of that optimization metastasizing into another\nplace, but the alternatives are even less palatable.\n\nHaving done that, if you adjust 0002 to inject Gathers even when\ndebug_parallel_query = regress, the only diffs in the core regression\ntests are that some initPlans disappear from EXPLAIN output. The\noutputs of the actual queries are still correct, demonstrating that\ne89a71fb4 does indeed make it work as long as the param bitmapsets\nare correct.\n\nI'm still resistant to the idea of kluging EXPLAIN to the extent\nof hiding the EXPLAIN output changes. It wouldn't be that hard\nto do really, but I worry that such a kluge might hide real problems\nin future. So what I did in 0002 was to allow initPlans for an\ninjected Gather only if debug_parallel_query = on, so that there\nwill be a place for EXPLAIN to show them. Other than the changes\nin that area, 0002 is the same as the previous patch.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 13 Jul 2023 17:44:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 5:44 AM Tom Lane <[email protected]> wrote:\n\n> I tried both of those and concluded they'd be too messy for a patch\n> that we might find ourselves having to back-patch. So 0001 attached\n> fixes it by teaching SS_finalize_plan to treat optimized MIN()/MAX()\n> aggregates as if they were already Params. It's slightly annoying\n> to have knowledge of that optimization metastasizing into another\n> place, but the alternatives are even less palatable.\n\n\nI tried with 0001 patch and can confirm that the wrong result issue\nshown in [1] is fixed.\n\nexplain (costs off, verbose) select min(i) from a;\n QUERY PLAN\n-----------------------------------------------------------\n Gather\n Output: ($0)\n Workers Planned: 1\n Params Evaluated: $0 <==== initplan params\n Single Copy: true\n InitPlan 1 (returns $0)\n -> Limit\n Output: a.i\n -> Index Only Scan using a_i_j_idx on public.a\n Output: a.i\n Index Cond: (a.i IS NOT NULL)\n -> Result\n Output: $0\n(13 rows)\n\nNow the Gather.initParam is filled and e89a71fb4 does its work to\ntransmit the Params to workers.\n\nSo +1 to 0001 patch.\n\n\n> I'm still resistant to the idea of kluging EXPLAIN to the extent\n> of hiding the EXPLAIN output changes. It wouldn't be that hard\n> to do really, but I worry that such a kluge might hide real problems\n> in future. So what I did in 0002 was to allow initPlans for an\n> injected Gather only if debug_parallel_query = on, so that there\n> will be a place for EXPLAIN to show them. Other than the changes\n> in that area, 0002 is the same as the previous patch.\n\n\nAlso +1 to 0002 patch.\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs48p-WpnLdR9ZQ4QsHZP_a-P0rktAYo4Z3uOHUAkH3fjQg%40mail.gmail.com\n\nThanks\nRichard\n\nOn Fri, Jul 14, 2023 at 5:44 AM Tom Lane <[email protected]> wrote:\nI tried both of those and concluded they'd be too messy for a patch\nthat we might find ourselves having to back-patch. So 0001 attached\nfixes it by teaching SS_finalize_plan to treat optimized MIN()/MAX()\naggregates as if they were already Params. It's slightly annoying\nto have knowledge of that optimization metastasizing into another\nplace, but the alternatives are even less palatable.I tried with 0001 patch and can confirm that the wrong result issueshown in [1] is fixed.explain (costs off, verbose) select min(i) from a; QUERY PLAN----------------------------------------------------------- Gather Output: ($0) Workers Planned: 1 Params Evaluated: $0 <==== initplan params Single Copy: true InitPlan 1 (returns $0) -> Limit Output: a.i -> Index Only Scan using a_i_j_idx on public.a Output: a.i Index Cond: (a.i IS NOT NULL) -> Result Output: $0(13 rows)Now the Gather.initParam is filled and e89a71fb4 does its work totransmit the Params to workers.So +1 to 0001 patch. \nI'm still resistant to the idea of kluging EXPLAIN to the extent\nof hiding the EXPLAIN output changes. It wouldn't be that hard\nto do really, but I worry that such a kluge might hide real problems\nin future. So what I did in 0002 was to allow initPlans for an\ninjected Gather only if debug_parallel_query = on, so that there\nwill be a place for EXPLAIN to show them. Other than the changes\nin that area, 0002 is the same as the previous patch.Also +1 to 0002 patch.[1] https://www.postgresql.org/message-id/CAMbWs48p-WpnLdR9ZQ4QsHZP_a-P0rktAYo4Z3uOHUAkH3fjQg%40mail.gmail.comThanksRichard",
"msg_date": "Fri, 14 Jul 2023 15:35:25 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allowing parallel-safe initplans"
},
{
"msg_contents": "Richard Guo <[email protected]> writes:\n> So +1 to 0001 patch.\n> Also +1 to 0002 patch.\n\nPushed, thanks for looking at it!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jul 2023 11:57:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allowing parallel-safe initplans"
}
] |
[
{
"msg_contents": "Hi,\n\nForking of [1].\n\nOn 2023-04-12 14:11:10 -0400, Joe Conway wrote:\n> On 4/12/23 13:44, Alvaro Herrera wrote:\n> > Revert \"Catalog NOT NULL constraints\" and fallout\n> >\n> > This reverts commit e056c557aef4 and minor later fixes thereof.\n>\n> Seems 76c111a7f1 (as well as some other maybe) needs to be reverted as well.\n\nThis reminds me: Is there a chance you could help out trying to make sepgsql\nget tested as part of CI?\n\nCurrently CI uses only Debian for linux - can sepgsql be made work on that?\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/84b8db57-de05-b413-1826-bc8c9cef85e3%40joeconway.com\n\n\n",
"msg_date": "Wed, 12 Apr 2023 11:17:48 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "testing sepgsql in CI"
},
{
"msg_contents": "On 4/12/23 14:17, Andres Freund wrote:\n> Hi,\n> \n> Forking of [1].\n> \n> On 2023-04-12 14:11:10 -0400, Joe Conway wrote:\n>> On 4/12/23 13:44, Alvaro Herrera wrote:\n>> > Revert \"Catalog NOT NULL constraints\" and fallout\n>> >\n>> > This reverts commit e056c557aef4 and minor later fixes thereof.\n>>\n>> Seems 76c111a7f1 (as well as some other maybe) needs to be reverted as well.\n> \n> This reminds me: Is there a chance you could help out trying to make sepgsql\n> get tested as part of CI?\n> \n> Currently CI uses only Debian for linux - can sepgsql be made work on that?\n\nTheoretically selinux can be made to work on Debian, but I only have \none, failed, past attempt at making it work ;-)\n\nI can certainly try to help though.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 12 Apr 2023 14:36:43 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: testing sepgsql in CI"
}
] |
[
{
"msg_contents": "Hi,\n\npg_create_logical_replication_slot can take longer than usual on a standby\nwhen there is no activity on the primary. We don't have enough information\nin the pg_stat_activity or process title to debug why this is taking so\nlong. Attached a small patch to update the process title while waiting for\nthe wal in read_local_xlog_page_guts. Any thoughts on introducing a new\nwait event too?\n\nFor example, in my setup, slot creation took 8 minutes 13 seconds. It only\nsucceeded after I ran select txid_current() on primary.\n\npostgres=# select pg_create_logical_replication_slot('s1','test_decoding');\n\n pg_create_logical_replication_slot\n------------------------------------\n (s1,0/C096D10)\n(1 row)\n\nTime: 493365.995 ms (08:13.366)\n\nThanks,\nSirisha",
"msg_date": "Wed, 12 Apr 2023 15:43:40 -0700",
"msg_from": "sirisha chamarthi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Add ps display while waiting for wal in read_local_xlog_page_guts"
},
{
"msg_contents": "sirisha chamarthi <[email protected]> writes:\n> pg_create_logical_replication_slot can take longer than usual on a standby\n> when there is no activity on the primary. We don't have enough information\n> in the pg_stat_activity or process title to debug why this is taking so\n> long. Attached a small patch to update the process title while waiting for\n> the wal in read_local_xlog_page_guts. Any thoughts on introducing a new\n> wait event too?\n\nset_ps_display is a fairly expensive operation on a lot of platforms,\nso I'm concerned about the overhead this proposal would add. However,\ngetting rid of that pg_usleep in favor of a proper wait event seems\nlike a good idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 22:29:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add ps display while waiting for wal in read_local_xlog_page_guts"
},
{
"msg_contents": "Hi,\n\nOn 4/13/23 12:43 AM, sirisha chamarthi wrote:\n> Hi,\n> \n> pg_create_logical_replication_slot can take longer than usual on a standby when there is no activity on the primary. We don't have enough information in the pg_stat_activity or process title to debug why this is taking so long. Attached a small patch to update the process title while waiting for the wal in read_local_xlog_page_guts. Any thoughts on introducing a new wait event too?\n> \n> For example, in my setup, slot creation took 8 minutes 13 seconds. It only succeeded after I ran select txid_current() on primary.\n\nFWIW, this behavior has been mentioned in 0fdab27ad6 and a new function (pg_log_standby_snapshot()) has been created/documented to accelerate the slot creation on the standby.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Apr 2023 08:02:48 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add ps display while waiting for wal in read_local_xlog_page_guts"
},
{
"msg_contents": "Hi,\n\nOn 4/13/23 4:29 AM, Tom Lane wrote:\n> sirisha chamarthi <[email protected]> writes:\n>> pg_create_logical_replication_slot can take longer than usual on a standby\n>> when there is no activity on the primary. We don't have enough information\n>> in the pg_stat_activity or process title to debug why this is taking so\n>> long. Attached a small patch to update the process title while waiting for\n>> the wal in read_local_xlog_page_guts. \n\nThanks for the patch!\n\n> Any thoughts on introducing a new\n>> wait event too?\n> \n> set_ps_display is a fairly expensive operation on a lot of platforms,\n> so I'm concerned about the overhead this proposal would add. However,\n> getting rid of that pg_usleep in favor of a proper wait event seems\n> like a good idea.\n> \n\n+1 for adding a proper wait event.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Apr 2023 08:05:27 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Add ps display while waiting for wal in read_local_xlog_page_guts"
}
] |
[
{
"msg_contents": "Hi all,\n\nWith db4f21e in place, there is no need to worry about explicitely\nfreeing any regular expressions that may have been compiled when\nloading HBA or ident files because MemoryContextDelete() would be\nable to take care of that now that these are palloc'd (the definitions\nin regcustom.h superseed the ones of regguts.h).\n\nThe logic in hba.c that scans all the HBA and ident lines to any\nregexps can be simplified a lot. Most of this code is new in 16~, so\nI think that it is worth cleaning up this stuff now rather than wait\nfor 17 to open for business. Still, this is optional, and I don't\nmind waiting for 17 if the regexp/palloc business proves to be an\nissue during beta.\n\nFWIW, one would see leaks in the postmaster process with files like\nthat on repeated SIGHUPs before db4f21e:\n$ cat $PGDATA/pg_hba.conf\nlocal \"/^db\\d{2,4}$\" all trust\nlocal \"/^db\\d{2,po}$\" all trust\nlocal \"/^db\\d{2,4}$\" all trust\n$ cat $PGDATA/pg_ident.conf\nfoo \"/^user\\d{2,po}$\" bar\nfoo \"/^uesr\\d{2,4}$\" bar\n\nWith this configuration, there are no leaks on SIGHUPs after db4f21e\nas MemoryContextDelete() does all the job. Please see the attached.\n\nThoughts or opinions?\n--\nMichael",
"msg_date": "Thu, 13 Apr 2023 09:16:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 12:16 PM Michael Paquier <[email protected]> wrote:\n> The logic in hba.c that scans all the HBA and ident lines to any\n> regexps can be simplified a lot. Most of this code is new in 16~, so\n> I think that it is worth cleaning up this stuff now rather than wait\n> for 17 to open for business. Still, this is optional, and I don't\n> mind waiting for 17 if the regexp/palloc business proves to be an\n> issue during beta.\n\nUp to the RMT of course, but it sounds a bit like (1) you potentially\nhad an open item already until last week (new code in 16 that could\nleak regexes), and (2) I missed this when looking for manual memory\nmanagement code that could be nuked, probably because it's hiding\nbehind a few layers of functions call, but there are clearly comments\nthat are now wrong. So there are two different ways for a commitfest\nlawyer to argue this should be tidied up for 16.\n\n\n",
"msg_date": "Thu, 13 Apr 2023 12:53:51 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 12:53:51PM +1200, Thomas Munro wrote:\n> Up to the RMT of course, but it sounds a bit like (1) you potentially\n> had an open item already until last week (new code in 16 that could\n> leak regexes),\n\nWell, not really.. Note that HEAD does not leak regexes, because\nchanges like 02d3448 have made sure that all these are explicitely\nfreed when a file is parsed and where there is no need to switch to\nthe new one because errors were found on the way. In short, one can\njust take the previous conf files I pasted and there will be no leaks\non HEAD in the context of the postmaster, even before bea3d7e. Sure,\nthere could be issues if one changed the shape of the code, but all\nthe existing compiled regexes were covered (this stuff already exists\nin ~15 for the regexps of the user name in ident lines).\n\n> and (2) I missed this when looking for manual memory\n> management code that could be nuked, probably because it's hiding\n> behind a few layers of functions call, but there are clearly comments\n> that are now wrong. So there are two different ways for a commitfest\n> lawyer to argue this should be tidied up for 16.\n\nYes, the comments are incorrect anyway.\n--\nMichael",
"msg_date": "Thu, 13 Apr 2023 10:24:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Apr 13, 2023 at 12:53:51PM +1200, Thomas Munro wrote:\n>> and (2) I missed this when looking for manual memory\n>> management code that could be nuked, probably because it's hiding\n>> behind a few layers of functions call, but there are clearly comments\n>> that are now wrong. So there are two different ways for a commitfest\n>> lawyer to argue this should be tidied up for 16.\n\n> Yes, the comments are incorrect anyway.\n\n+1 for cleanup, if this is new code. It does us no good in the long\nrun for v16 to handle this differently from both earlier and later\nversions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Apr 2023 22:25:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 10:25:42PM -0400, Tom Lane wrote:\n> +1 for cleanup, if this is new code. It does us no good in the long\n> run for v16 to handle this differently from both earlier and later\n> versions.\n\nOkidoki. Let me know if anybody has an objection about what's been\nsent upthread. The sooner, the better I guess..\n--\nMichael",
"msg_date": "Thu, 13 Apr 2023 12:48:14 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "Hi,\n\nOn 4/13/23 5:48 AM, Michael Paquier wrote:\n> On Wed, Apr 12, 2023 at 10:25:42PM -0400, Tom Lane wrote:\n>> +1 for cleanup, if this is new code. It does us no good in the long\n>> run for v16 to handle this differently from both earlier and later\n>> versions.\n> \n> Okidoki. Let me know if anybody has an objection about what's been\n> sent upthread. The sooner, the better I guess..\n\nThanks for the patch, nice catch related to db4f21e.\n\nI had a look at the patch you shared up-thread and it looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 13 Apr 2023 08:26:51 +0200",
"msg_from": "\"Drouvot, Bertrand\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "On 2023-Apr-13, Michael Paquier wrote:\n\n> With db4f21e in place, there is no need to worry about explicitely\n> freeing any regular expressions that may have been compiled when\n> loading HBA or ident files because MemoryContextDelete() would be\n> able to take care of that now that these are palloc'd (the definitions\n> in regcustom.h superseed the ones of regguts.h).\n\nHmm, nice.\n\n> The logic in hba.c that scans all the HBA and ident lines to any\n> regexps can be simplified a lot. Most of this code is new in 16~, so\n> I think that it is worth cleaning up this stuff now rather than wait\n> for 17 to open for business. Still, this is optional, and I don't\n> mind waiting for 17 if the regexp/palloc business proves to be an\n> issue during beta.\n\nI agree with the downthread votes to clean this up now rather than\nwaiting. Also, you're adding exactly zero lines of new code, so I don't\nthink feature freeze affects the decision.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n",
"msg_date": "Thu, 13 Apr 2023 11:58:51 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "On Thu, Apr 13, 2023 at 11:58:51AM +0200, Alvaro Herrera wrote:\n> I agree with the downthread votes to clean this up now rather than\n> waiting. Also, you're adding exactly zero lines of new code, so I don't\n> think feature freeze affects the decision.\n\nThanks, done that.\n\nThe commit log mentions Lab.c instead of hba.c. I had that fixed\nlocally, but it seems like I've messed up things a bit.. Sorry about\nthat.\n--\nMichael",
"msg_date": "Fri, 14 Apr 2023 07:32:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 07:32:13AM +0900, Michael Paquier wrote:\n> On Thu, Apr 13, 2023 at 11:58:51AM +0200, Alvaro Herrera wrote:\n>> I agree with the downthread votes to clean this up now rather than\n>> waiting. Also, you're adding exactly zero lines of new code, so I don't\n>> think feature freeze affects the decision.\n> \n> Thanks, done that.\n> \n> The commit log mentions Lab.c instead of hba.c. I had that fixed\n> locally, but it seems like I've messed up things a bit.. Sorry about\n> that.\n\nAFAICT this is committed, but the commitfest entry [0] is still set to\n\"needs review.\" Can it be closed now?\n\n[0] https://commitfest.postgresql.org/43/4277/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 17 Apr 2023 14:21:41 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 02:21:41PM -0700, Nathan Bossart wrote:\n> AFAICT this is committed, but the commitfest entry [0] is still set to\n> \"needs review.\" Can it be closed now?\n\nYes, done. Thanks.\n--\nMichael",
"msg_date": "Tue, 18 Apr 2023 07:05:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clean up hba.c of code freeing regexps"
}
] |
[
{
"msg_contents": "Greetings,\n\n* [email protected] ([email protected]) wrote:\n> The PGBuildfarm member pollock had the following event on branch HEAD:\n> Failed at Stage: Make\n> The snapshot timestamp for the build is: 2023-04-13 13:06:34\n> The specs of this machine are:\n> \tOS: OmniOS / illumos / r151038\n> \tArch: amd64\n> \tComp: gcc / 10.2.0\n\nI've reached out to Andy to ask about dropping --with-gssapi from\npollock's configure line. It's not entirely clear to me what is\ninstalled on that system as it seems to have a gssapi_ext.h but not the\ncredential store feature, which would seem to indicate a decade+ old\nversion of MIT Kerberos or some other GSSAPI library which hasn't kept\npace, at the least, with ongoing Kerberos development.\n\nAs they recently also dropped GSSAPI support for OpenSSH for OmniOS[1],\nit seems unlikely to be an issue for them to remove it for PG too.\n\nThanks,\n\nStephen\n\n[1]: https://github.com/omniosorg/omnios-build/blob/r151038/doc/ReleaseNotes.md",
"msg_date": "Thu, 13 Apr 2023 09:26:35 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGBuildfarm member pollock Branch HEAD Failed at Stage Make"
}
] |
[
{
"msg_contents": "Hi,\n\nThe documentation [1] says max_wal_size and min_wal_size defaults are 1GB\nand 80 MB respectively. However, these are configured based on the\nwal_segment_size and documentation is not clear about it. Attached a patch\nto fix the documentation.\n\n[1] https://www.postgresql.org/docs/devel/runtime-config-wal.html\n\nThanks,\nSirisha",
"msg_date": "Thu, 13 Apr 2023 12:01:04 -0700",
"msg_from": "sirisha chamarthi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix documentation for max_wal_size and min_wal_size"
},
{
"msg_contents": "At Thu, 13 Apr 2023 12:01:04 -0700, sirisha chamarthi <[email protected]> wrote in \n> The documentation [1] says max_wal_size and min_wal_size defaults are 1GB\n> and 80 MB respectively. However, these are configured based on the\n> wal_segment_size and documentation is not clear about it. Attached a patch\n> to fix the documentation.\n> \n> [1] https://www.postgresql.org/docs/devel/runtime-config-wal.html\n\nGood catch! Now wal_segment_size is easily changed.\n\n- The default is 1 GB.\n+ The default value is configured to maximum of 64 times the <varname>wal_segment_size</varname> or 1 GB.\n- The default is 80 MB.\n+ The default value is configured to maximum of 5 times the <varname>wal_segment_size</varname> or 80 MB.\n\nHowever, I believe that most users don't change the WAL segment size,\nso the primary information is that the default sizes are 1GB and 80MB.\n\nSo, I personally think it should be written like this: \"The default\nsize is 80MB. However, if you have changed the WAL segment size from\nthe default of 16MB, it will be five times the segment size.\", but I'm\nnot sure what the others think about this..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 14 Apr 2023 17:01:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
},
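As a worked example of the formulas quoted above (a standalone sketch, not initdb's actual code; the 5x and 64x multipliers come from the doc text proposed in this thread):

    #include <stdio.h>

    int
    main(void)
    {
        int seg_mb = 64;                 /* e.g. initdb --wal-segsize=64 */
        int min_wal_mb = 5 * seg_mb;     /* 320 MB instead of the usual 80 MB */
        int max_wal_mb = 64 * seg_mb;    /* 4096 MB instead of the usual 1 GB */

        printf("min_wal_size = %d MB, max_wal_size = %d MB\n",
               min_wal_mb, max_wal_mb);
        return 0;
    }
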
{
"msg_contents": "Hi,\n\nOn Fri, Apr 14, 2023 at 1:01 AM Kyotaro Horiguchi <[email protected]>\nwrote:\n\n> At Thu, 13 Apr 2023 12:01:04 -0700, sirisha chamarthi <\n> [email protected]> wrote in\n> > The documentation [1] says max_wal_size and min_wal_size defaults are 1GB\n> > and 80 MB respectively. However, these are configured based on the\n> > wal_segment_size and documentation is not clear about it. Attached a\n> patch\n> > to fix the documentation.\n> >\n> > [1] https://www.postgresql.org/docs/devel/runtime-config-wal.html\n>\n> Good catch! Now wal_segment_size is easily changed.\n>\n> - The default is 1 GB.\n> + The default value is configured to maximum of 64 times the\n> <varname>wal_segment_size</varname> or 1 GB.\n> - The default is 80 MB.\n> + The default value is configured to maximum of 5 times the\n> <varname>wal_segment_size</varname> or 80 MB.\n>\n> However, I believe that most users don't change the WAL segment size,\n> so the primary information is that the default sizes are 1GB and 80MB.\n>\n> So, I personally think it should be written like this: \"The default\n> size is 80MB. However, if you have changed the WAL segment size from\n> the default of 16MB, it will be five times the segment size.\", but I'm\n> not sure what the others think about this..\n\n\nThis looks good to me.\n\nThanks,\nSirisha\n\nHi,On Fri, Apr 14, 2023 at 1:01 AM Kyotaro Horiguchi <[email protected]> wrote:At Thu, 13 Apr 2023 12:01:04 -0700, sirisha chamarthi <[email protected]> wrote in \n> The documentation [1] says max_wal_size and min_wal_size defaults are 1GB\n> and 80 MB respectively. However, these are configured based on the\n> wal_segment_size and documentation is not clear about it. Attached a patch\n> to fix the documentation.\n> \n> [1] https://www.postgresql.org/docs/devel/runtime-config-wal.html\n\nGood catch! Now wal_segment_size is easily changed.\n\n- The default is 1 GB.\n+ The default value is configured to maximum of 64 times the <varname>wal_segment_size</varname> or 1 GB.\n- The default is 80 MB.\n+ The default value is configured to maximum of 5 times the <varname>wal_segment_size</varname> or 80 MB.\n\nHowever, I believe that most users don't change the WAL segment size,\nso the primary information is that the default sizes are 1GB and 80MB.\n\nSo, I personally think it should be written like this: \"The default\nsize is 80MB. However, if you have changed the WAL segment size from\nthe default of 16MB, it will be five times the segment size.\", but I'm\nnot sure what the others think about this..This looks good to me.Thanks,Sirisha",
"msg_date": "Mon, 17 Apr 2023 19:57:58 -0700",
"msg_from": "sirisha chamarthi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 07:57:58PM -0700, sirisha chamarthi wrote:\n> On Fri, Apr 14, 2023 at 1:01 AM Kyotaro Horiguchi <[email protected]>\n> wrote:\n>> So, I personally think it should be written like this: \"The default\n>> size is 80MB. However, if you have changed the WAL segment size from\n>> the default of 16MB, it will be five times the segment size.\", but I'm\n>> not sure what the others think about this..\n\nYes, I was under the impression that this should mention 16MB, but\nI'd also add a note about initdb when a non-default value is specified\nfor the segment size.\n--\nMichael",
"msg_date": "Tue, 18 Apr 2023 13:38:27 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
},
{
"msg_contents": "Hi\n\nOn Mon, Apr 17, 2023 at 9:38 PM Michael Paquier <[email protected]> wrote:\n\n> On Mon, Apr 17, 2023 at 07:57:58PM -0700, sirisha chamarthi wrote:\n> > On Fri, Apr 14, 2023 at 1:01 AM Kyotaro Horiguchi <\n> [email protected]>\n> > wrote:\n> >> So, I personally think it should be written like this: \"The default\n> >> size is 80MB. However, if you have changed the WAL segment size from\n> >> the default of 16MB, it will be five times the segment size.\", but I'm\n> >> not sure what the others think about this..\n>\n> Yes, I was under the impression that this should mention 16MB, but\n> I'd also add a note about initdb when a non-default value is specified\n> for the segment size.\n>\n\nHow about the text below?\n\n\"The default size is 80MB. However, if you have changed the WAL segment size\n from the default of 16MB with the initdb option --wal-segsize, it will be\nfive times the segment size.\"\n\nHiOn Mon, Apr 17, 2023 at 9:38 PM Michael Paquier <[email protected]> wrote:On Mon, Apr 17, 2023 at 07:57:58PM -0700, sirisha chamarthi wrote:\n> On Fri, Apr 14, 2023 at 1:01 AM Kyotaro Horiguchi <[email protected]>\n> wrote:\n>> So, I personally think it should be written like this: \"The default\n>> size is 80MB. However, if you have changed the WAL segment size from\n>> the default of 16MB, it will be five times the segment size.\", but I'm\n>> not sure what the others think about this..\n\nYes, I was under the impression that this should mention 16MB, but\nI'd also add a note about initdb when a non-default value is specified\nfor the segment size.How about the text below? \"The default size is 80MB. However, if you have changed the WAL segment size from the default of 16MB with the initdb option --wal-segsize, it will be five times the segment size.\"",
"msg_date": "Tue, 18 Apr 2023 01:46:21 -0700",
"msg_from": "sirisha chamarthi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 01:46:21AM -0700, sirisha chamarthi wrote:\n> \"The default size is 80MB. However, if you have changed the WAL segment size\n> from the default of 16MB with the initdb option --wal-segsize, it will be\n> five times the segment size.\"\n\nYes, I think that something close to that would be OK. Do others have\nany comments or extra ideas to offer?\n--\nMichael",
"msg_date": "Sat, 22 Apr 2023 17:39:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
},
{
"msg_contents": "\n\nOn 2023/04/22 17:39, Michael Paquier wrote:\n> On Tue, Apr 18, 2023 at 01:46:21AM -0700, sirisha chamarthi wrote:\n>> \"The default size is 80MB. However, if you have changed the WAL segment size\n>> from the default of 16MB with the initdb option --wal-segsize, it will be\n>> five times the segment size.\"\n> \n> Yes, I think that something close to that would be OK. Do others have\n> any comments or extra ideas to offer?\n\nIf the WAL segment size is changed from the default value of 16MB during initdb,\nthe value of min_wal_size in postgresql.conf is set to five times the new segment\nsize by initdb. However, pg_settings.boot_val (the value of min_wal_size used\nwhen the parameter is not otherwise set) is still 80MB. So if pg_settings.boot_val\nshould be documented as the default value, the current description seems OK.\n\nOr we can clarify that the default value of min_wal_size is 80MB, but when initdb\nis run with a non-default segment size, the value in postgresql.conf is changed\nto five times the new segment size? This will help users better understand\nthe behavior of the setting and how it is affected by changes made during initdb.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 22 Apr 2023 18:52:27 +0900",
"msg_from": "Fujii Masao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
},
{
"msg_contents": "At Sat, 22 Apr 2023 18:52:27 +0900, Fujii Masao <[email protected]> wrote in \n> If the WAL segment size is changed from the default value of 16MB\n> during initdb,\n> the value of min_wal_size in postgresql.conf is set to five times the\n> new segment\n> size by initdb. However, pg_settings.boot_val (the value of\n> min_wal_size used\n> when the parameter is not otherwise set) is still 80MB. So if\n> pg_settings.boot_val\n> should be documented as the default value, the current description\n> seems OK.\n\nHmm, things are a bit more complex than I initially thought. The value is actually set in the configuration file, but the lines are added by initdb. The documentation states the following.\n\nhttps://www.postgresql.org/docs/devel/view-pg-settings.html\n> boot_val text\n> Parameter value assumed at server startup if the parameter is not\n> otherwise set\n\nSo, the description is accurate as it stands. If I remove the lines, they will revert to the default values.\n\n\n> Or we can clarify that the default value of min_wal_size is 80MB, but\n> when initdb\n> is run with a non-default segment size, the value in postgresql.conf\n> is changed\n> to five times the new segment size? This will help users better\n> understand\n> the behavior of the setting and how it is affected by changes made\n> during initdb.\n\nSo, to clarify the situation, I would phrase it like this:\n\nThe default size is 80MB. Note that if you have changed the WAL\nsegment size from the default of 16MB with the initdb option\n--wal-segsize, the tool should have added the settings with the values\nequal to five times the specified segment size to the configuration\nfile.\n\nIt might be a bit wordy, though..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 24 Apr 2023 10:57:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
},
{
"msg_contents": "At Mon, 24 Apr 2023 10:57:56 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> So, to clarify the situation, I would phrase it like this:\n> \n> The default size is 80MB. Note that if you have changed the WAL\n> segment size from the default of 16MB with the initdb option\n> --wal-segsize, the tool should have added the settings with the values\n> equal to five times the specified segment size to the configuration\n> file.\n> \n> It might be a bit wordy, though..\n\nOr would the following work?\n\nThe default size is 80MB. Note that initdb may have added the setting\nfor this value if you have specified the WAL segment size when running\nthe tool.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 24 Apr 2023 11:03:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix documentation for max_wal_size and min_wal_size"
}
] |
[
{
"msg_contents": "I find that if I run the following test against a standard debug build\non HEAD, my local installation reliably segfaults:\n\n$ meson test --setup running --suite test_rls_hooks-running\n\nAttached is a \"bt full\" run from gdb against a core dump. The query\n\"EXPLAIN (costs off) SELECT * FROM rls_test_permissive;\" runs when the\nbackend segfaults.\n\nThe top frame of the back trace is suggestive of a use-after-free:\n\n#0 copyObjectImpl (from=0x7f7f7f7f7f7f7f7e) at copyfuncs.c:187\n187 switch (nodeTag(from))\n...\n\n\"git bisect\" suggests that the problem began at commit 6ee30209,\n\"SQL/JSON: support the IS JSON predicate\".\n\nIt's a bit surprising that the bug reproduces when I run a standard\ntest, and yet we appear to have a bug that's about 2 weeks old. There\nmay be something unusual about my system that will turn out to be\nrelevant -- though there is nothing particularly exotic about this\nmachine. My repro doesn't rely on concurrent execution, or timing, or\nanything like that -- it's quite reliable.\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 13 Apr 2023 21:14:01 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "segfault tied to \"IS JSON predicate\" commit"
},
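The 0x7f byte pattern in that pointer is what assert-enabled builds leave behind in freed memory (the real wiping lives in the memory-context code and is only compiled in when freed-memory clobbering is enabled); a minimal standalone sketch of why a dangling read looks like that:

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        void   *p;
        char    chunk[sizeof(void *) * 2];

        memset(chunk, 0x7F, sizeof(chunk));   /* how freed chunks get wiped */
        memcpy(&p, chunk, sizeof(p));         /* a dangling pointer read */
        printf("dangling pointer looks like %p\n", p);
        return 0;
    }
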
{
"msg_contents": "On Thu, Apr 13, 2023 at 09:14:01PM -0700, Peter Geoghegan wrote:\n> I find that if I run the following test against a standard debug build\n> on HEAD, my local installation reliably segfaults:\n> \n> $ meson test --setup running --suite test_rls_hooks-running\n> \n> Attached is a \"bt full\" run from gdb against a core dump. The query\n> \"EXPLAIN (costs off) SELECT * FROM rls_test_permissive;\" runs when the\n> backend segfaults.\n> \n> The top frame of the back trace is suggestive of a use-after-free:\n> \n> #0 copyObjectImpl (from=0x7f7f7f7f7f7f7f7e) at copyfuncs.c:187\n> 187 switch (nodeTag(from))\n> ...\n> \n> \"git bisect\" suggests that the problem began at commit 6ee30209,\n> \"SQL/JSON: support the IS JSON predicate\".\n> \n> It's a bit surprising that the bug reproduces when I run a standard\n> test, and yet we appear to have a bug that's about 2 weeks old. There\n> may be something unusual about my system that will turn out to be\n> relevant -- though there is nothing particularly exotic about this\n> machine. My repro doesn't rely on concurrent execution, or timing, or\n> anything like that -- it's quite reliable.\n\nI was able to reproduce this yesterday but not today.\n\nI think what happened is that you (and I) are in the habbit of running\n\"meson test tmp_install\" to compile new binaries and install them into\n./tmp_install, and then run a server out from there. But nowadays\nthere's also \"meson test install_test_files\". I'm not sure what\ncombination of things are out of sync, but I suspect you forgot one of\n0) compile *and* install the new binaries; or 1) restart the running\npostmaster; or, 2) install the new shared library (\"test files\").\n\nI saw the crash again when I did this:\n\ntime ninja\ntime meson test tmp_install install_test_files regress/regress # does not recompile, BTW\n./tmp_install/usr/local/pgsql/bin/postgres -D ./testrun/regress/regress/tmp_check/data -p 5678 -c autovacuum=no&\ngit checkout HEAD~222\ntime meson test tmp_install install_test_files\ntime PGPORT=5678 meson test --setup running test_rls_hooks-running/regress\n\nIn this case, I'm not sure if there's anything to blame meson for; the\nissue is running server, which surely has different structures since\nlast month.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 15 Apr 2023 16:46:14 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: segfault tied to \"IS JSON predicate\" commit"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 2:46 PM Justin Pryzby <[email protected]> wrote:\n> I think what happened is that you (and I) are in the habbit of running\n> \"meson test tmp_install\" to compile new binaries and install them into\n> ./tmp_install, and then run a server out from there.\n\nThat's not my habit; this is running against a server that was\ninstalled into a dedicated install directory. Though I agree that an\nissue with the environment seems likely.\n\n> But nowadays\n> there's also \"meson test install_test_files\".\n\nThat only applies with \"--setup tmp_install\", which is the default\ntest setup, and the one that you must be using implicitly. But I'm\nusing \"--setup running\" for this.\n\nMore concretely, the tmp_install setup has the tests you say are requirements:\n\n$ meson test --setup tmp_install --list | grep install\npostgresql:setup / tmp_install\npostgresql:setup / install_test_files\n\nBut not the running setup:\n\n$ meson test --setup running --list | grep install | wc -l\n0\n\nI'm aware of the requirement around specifying \"--suite tmp_install\n...\" right before \"... --suite what_i_really_want_to_test\" is\nspecified. However, I can't see how it could be my fault for\nforgetting that, since it's structurally impossible to specify\n\"--suite tmp_install\" when using \"--setup running\". I was using the\nsetup that gives you behavior that's approximately equivalent to \"make\ninstallcheck\" (namely \"--setup running\"), so naturally this would have\nbeen impossible.\n\nLet's review:\n\n* There are two possible --setup modes. I didn't use the default\n(which is \"--setup tmp_install\") here. Rather, I used \"--setup\nrunning\", which is kinda like \"make installcheck\".\n\n* There is a test suite named \"setup\", though it's only available with\n\"--setup tmp_install\", the default setup. (This is not to be confused\nwith the meson-specific notion of a --setup.)\n\n* The \"setup\" suite happens to contain an individual test called\n\"tmp_install\" (as well as one called \"install_test_files\")\n\n* I cannot possibly have forgotten this, since asking for it with\n\"--setup running\" just doesn't work.\n\nLet's demonstrate what I mean. The following does not and cannot\nwork, so I cannot have forgotten to do it in any practical sense:\n\n$ meson test --setup running postgresql:setup / tmp_install\nninja: no work to do.\nNo suitable tests defined.\n\nSuch an incantation can only be expected to work with --setup tmp_install, the\ndefault. So this version does work:\n\n$ meson test --setup tmp_install postgresql:setup / tmp_install\n**SNIP**\n1/1 postgresql:setup / tmp_install OK 0.72s\n**SNIP**\n\nNot confusing at all!\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 15 Apr 2023 16:11:32 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: segfault tied to \"IS JSON predicate\" commit"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 4:11 PM Peter Geoghegan <[email protected]> wrote:\n> $ meson test --setup tmp_install --list | grep install\n> postgresql:setup / tmp_install\n> postgresql:setup / install_test_files\n>\n> But not the running setup:\n>\n> $ meson test --setup running --list | grep install | wc -l\n> 0\n\nThere is a concrete problem here: commit b6a0d469ca (\"meson: Prevent\ninstallation of test files during main install\") overlooked \"--setup\nrunning\". It did not add a way for the setup to run \"postgresql:setup\n/ install_test_files\" (or perhaps something very similar).\n\nThe segfault must have been caused by unwitting use of a leftover\nancient test_rls_hooks.so from before commit b6a0d469ca. My old stale\n.so must have continued to work for a little while, before it broke.\nNow that I've fully deleted my install directory, I can see a clear\nproblem, which is much less mysterious than the segfault. Namely, the\nfollowing doesn't still work:\n\n$ meson test --setup running --suite test_rls_hooks-running\n\nThis time it's not a segfault, though -- it's due to the .so being\nunavailable. Adding \"--suite setup\" fixes nothing, since I'm using\n\"--setup running\"; while the \"--suite running\" tests will actually run\nand install the .so, they won't install it into the installation\ndirectory I'm actually using (only into a tmp_install directory).\nWhile I was wrong to implicate commit 6ee30209 (the IS JSON commit) at\nfirst, there is a bug here. A bug in b6a0d469ca.\n\nISTM that b6a0d469ca has created an unmet need for a \"--suite\nsetup-running\", which is analogous to \"--suite setup\" but works with\n\"--setup running\". That way there'd at least be a\n\"postgresql:setup-running / install_test_files\" test that could be\nused here, like so:\n\n$ meson test --setup running --suite setup-running --suite\ntest_rls_hooks-running\n\nBut...maybe it would be better to find a way to install the stuff from\n\"postgresql:setup / install_test_files\" in a less kludgy, more\nstandard kind of way? I see that the commit message from b6a0d469ca\nsays \"there is no way to set up up the build system to install extra\nthings only when told\". Is that *really* the case?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 15 Apr 2023 17:15:11 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: segfault tied to \"IS JSON predicate\" commit"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 5:15 PM Peter Geoghegan <[email protected]> wrote:\n> ISTM that b6a0d469ca has created an unmet need for a \"--suite\n> setup-running\", which is analogous to \"--suite setup\" but works with\n> \"--setup running\". That way there'd at least be a\n> \"postgresql:setup-running / install_test_files\" test that could be\n> used here, like so:\n>\n> $ meson test --setup running --suite setup-running --suite\n> test_rls_hooks-running\n>\n> But...maybe it would be better to find a way to install the stuff from\n> \"postgresql:setup / install_test_files\" in a less kludgy, more\n> standard kind of way? I see that the commit message from b6a0d469ca\n> says \"there is no way to set up up the build system to install extra\n> things only when told\". Is that *really* the case?\n\nI see that CI deals with this problem using this kludge on FreeBSD,\nwhich tests \"--setup running\":\n\n meson test $MTEST_ARGS --quiet --suite setup\n export LD_LIBRARY_PATH=\"$(pwd)/build/tmp_install/usr/local/pgsql/lib/:$LD_LIBRARY_PATH\"\n\nThat's why CI never failed due to commit b6a0d469ca.\n\nThis doesn't seem like something that should become standard operating\nprocedure. Not that it is right now, mind you. This isn't documented\nanywhere, even though \"--setup running\" is documented (albeit lightly)\nin the sgml docs.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Apr 2023 09:32:51 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: segfault tied to \"IS JSON predicate\" commit"
}
] |
[
{
"msg_contents": "Just for fun, I broke time up into 15 minute intervals and counted how\nmany machines were showing red on HEAD at each sample point (lateral\njoin for last tick interpolation of data I collect from the BF), and\nplotted that over time. See attached.\n\nI excluded seawasp (it tells us about *future* breakage against LLVM\nwhich unfortunately we don't always act on immediately, more on that\nshortly) and lorikeet (it is known to be broken, not our fault, though\nFWIW it hasn't crashed in > 3 months so I think we did actually work\naround it successfully).\n\nIt looks a bit like the CI system has cut out the high spikes (=\ntrashing the whole farm) and generally reduced the area of red.",
"msg_date": "Fri, 14 Apr 2023 17:36:12 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": true,
"msg_subject": "Build farm breakage over time"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nToo small value of work_mem cause memory overflow in parallel hash join \nbecause of too much number batches.\nThere is the plan:\n\nexplain SELECT * FROM solixschema.MIG_50GB_APR04_G1_H a join \nsolixschema.MIG_50GB_APR04_G2_H b on a.seq_pk = b.seq_pk join \nsolixschema.MIG_50GB_APR04_G3_H c on b.seq_p\nk = c.seq_pk join solixschema.MIG_50GB_APR04_G4_H d on c.seq_pk = \nd.seq_pk join solixschema.MIG_50GB_APR04_G5_H e on d.seq_pk = e.seq_pk;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Gather (cost=205209076.76..598109290.40 rows=121319744 width=63084)\n Workers Planned: 8\n -> Parallel Hash Join (cost=205208076.76..585976316.00 \nrows=15164968 width=63084)\n Hash Cond: (b.seq_pk = a.seq_pk)\n -> Parallel Hash Join (cost=55621683.59..251148173.17 \nrows=14936978 width=37851)\n Hash Cond: (b.seq_pk = c.seq_pk)\n -> Parallel Hash Join (cost=27797595.68..104604780.40 \nrows=15346430 width=25234)\n Hash Cond: (b.seq_pk = d.seq_pk)\n -> Parallel Seq Scan on mig_50gb_apr04_g2_h b \n(cost=0.00..4021793.90 rows=15783190 width=12617)\n -> Parallel Hash (cost=3911716.30..3911716.30 \nrows=15346430 width=12617)\n -> Parallel Seq Scan on mig_50gb_apr04_g4_h \nd (cost=0.00..3911716.30 rows=15346430 width=12617)\n -> Parallel Hash (cost=3913841.85..3913841.85 \nrows=15362085 width=12617)\n -> Parallel Seq Scan on mig_50gb_apr04_g3_h c \n(cost=0.00..3913841.85 rows=15362085 width=12617)\n -> Parallel Hash (cost=102628306.07..102628306.07 \nrows=15164968 width=25233)\n -> Parallel Hash Join (cost=27848049.61..102628306.07 \nrows=15164968 width=25233)\n Hash Cond: (a.seq_pk = e.seq_pk)\n -> Parallel Seq Scan on mig_50gb_apr04_g1_h a \n(cost=0.00..3877018.68 rows=15164968 width=12617)\n -> Parallel Hash (cost=3921510.05..3921510.05 \nrows=15382205 width=12616)\n -> Parallel Seq Scan on mig_50gb_apr04_g5_h \ne (cost=0.00..3921510.05 rows=15382205 width=12616)\n\n\nwork_mem is 4MB and leader + two parallel workers consumes about 10Gb each.\nThere are 262144 batches:\n\n(gdb) p *hjstate->hj_HashTable\n$2 = {nbuckets = 1024, log2_nbuckets = 10, nbuckets_original = 1024,\n nbuckets_optimal = 1024, log2_nbuckets_optimal = 10, buckets = {\n unshared = 0x7fa5d5211000, shared = 0x7fa5d5211000}, keepNulls = \nfalse,\n skewEnabled = false, skewBucket = 0x0, skewBucketLen = 0, \nnSkewBuckets = 0,\n skewBucketNums = 0x0, nbatch = 262144, curbatch = 86506,\n nbatch_original = 262144, nbatch_outstart = 262144, growEnabled = true,\n totalTuples = 122600000, partialTuples = 61136408, skewTuples = 0,\n innerBatchFile = 0x0, outerBatchFile = 0x0,\n outer_hashfunctions = 0x55ce086a3288, inner_hashfunctions = \n0x55ce086a32d8,\n hashStrict = 0x55ce086a3328, collations = 0x55ce086a3340, spaceUsed = 0,\n spaceAllowed = 8388608, spacePeak = 204800, spaceUsedSkew = 0,\n spaceAllowedSkew = 167772, hashCxt = 0x55ce086a3170,\n batchCxt = 0x55ce086a5180, chunks = 0x0, current_chunk = 0x7fa5d5283000,\n area = 0x55ce085b56d8, parallel_state = 0x7fa5ee993520,\n batches = 0x7fa5d3ff8048, current_chunk_shared = 1099512193024}\n\n\nThe biggest memory contexts are:\n\nExecutorState: 1362623568\n HashTableContext: 102760280\n HashBatchContext: 7968\n HashTableContext: 178257752\n HashBatchContext: 7968\n HashTableContext: 5306745728\n HashBatchContext: 7968\n\n\nThere is still some gap between size reported by memory context sump and \nactual size of backend.\nBut is seems to be obvious, that trying to fit in 
work_mem,\nsharedtuplestore creates so many batches that they consume much more\nmemory than work_mem.\n\n\n",
"msg_date": "Fri, 14 Apr 2023 13:59:27 +0300",
"msg_from": "Konstantin Knizhnik <[email protected]>",
"msg_from_op": true,
"msg_subject": "OOM in hash join"
},
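A rough back-of-the-envelope reading of those numbers (a standalone sketch; it only counts per-batch temporary-file buffers, assuming one BLCKSZ-sized BufFile buffer for the inner and one for the outer side of each batch, and ignores sharedtuplestore chunk and accessor overheads):

    #include <stdio.h>

    int
    main(void)
    {
        long nbatch = 262144;     /* nbatch from the hash table dump above */
        long blcksz = 8192;       /* BLCKSZ: buffer kept by each BufFile */
        long overhead = nbatch * 2 * blcksz;   /* inner + outer batch files */

        printf("batch-file buffers alone: ~%ld MB (work_mem is 4 MB)\n",
               overhead / (1024 * 1024));
        return 0;
    }
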
{
"msg_contents": "On Fri, 14 Apr 2023 at 12:59, Konstantin Knizhnik <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> Too small value of work_mem cause memory overflow in parallel hash join\n> because of too much number batches.\n> There is the plan:\n\n[...]\n\n> There is still some gap between size reported by memory context sump and\n> actual size of backend.\n> But is seems to be obvious, that trying to fit in work_mem\n> sharedtuplestore creates so much batches, that them consume much more\n> memory than work_mem.\n\nThe same issue [0] was reported a few weeks ago, with the same\ndiagnosis here [1]. I think it's being worked on over there.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/20230228190643.1e368315%40karst\n[1] https://www.postgresql.org/message-id/flat/3013398b-316c-638f-2a73-3783e8e2ef02%40enterprisedb.com#ceb9e14383122ade8b949b7479c6f7e2\n\n\n",
"msg_date": "Fri, 14 Apr 2023 13:21:05 +0200",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OOM in hash join"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 10:59 PM Konstantin Knizhnik <[email protected]> wrote:\n> Too small value of work_mem cause memory overflow in parallel hash join\n> because of too much number batches.\n\nYeah. Not only in parallel hash join, but in any hash join\n(admittedly parallel hash join has higher per-batch overheads; that is\nperhaps something we could improve). That's why we tried to invent an\nalternative strategy where you loop over batches N times, instead of\nmaking more batches, at some point:\n\nhttps://www.postgresql.org/message-id/flat/CA+hUKGKWWmf=WELLG=aUGbcugRaSQbtm0tKYiBut-B2rVKX63g@mail.gmail.com\n\nThat thread starts out talking about 'extreme skew' etc but the more\ngeneral problem is that, at some point, even with perfectly evenly\ndistributed keys, adding more batches requires more memory than you\ncan save by doing so. Sure, it's a problem that we don't account for\nthat memory properly, as complained about here:\n\nhttps://www.postgresql.org/message-id/flat/20190504003414.bulcbnge3rhwhcsh@development\n\nIf you did have perfect prediction of every byte you will need, maybe\nyou could say, oh, well, we just don't have enough memory for a hash\njoin, so let's do a sort/merge instead. But you can't, because (1)\nsome types aren't merge-joinable, and (2) in reality sometimes you've\nalready started the hash join due to imperfect stats so it's too late\nto change strategies.\n\n\n",
"msg_date": "Fri, 14 Apr 2023 23:27:55 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OOM in hash join"
},
{
"msg_contents": "On Fri, 14 Apr 2023 13:21:05 +0200\nMatthias van de Meent <[email protected]> wrote:\n\n> On Fri, 14 Apr 2023 at 12:59, Konstantin Knizhnik <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > Too small value of work_mem cause memory overflow in parallel hash join\n> > because of too much number batches.\n> > There is the plan: \n> \n> [...]\n> \n> > There is still some gap between size reported by memory context sump and\n> > actual size of backend.\n> > But is seems to be obvious, that trying to fit in work_mem\n> > sharedtuplestore creates so much batches, that them consume much more\n> > memory than work_mem.\n\nIndeed. The memory consumed by batches is not accounted and the consumption\nreported in explain analyze is wrong.\n\nWould you be able to test the latest patchset posted [1] ? This does not fix\nthe work_mem overflow, but it helps to keep the number of batches\nbalanced and acceptable. Any feedback, comment or review would be useful.\n\n[1] https://www.postgresql.org/message-id/flat/20230408020119.32a0841b%40karst#616c1f41fcc10e8f89d41e8e5693618c\n\nRegards,\n\n\n",
"msg_date": "Fri, 14 Apr 2023 13:43:21 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OOM in hash join"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 11:43 PM Jehan-Guillaume de Rorthais\n<[email protected]> wrote:\n> Would you be able to test the latest patchset posted [1] ? This does not fix\n> the work_mem overflow, but it helps to keep the number of batches\n> balanced and acceptable. Any feedback, comment or review would be useful.\n>\n> [1] https://www.postgresql.org/message-id/flat/20230408020119.32a0841b%40karst#616c1f41fcc10e8f89d41e8e5693618c\n\nHi Jehan-Guillaume. I hadn't paid attention to that thread before\nprobably due to timing and the subject and erm ETOOMANYTHREADS.\nThanks for all the work you've done to study this area and also review\nand summarise the previous writing/patches/ideas.\n\n\n",
"msg_date": "Sat, 15 Apr 2023 00:46:09 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OOM in hash join"
}
] |
[
{
"msg_contents": "Hi,\n\n\nWe have observed that running the same self JOIN query on postgres FDW\nsetup is returning different results with set enable_nestloop off & on. I\nam at today's latest commit:- 928e05ddfd4031c67e101c5e74dbb5c8ec4f9e23\n\nI created a local FDW setup. And ran this experiment on the same. Kindly\nrefer to the P.S section for details.\n\n|********************************************************************|\n*Below is the output difference along with query plan:-*\npostgres@71609=#set enable_nestloop=off;\nSET\npostgres@71609=#select * from pg_tbl_foreign tbl1 join pg_tbl_foreign tbl2\non tbl1.id1 < 5 and now() < '23-Feb-2020'::timestamp;\n id1 | id2 | id1 | id2\n-----+-----+-----+-----\n 1 | 10 | 1 | 10\n 2 | 20 | 1 | 10\n 3 | 30 | 1 | 10\n 1 | 10 | 2 | 20\n 2 | 20 | 2 | 20\n 3 | 30 | 2 | 20\n 1 | 10 | 3 | 30\n 2 | 20 | 3 | 30\n 3 | 30 | 3 | 30\n(9 rows)\n\npostgres@71609=#explain (analyze, verbose) select * from pg_tbl_foreign\ntbl1 join pg_tbl_foreign tbl2 on tbl1.id1 < 5 and now() <\n'23-Feb-2020'::timestamp;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------\n Foreign Scan (cost=100.00..49310.40 rows=2183680 width=16) (actual\ntime=0.514..0.515 rows=9 loops=1)\n Output: tbl1.id1, tbl1.id2, tbl2.id1, tbl2.id2\n Relations: (public.pg_tbl_foreign tbl1) INNER JOIN\n(public.pg_tbl_foreign tbl2)\n Remote SQL: SELECT r1.id1, r1.id2, r2.id1, r2.id2 FROM (public.pg_tbl r1\nINNER JOIN public.pg_tbl r2 ON (((r1.id1 < 5))))\n Planning Time: 0.139 ms\n Execution Time: 0.984 ms\n(6 rows)\n\npostgres@71609=#set enable_nestloop=on;\nSET\npostgres@71609=#select * from pg_tbl_foreign tbl1 join pg_tbl_foreign tbl2\non tbl1.id1 < 5 and now() < '23-Feb-2020'::timestamp;\n id1 | id2 | id1 | id2\n-----+-----+-----+-----\n(0 rows)\n\npostgres@71609=#explain (analyze, verbose) select * from pg_tbl_foreign\ntbl1 join pg_tbl_foreign tbl2 on tbl1.id1 < 5 and now() <\n'23-Feb-2020'::timestamp;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------\n Result (cost=200.00..27644.00 rows=2183680 width=16) (actual\ntime=0.003..0.004 rows=0 loops=1)\n Output: tbl1.id1, tbl1.id2, tbl2.id1, tbl2.id2\n One-Time Filter: (now() < '2020-02-23 00:00:00'::timestamp without time\nzone)\n -> Nested Loop (cost=200.00..27644.00 rows=2183680 width=16) (never\nexecuted)\n Output: tbl1.id1, tbl1.id2, tbl2.id1, tbl2.id2\n -> Foreign Scan on public.pg_tbl_foreign tbl2\n (cost=100.00..186.80 rows=2560 width=8) (never executed)\n Output: tbl2.id1, tbl2.id2\n Remote SQL: SELECT id1, id2 FROM public.pg_tbl\n -> Materialize (cost=100.00..163.32 rows=853 width=8) (never\nexecuted)\n Output: tbl1.id1, tbl1.id2\n -> Foreign Scan on public.pg_tbl_foreign tbl1\n (cost=100.00..159.06 rows=853 width=8) (never executed)\n Output: tbl1.id1, tbl1.id2\n Remote SQL: SELECT id1, id2 FROM public.pg_tbl WHERE\n((id1 < 5))\n Planning Time: 0.178 ms\n Execution Time: 0.292 ms\n(15 rows)\n\n|********************************************************************|\n\nI debugged this issue and was able to find a fix for the same. Kindly\nplease refer to the attached fix. With the fix I am able to resolve the\nissue. But I am not that confident whether this change would affect some\nother existing functionally but it has helped me resolve this result\ndifference in output.\n\n*What is the technical issue?*\nThe problem here is the use of extract_actual_clauses. 
Because of which the\nplan creation misses adding the second condition of AND i.e \"now() <\n'23-Feb-2020'::timestamp\" in the plan. Because it is not considered a\npseudo constant and extract_actual_clause is passed with false as the\nsecond parameter and it gets skipped from the list. As a result that\ncondition is never taken into consideration as either one-time filter\n(before or after) or part of SQL remote execution\n\n*Why do I think the fix is correct?*\nThe fix is simple, where we have created a new function similar to\nextract_actual_clause which just extracts all the conditions from the list\nwith no checks and returns the list to the caller. As a result all\nconditions would be taken into consideration in the query plan.\n\n*After my fix patch:-*\npostgres@78754=#set enable_nestloop=off;\nSET\npostgres@78754=#select * from pg_tbl_foreign tbl1 join pg_tbl_foreign tbl2\non tbl1.id1 < 5 and now() < '23-Feb-2020'::timestamp;\n id1 | id2 | id1 | id2\n-----+-----+-----+-----\n(0 rows)\n ^\npostgres@78754=#explain (analyze, verbose) select * from pg_tbl_foreign\ntbl1 join pg_tbl_foreign tbl2 on tbl1.id1 < 5 and now() <\n'23-Feb-2020'::timestamp;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------\n Foreign Scan (cost=100.00..49310.40 rows=2183680 width=16) (actual\ntime=0.652..0.652 rows=0 loops=1)\n Output: tbl1.id1, tbl1.id2, tbl2.id1, tbl2.id2\n Filter: (now() < '2020-02-23 00:00:00'::timestamp without time zone)\n Rows Removed by Filter: 9\n Relations: (public.pg_tbl_foreign tbl1) INNER JOIN\n(public.pg_tbl_foreign tbl2)\n Remote SQL: SELECT r1.id1, r1.id2, r2.id1, r2.id2 FROM (public.pg_tbl r1\nINNER JOIN public.pg_tbl r2 ON (((r1.id1 < 5))))\n Planning Time: 0.133 ms\n Execution Time: 1.127 ms\n(8 rows)\n\npostgres@78754=#set enable_nestloop=on;\nSET\npostgres@78754=#select * from pg_tbl_foreign tbl1 join pg_tbl_foreign tbl2\non tbl1.id1 < 5 and now() < '23-Feb-2020'::timestamp;\n id1 | id2 | id1 | id2\n-----+-----+-----+-----\n(0 rows)\n\npostgres@78754=#explain (analyze, verbose) select * from pg_tbl_foreign\ntbl1 join pg_tbl_foreign tbl2 on tbl1.id1 < 5 and now() <\n'23-Feb-2020'::timestamp;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------\n Result (cost=200.00..27644.00 rows=2183680 width=16) (actual\ntime=0.004..0.005 rows=0 loops=1)\n Output: tbl1.id1, tbl1.id2, tbl2.id1, tbl2.id2\n One-Time Filter: (now() < '2020-02-23 00:00:00'::timestamp without time\nzone)\n -> Nested Loop (cost=200.00..27644.00 rows=2183680 width=16) (never\nexecuted)\n Output: tbl1.id1, tbl1.id2, tbl2.id1, tbl2.id2\n -> Foreign Scan on public.pg_tbl_foreign tbl2\n (cost=100.00..186.80 rows=2560 width=8) (never executed)\n Output: tbl2.id1, tbl2.id2\n Remote SQL: SELECT id1, id2 FROM public.pg_tbl\n -> Materialize (cost=100.00..163.32 rows=853 width=8) (never\nexecuted)\n Output: tbl1.id1, tbl1.id2\n -> Foreign Scan on public.pg_tbl_foreign tbl1\n (cost=100.00..159.06 rows=853 width=8) (never executed)\n Output: tbl1.id1, tbl1.id2\n Remote SQL: SELECT id1, id2 FROM public.pg_tbl WHERE\n((id1 < 5))\n Planning Time: 0.134 ms\n Execution Time: 0.347 ms\n(15 rows)\n|********************************************************************|\n\nKindly please comment if I am in the correct direction or not?\n\n\nRegards,\nNishant Sharma.\nDeveloper at EnterpriseDB, Pune, India.\n\n\n\nP.S\nSteps that I used to create local postgres FDW 
setup ( followed link -\nhttps://www.postgresql.org/docs/current/postgres-fdw.html\n<https://www.postgresql.org/docs/current/postgres-fdw.html):-> )\n\n1) ./configure --prefix=/home/edb/POSTGRES_INSTALL/MASTER\n--with-pgport=9996 --with-openssl --with-libxml --with-zlib --with-tcl\n--with-perl --with-libxslt --with-ossp-uuid --with-ldap --with-pam\n--enable-nls --enable-debug --enable-depend --enable-dtrace --with-selinux\n--with-icu --enable-tap-tests --enable-cassert CFLAGS=\"-g -O0\"\n\n2) make\n\n3) make install\n\n4) cd contrib/postgres_fdw/\n\n5) make install\n\n6) Start the server\n\n7)\n[edb@localhost MASTER]$ bin/psql postgres edb;\npsql (16devel)\nType \"help\" for help.\n\npostgres@70613=#create database remote_db;\nCREATE DATABASE\npostgres@70613=#quit\n\n[edb@localhost MASTER]$ bin/psql remote_db edb;\npsql (16devel)\nType \"help\" for help.\n\nremote_db@70613=#CREATE USER fdw_user;\nCREATE ROLE\n\nremote_db@70613=#GRANT ALL ON SCHEMA public TO fdw_user;\nGRANT\nremote_db@70613=#quit\n\n[edb@localhost MASTER]$ bin/psql remote_db fdw_user;\npsql (16devel)\nType \"help\" for help.\n\nremote_db@70613=#create table pg_tbl(id1 int, id2 int);\nCREATE TABLE\nremote_db@70613=#insert into pg_tbl values(1, 10);\nINSERT 0 1\nremote_db@70613=#insert into pg_tbl values(2, 20);\nINSERT 0 1\nremote_db@70613=#insert into pg_tbl values(3, 30);\nINSERT 0 1\n\n8)\nNew terminal/Tab:-\n[edb@localhost MASTER]$ bin/psql postgres edb;\npostgres@71609=#create extension postgres_fdw;\nCREATE EXTENSION\npostgres@71609=#CREATE SERVER localhost_fdw FOREIGN DATA WRAPPER\npostgres_fdw OPTIONS (dbname 'remote_db', host 'localhost', port '9996');\nCREATE SERVER\npostgres@71609=#CREATE USER MAPPING for edb SERVER localhost_fdw OPTIONS\n(user 'fdw_user', password '');\nCREATE USER MAPPING\npostgres@71609=#GRANT ALL ON FOREIGN SERVER localhost_fdw TO edb;\nGRANT\npostgres@71609=#CREATE FOREIGN TABLE pg_tbl_foreign(id1 int, id2 int)\nSERVER localhost_fdw OPTIONS (schema_name 'public', table_name 'pg_tbl');\nCREATE FOREIGN TABLE\npostgres@71609=#select * from pg_tbl_foreign;\n id1 | id2\n-----+-----\n 1 | 10\n 2 | 20\n 3 | 30\n(3 rows)",
"msg_date": "Fri, 14 Apr 2023 17:08:39 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Nishant,\n\nOn Fri, Apr 14, 2023 at 8:39 PM Nishant Sharma\n<[email protected]> wrote:\n> I debugged this issue and was able to find a fix for the same. Kindly please refer to the attached fix. With the fix I am able to resolve the issue.\n\nThanks for the report and patch!\n\n> What is the technical issue?\n> The problem here is the use of extract_actual_clauses. Because of which the plan creation misses adding the second condition of AND i.e \"now() < '23-Feb-2020'::timestamp\" in the plan. Because it is not considered a pseudo constant and extract_actual_clause is passed with false as the second parameter and it gets skipped from the list. As a result that condition is never taken into consideration as either one-time filter (before or after) or part of SQL remote execution\n>\n> Why do I think the fix is correct?\n> The fix is simple, where we have created a new function similar to extract_actual_clause which just extracts all the conditions from the list with no checks and returns the list to the caller. As a result all conditions would be taken into consideration in the query plan.\n\nI think that the root cause for this issue would be in the\ncreate_scan_plan handling of pseudoconstant quals when creating a\nforeign-join (or custom-join) plan. Anyway, I will look at your patch\nclosely, first.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 14 Apr 2023 21:50:55 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Thanks Etsuro for your response!\n\nOne small typo correction in my answer to \"What is the technical issue?\"\n\"it is *not* considered a pseudo constant\" --> \"it is considered a pseudo\nconstant\"\n\n\nRegards,\nNishant.\n\nOn Fri, Apr 14, 2023 at 6:21 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> Hi Nishant,\n>\n> On Fri, Apr 14, 2023 at 8:39 PM Nishant Sharma\n> <[email protected]> wrote:\n> > I debugged this issue and was able to find a fix for the same. Kindly\n> please refer to the attached fix. With the fix I am able to resolve the\n> issue.\n>\n> Thanks for the report and patch!\n>\n> > What is the technical issue?\n> > The problem here is the use of extract_actual_clauses. Because of which\n> the plan creation misses adding the second condition of AND i.e \"now() <\n> '23-Feb-2020'::timestamp\" in the plan. Because it is not considered a\n> pseudo constant and extract_actual_clause is passed with false as the\n> second parameter and it gets skipped from the list. As a result that\n> condition is never taken into consideration as either one-time filter\n> (before or after) or part of SQL remote execution\n> >\n> > Why do I think the fix is correct?\n> > The fix is simple, where we have created a new function similar to\n> extract_actual_clause which just extracts all the conditions from the list\n> with no checks and returns the list to the caller. As a result all\n> conditions would be taken into consideration in the query plan.\n>\n> I think that the root cause for this issue would be in the\n> create_scan_plan handling of pseudoconstant quals when creating a\n> foreign-join (or custom-join) plan. Anyway, I will look at your patch\n> closely, first.\n>\n> Best regards,\n> Etsuro Fujita\n>\n\nThanks Etsuro for your response!One small typo correction in my answer to \"What is the technical issue?\"\"it is not considered a pseudo constant\" --> \"it is considered a pseudo constant\"Regards,Nishant.On Fri, Apr 14, 2023 at 6:21 PM Etsuro Fujita <[email protected]> wrote:Hi Nishant,\n\nOn Fri, Apr 14, 2023 at 8:39 PM Nishant Sharma\n<[email protected]> wrote:\n> I debugged this issue and was able to find a fix for the same. Kindly please refer to the attached fix. With the fix I am able to resolve the issue.\n\nThanks for the report and patch!\n\n> What is the technical issue?\n> The problem here is the use of extract_actual_clauses. Because of which the plan creation misses adding the second condition of AND i.e \"now() < '23-Feb-2020'::timestamp\" in the plan. Because it is not considered a pseudo constant and extract_actual_clause is passed with false as the second parameter and it gets skipped from the list. As a result that condition is never taken into consideration as either one-time filter (before or after) or part of SQL remote execution\n>\n> Why do I think the fix is correct?\n> The fix is simple, where we have created a new function similar to extract_actual_clause which just extracts all the conditions from the list with no checks and returns the list to the caller. As a result all conditions would be taken into consideration in the query plan.\n\nI think that the root cause for this issue would be in the\ncreate_scan_plan handling of pseudoconstant quals when creating a\nforeign-join (or custom-join) plan. Anyway, I will look at your patch\nclosely, first.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Mon, 17 Apr 2023 11:00:05 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Etsuro Fujita,\n\n\nAny updates? -- did you get a chance to look into this?\n\n\nRegards,\nNishant.\n\nOn Mon, Apr 17, 2023 at 11:00 AM Nishant Sharma <\[email protected]> wrote:\n\n> Thanks Etsuro for your response!\n>\n> One small typo correction in my answer to \"What is the technical issue?\"\n> \"it is *not* considered a pseudo constant\" --> \"it is considered a pseudo\n> constant\"\n>\n>\n> Regards,\n> Nishant.\n>\n> On Fri, Apr 14, 2023 at 6:21 PM Etsuro Fujita <[email protected]>\n> wrote:\n>\n>> Hi Nishant,\n>>\n>> On Fri, Apr 14, 2023 at 8:39 PM Nishant Sharma\n>> <[email protected]> wrote:\n>> > I debugged this issue and was able to find a fix for the same. Kindly\n>> please refer to the attached fix. With the fix I am able to resolve the\n>> issue.\n>>\n>> Thanks for the report and patch!\n>>\n>> > What is the technical issue?\n>> > The problem here is the use of extract_actual_clauses. Because of which\n>> the plan creation misses adding the second condition of AND i.e \"now() <\n>> '23-Feb-2020'::timestamp\" in the plan. Because it is not considered a\n>> pseudo constant and extract_actual_clause is passed with false as the\n>> second parameter and it gets skipped from the list. As a result that\n>> condition is never taken into consideration as either one-time filter\n>> (before or after) or part of SQL remote execution\n>> >\n>> > Why do I think the fix is correct?\n>> > The fix is simple, where we have created a new function similar to\n>> extract_actual_clause which just extracts all the conditions from the list\n>> with no checks and returns the list to the caller. As a result all\n>> conditions would be taken into consideration in the query plan.\n>>\n>> I think that the root cause for this issue would be in the\n>> create_scan_plan handling of pseudoconstant quals when creating a\n>> foreign-join (or custom-join) plan. Anyway, I will look at your patch\n>> closely, first.\n>>\n>> Best regards,\n>> Etsuro Fujita\n>>\n>\n\nHi Etsuro Fujita,Any updates? -- did you get a chance to look into this?Regards,Nishant.On Mon, Apr 17, 2023 at 11:00 AM Nishant Sharma <[email protected]> wrote:Thanks Etsuro for your response!One small typo correction in my answer to \"What is the technical issue?\"\"it is not considered a pseudo constant\" --> \"it is considered a pseudo constant\"Regards,Nishant.On Fri, Apr 14, 2023 at 6:21 PM Etsuro Fujita <[email protected]> wrote:Hi Nishant,\n\nOn Fri, Apr 14, 2023 at 8:39 PM Nishant Sharma\n<[email protected]> wrote:\n> I debugged this issue and was able to find a fix for the same. Kindly please refer to the attached fix. With the fix I am able to resolve the issue.\n\nThanks for the report and patch!\n\n> What is the technical issue?\n> The problem here is the use of extract_actual_clauses. Because of which the plan creation misses adding the second condition of AND i.e \"now() < '23-Feb-2020'::timestamp\" in the plan. Because it is not considered a pseudo constant and extract_actual_clause is passed with false as the second parameter and it gets skipped from the list. As a result that condition is never taken into consideration as either one-time filter (before or after) or part of SQL remote execution\n>\n> Why do I think the fix is correct?\n> The fix is simple, where we have created a new function similar to extract_actual_clause which just extracts all the conditions from the list with no checks and returns the list to the caller. 
As a result all conditions would be taken into consideration in the query plan.\n\nI think that the root cause for this issue would be in the\ncreate_scan_plan handling of pseudoconstant quals when creating a\nforeign-join (or custom-join) plan. Anyway, I will look at your patch\nclosely, first.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Mon, 24 Apr 2023 12:01:01 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 3:31 PM Nishant Sharma\n<[email protected]> wrote:\n> Any updates? -- did you get a chance to look into this?\n\nSorry, I have not looked into this yet, because I have been busy with\nsome other work recently. I plan to do so early next week.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 24 Apr 2023 19:10:39 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 8:51 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> I think that the root cause for this issue would be in the\n> create_scan_plan handling of pseudoconstant quals when creating a\n> foreign-join (or custom-join) plan.\n\n\nYes exactly. In create_scan_plan, we are supposed to extract all the\npseudoconstant clauses and use them as one-time quals in a gating Result\nnode. Currently we check against rel->baserestrictinfo and ppi_clauses\nfor the pseudoconstant clauses. But for scans of foreign joins, we do\nnot have any restriction clauses in these places and thus the gating\nResult node as well as the pseudoconstant clauses would just be lost.\n\nI looked at Nishant's patch. IIUC it treats the pseudoconstant clauses\nas local conditions. While it can fix the wrong results issue, I think\nmaybe it's better to still treat the pseudoconstant clauses as one-time\nquals in a gating node. So I wonder if we can store the restriction\nclauses for foreign joins in ForeignPath, just as what we do for normal\nJoinPath, and then check against them for pseudoconstant clauses in\ncreate_scan_plan, something like attached.\n\nBTW, while going through the codes I noticed one place in\nadd_foreign_final_paths that uses NULL for List *. I changed it to NIL.\n\nThanks\nRichard",
"msg_date": "Tue, 25 Apr 2023 18:06:01 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
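For reference, a sketch of the shape being described (assumed field and variable names; the actual attached patch may differ): in create_scan_plan, the pseudoconstant quals for a pushed-down foreign join would be pulled out of the restrictinfo list stashed in the ForeignPath, so they end up as one-time quals on a gating Result node instead of being dropped:

    /* in create_scan_plan(), after the scan/join plan node has been built */
    List   *gating_clauses;

    if (best_path->pathtype == T_ForeignScan && IS_JOIN_REL(rel))
        gating_clauses = get_gating_quals(root,
                                          ((ForeignPath *) best_path)->joinrestrictinfo);
    else
        gating_clauses = get_gating_quals(root, scan_clauses);

    if (gating_clauses)
        return create_gating_plan(root, best_path, plan, gating_clauses);

Here joinrestrictinfo is the restriction-clause list the message above proposes to store in ForeignPath; get_gating_quals and create_gating_plan are the existing createplan.c helpers for building the gating Result node.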
{
"msg_contents": "+1 for fixing this in the backend code rather than FDW code.\n\nThanks, Richard, for working on this. The patch looks good to me at\na glance.\n\nOn Tue, Apr 25, 2023 at 3:36 PM Richard Guo <[email protected]> wrote:\n\n>\n> On Fri, Apr 14, 2023 at 8:51 PM Etsuro Fujita <[email protected]>\n> wrote:\n>\n>> I think that the root cause for this issue would be in the\n>> create_scan_plan handling of pseudoconstant quals when creating a\n>> foreign-join (or custom-join) plan.\n>\n>\n> Yes exactly. In create_scan_plan, we are supposed to extract all the\n> pseudoconstant clauses and use them as one-time quals in a gating Result\n> node. Currently we check against rel->baserestrictinfo and ppi_clauses\n> for the pseudoconstant clauses. But for scans of foreign joins, we do\n> not have any restriction clauses in these places and thus the gating\n> Result node as well as the pseudoconstant clauses would just be lost.\n>\n> I looked at Nishant's patch. IIUC it treats the pseudoconstant clauses\n> as local conditions. While it can fix the wrong results issue, I think\n> maybe it's better to still treat the pseudoconstant clauses as one-time\n> quals in a gating node. So I wonder if we can store the restriction\n> clauses for foreign joins in ForeignPath, just as what we do for normal\n> JoinPath, and then check against them for pseudoconstant clauses in\n> create_scan_plan, something like attached.\n>\n> BTW, while going through the codes I noticed one place in\n> add_foreign_final_paths that uses NULL for List *. I changed it to NIL.\n>\n> Thanks\n> Richard\n>\n\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com\n\n+1 for fixing this in the backend code rather than FDW code.Thanks, Richard, for working on this. The patch looks good to me at a glance.On Tue, Apr 25, 2023 at 3:36 PM Richard Guo <[email protected]> wrote:On Fri, Apr 14, 2023 at 8:51 PM Etsuro Fujita <[email protected]> wrote:\nI think that the root cause for this issue would be in the\ncreate_scan_plan handling of pseudoconstant quals when creating a\nforeign-join (or custom-join) plan.Yes exactly. In create_scan_plan, we are supposed to extract all thepseudoconstant clauses and use them as one-time quals in a gating Resultnode. Currently we check against rel->baserestrictinfo and ppi_clausesfor the pseudoconstant clauses. But for scans of foreign joins, we donot have any restriction clauses in these places and thus the gatingResult node as well as the pseudoconstant clauses would just be lost.I looked at Nishant's patch. IIUC it treats the pseudoconstant clausesas local conditions. While it can fix the wrong results issue, I thinkmaybe it's better to still treat the pseudoconstant clauses as one-timequals in a gating node. So I wonder if we can store the restrictionclauses for foreign joins in ForeignPath, just as what we do for normalJoinPath, and then check against them for pseudoconstant clauses increate_scan_plan, something like attached.BTW, while going through the codes I noticed one place inadd_foreign_final_paths that uses NULL for List *. I changed it to NIL.ThanksRichard\n-- --Thanks & Regards, Suraj kharage, edbpostgres.com",
"msg_date": "Fri, 2 Jun 2023 09:00:13 +0530",
"msg_from": "Suraj Kharage <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "I also agree that Richard's patch is better. As it fixes the issue at the\nbackend and does not treat pseudoconstant as local condition.\n\nI have tested Richard's patch and observe that it is resolving the problem.\nPatch looks good to me as well.\n\n*I only had a minor comment on below change:-*\n\n\n\n\n\n*- gating_clauses = get_gating_quals(root, scan_clauses);+ if\n(best_path->pathtype == T_ForeignScan && IS_JOIN_REL(rel))+\ngating_clauses = get_gating_quals(root, ((ForeignPath *)\nbest_path)->joinrestrictinfo);+ else+ gating_clauses =\nget_gating_quals(root, scan_clauses);*\n\n>> Instead of using 'if' and creating a special case here can't we do\nsomething in the above switch?\n\n\nRegards,\nNishant.\n\n\nP.S\nI tried something quickly but I am seeing a crash:-\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n* case T_IndexOnlyScan: scan_clauses\n= castNode(IndexPath, best_path)->indexinfo->indrestrictinfo;\n break;+ case T_ForeignScan:+\n/*+ * Note that for scans of foreign joins, we do\nnot have restriction clauses+ * stored in\nbaserestrictinfo and we do not consider parameterization.+\n * Instead we need to check against joinrestrictinfo stored in\nForeignPath.+ */+ if\n(IS_JOIN_REL(rel))+ scan_clauses =\n((ForeignPath *) best_path)->joinrestrictinfo;+ else+\n scan_clauses = rel->baserestrictinfo;+\n break; default:\nscan_clauses = rel->baserestrictinfo; break;*\n\nOn Fri, Jun 2, 2023 at 9:00 AM Suraj Kharage <[email protected]>\nwrote:\n\n> +1 for fixing this in the backend code rather than FDW code.\n>\n> Thanks, Richard, for working on this. The patch looks good to me at\n> a glance.\n>\n> On Tue, Apr 25, 2023 at 3:36 PM Richard Guo <[email protected]>\n> wrote:\n>\n>>\n>> On Fri, Apr 14, 2023 at 8:51 PM Etsuro Fujita <[email protected]>\n>> wrote:\n>>\n>>> I think that the root cause for this issue would be in the\n>>> create_scan_plan handling of pseudoconstant quals when creating a\n>>> foreign-join (or custom-join) plan.\n>>\n>>\n>> Yes exactly. In create_scan_plan, we are supposed to extract all the\n>> pseudoconstant clauses and use them as one-time quals in a gating Result\n>> node. Currently we check against rel->baserestrictinfo and ppi_clauses\n>> for the pseudoconstant clauses. But for scans of foreign joins, we do\n>> not have any restriction clauses in these places and thus the gating\n>> Result node as well as the pseudoconstant clauses would just be lost.\n>>\n>> I looked at Nishant's patch. IIUC it treats the pseudoconstant clauses\n>> as local conditions. While it can fix the wrong results issue, I think\n>> maybe it's better to still treat the pseudoconstant clauses as one-time\n>> quals in a gating node. So I wonder if we can store the restriction\n>> clauses for foreign joins in ForeignPath, just as what we do for normal\n>> JoinPath, and then check against them for pseudoconstant clauses in\n>> create_scan_plan, something like attached.\n>>\n>> BTW, while going through the codes I noticed one place in\n>> add_foreign_final_paths that uses NULL for List *. I changed it to NIL.\n>>\n>> Thanks\n>> Richard\n>>\n>\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n>\n>\n>\n> edbpostgres.com\n>\n\nI also agree that Richard's patch is better. As it fixes the issue at the backend and does not treat pseudoconstant as local condition.I have tested Richard's patch and observe that it is resolving the problem. 
Patch looks good to me as well.I only had a minor comment on below change:-- gating_clauses = get_gating_quals(root, scan_clauses);+ if (best_path->pathtype == T_ForeignScan && IS_JOIN_REL(rel))+ gating_clauses = get_gating_quals(root, ((ForeignPath *) best_path)->joinrestrictinfo);+ else+ gating_clauses = get_gating_quals(root, scan_clauses);>> Instead of using 'if' and creating a special case here can't we do something in the above switch?Regards,Nishant.P.SI tried something quickly but I am seeing a crash:- case T_IndexOnlyScan: scan_clauses = castNode(IndexPath, best_path)->indexinfo->indrestrictinfo; break;+ case T_ForeignScan:+ /*+ * Note that for scans of foreign joins, we do not have restriction clauses+ * stored in baserestrictinfo and we do not consider parameterization.+ * Instead we need to check against joinrestrictinfo stored in ForeignPath.+ */+ if (IS_JOIN_REL(rel))+ scan_clauses = ((ForeignPath *) best_path)->joinrestrictinfo;+ else+ scan_clauses = rel->baserestrictinfo;+ break; default: scan_clauses = rel->baserestrictinfo; break;On Fri, Jun 2, 2023 at 9:00 AM Suraj Kharage <[email protected]> wrote:+1 for fixing this in the backend code rather than FDW code.Thanks, Richard, for working on this. The patch looks good to me at a glance.On Tue, Apr 25, 2023 at 3:36 PM Richard Guo <[email protected]> wrote:On Fri, Apr 14, 2023 at 8:51 PM Etsuro Fujita <[email protected]> wrote:\nI think that the root cause for this issue would be in the\ncreate_scan_plan handling of pseudoconstant quals when creating a\nforeign-join (or custom-join) plan.Yes exactly. In create_scan_plan, we are supposed to extract all thepseudoconstant clauses and use them as one-time quals in a gating Resultnode. Currently we check against rel->baserestrictinfo and ppi_clausesfor the pseudoconstant clauses. But for scans of foreign joins, we donot have any restriction clauses in these places and thus the gatingResult node as well as the pseudoconstant clauses would just be lost.I looked at Nishant's patch. IIUC it treats the pseudoconstant clausesas local conditions. While it can fix the wrong results issue, I thinkmaybe it's better to still treat the pseudoconstant clauses as one-timequals in a gating node. So I wonder if we can store the restrictionclauses for foreign joins in ForeignPath, just as what we do for normalJoinPath, and then check against them for pseudoconstant clauses increate_scan_plan, something like attached.BTW, while going through the codes I noticed one place inadd_foreign_final_paths that uses NULL for List *. I changed it to NIL.ThanksRichard\n-- --Thanks & Regards, Suraj kharage, edbpostgres.com",
"msg_date": "Fri, 2 Jun 2023 18:01:09 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Fri, Jun 2, 2023 at 11:30 AM Suraj Kharage <\[email protected]> wrote:\n\n> +1 for fixing this in the backend code rather than FDW code.\n>\n> Thanks, Richard, for working on this. The patch looks good to me at\n> a glance.\n>\n\nThank you Suraj for the review!\n\nThanks\nRichard\n\nOn Fri, Jun 2, 2023 at 11:30 AM Suraj Kharage <[email protected]> wrote:+1 for fixing this in the backend code rather than FDW code.Thanks, Richard, for working on this. The patch looks good to me at a glance.Thank you Suraj for the review!ThanksRichard",
"msg_date": "Mon, 5 Jun 2023 11:06:11 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Fri, Jun 2, 2023 at 8:31 PM Nishant Sharma <\[email protected]> wrote:\n\n> *I only had a minor comment on below change:-*\n>\n>\n>\n>\n>\n> *- gating_clauses = get_gating_quals(root, scan_clauses);+ if\n> (best_path->pathtype == T_ForeignScan && IS_JOIN_REL(rel))+\n> gating_clauses = get_gating_quals(root, ((ForeignPath *)\n> best_path)->joinrestrictinfo);+ else+ gating_clauses =\n> get_gating_quals(root, scan_clauses);*\n>\n> Instead of using 'if' and creating a special case here can't we do\n> something in the above switch?\n>\n\nI thought about that too. IIRC I did not do it in that way because\npostgresGetForeignPlan expects that there is no scan_clauses for a join\nrel. So doing that would trigger the Assert there.\n\n /*\n * For a join rel, baserestrictinfo is NIL and we are not considering\n * parameterization right now, so there should be no scan_clauses for\n * a joinrel or an upper rel either.\n */\n Assert(!scan_clauses);\n\nThanks\nRichard\n\nOn Fri, Jun 2, 2023 at 8:31 PM Nishant Sharma <[email protected]> wrote:I only had a minor comment on below change:-- gating_clauses = get_gating_quals(root, scan_clauses);+ if (best_path->pathtype == T_ForeignScan && IS_JOIN_REL(rel))+ gating_clauses = get_gating_quals(root, ((ForeignPath *) best_path)->joinrestrictinfo);+ else+ gating_clauses = get_gating_quals(root, scan_clauses);Instead of using 'if' and creating a special case here can't we do something in the above switch?I thought about that too. IIRC I did not do it in that way becausepostgresGetForeignPlan expects that there is no scan_clauses for a joinrel. So doing that would trigger the Assert there. /* * For a join rel, baserestrictinfo is NIL and we are not considering * parameterization right now, so there should be no scan_clauses for * a joinrel or an upper rel either. */ Assert(!scan_clauses);ThanksRichard",
"msg_date": "Mon, 5 Jun 2023 11:09:12 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jun 2, 2023 at 9:31 PM Nishant Sharma\n<[email protected]> wrote:\n> I also agree that Richard's patch is better. As it fixes the issue at the backend and does not treat pseudoconstant as local condition.\n>\n> I have tested Richard's patch and observe that it is resolving the problem. Patch looks good to me as well.\n\nIf the patch is intended for HEAD only, I also think it goes in the\nright direction. But if it is intended for back branches as well, I\ndo not think so, because it would cause ABI breakage due to changes\nmade to the ForeignPath struct and the create_foreign_join_path() API.\n(For the former, I think we could avoid doing so by adding the new\nmember at the end of the struct, not in the middle, though.)\n\nTo avoid this issue, I am wondering if we should modify\nadd_paths_to_joinrel() in back branches so that it just disallows the\nFDW to consider pushing down joins when the restrictlist has\npseudoconstant clauses. Attached is a patch for that.\n\nMy apologies for not reviewing your patch and the long long delay.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Mon, 5 Jun 2023 22:19:31 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
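The patch attached to the message above is not reproduced in this archive, so purely as an illustration: the back-branch idea is to test the join's restrictlist for pseudoconstant clauses up front in add_paths_to_joinrel() and, if any are found, simply not offer the join for pushdown. A minimal sketch of such a helper, using the function name mentioned later in the thread and not claiming to match the committed hunk:

static bool
has_pseudoconstant_clauses(PlannerInfo *root, List *restrictlist)
{
    ListCell   *lc;

    /* No need to look if we know there are no pseudoconstants */
    if (!root->hasPseudoConstantQuals)
        return false;

    /* See if the restrictlist contains any pseudoconstant RestrictInfos */
    foreach(lc, restrictlist)
    {
        RestrictInfo *rinfo = lfirst_node(RestrictInfo, lc);

        if (rinfo->pseudoconstant)
            return true;
    }
    return false;
}

add_paths_to_joinrel() can then consult this before handing the join to the FDW (and, per the follow-up below, to custom-scan providers).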
{
"msg_contents": "On Mon, Jun 5, 2023 at 9:19 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> If the patch is intended for HEAD only, I also think it goes in the\n> right direction. But if it is intended for back branches as well, I\n> do not think so, because it would cause ABI breakage due to changes\n> made to the ForeignPath struct and the create_foreign_join_path() API.\n> (For the former, I think we could avoid doing so by adding the new\n> member at the end of the struct, not in the middle, though.)\n\n\nThanks for pointing this out. You're right. The patch has backport\nissue because of the ABI breakage. So it can only be applied on HEAD.\n\n\n> To avoid this issue, I am wondering if we should modify\n> add_paths_to_joinrel() in back branches so that it just disallows the\n> FDW to consider pushing down joins when the restrictlist has\n> pseudoconstant clauses. Attached is a patch for that.\n\n\nI think we can do that in back branches. But I'm a little concerned\nthat we'd miss a better plan if FDW cannot push down joins in such\ncases. I may be worrying over nothing though if it's not common that\nthe restrictlist has pseudoconstant clauses.\n\nThanks\nRichard\n\nOn Mon, Jun 5, 2023 at 9:19 PM Etsuro Fujita <[email protected]> wrote:\nIf the patch is intended for HEAD only, I also think it goes in the\nright direction. But if it is intended for back branches as well, I\ndo not think so, because it would cause ABI breakage due to changes\nmade to the ForeignPath struct and the create_foreign_join_path() API.\n(For the former, I think we could avoid doing so by adding the new\nmember at the end of the struct, not in the middle, though.)Thanks for pointing this out. You're right. The patch has backportissue because of the ABI breakage. So it can only be applied on HEAD. \nTo avoid this issue, I am wondering if we should modify\nadd_paths_to_joinrel() in back branches so that it just disallows the\nFDW to consider pushing down joins when the restrictlist has\npseudoconstant clauses. Attached is a patch for that.I think we can do that in back branches. But I'm a little concernedthat we'd miss a better plan if FDW cannot push down joins in suchcases. I may be worrying over nothing though if it's not common thatthe restrictlist has pseudoconstant clauses.ThanksRichard",
"msg_date": "Tue, 6 Jun 2023 11:20:26 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi,\n\n\nEtsuro's patch is also showing the correct output for \"set\nenable_nestloop=off\". Looks good to me for back branches due to backport\nissues.\n\nBut below are a few observations for the same:-\n1) I looked into the query plan for both \"set enable_nestloop\" on & off\ncase and observe that they are the same. That is, what we see with \"set\nenable_nestloop=on\".\n2) In back branches for \"set enable_nestloop\" on & off value, at least this\ntype of query execution won't make any difference. No comparison of plans\nto be selected based on total cost of two plans old (Nested Loop with\nForeign Scans) & new (Only Foreign Scan) will be done, because we are\navoiding the call to \"postgresGetForeignJoinPaths()\" up front when we have\npseudo constants.\n\n\nRegards,\nNishant.\n\nOn Tue, Jun 6, 2023 at 8:50 AM Richard Guo <[email protected]> wrote:\n\n>\n> On Mon, Jun 5, 2023 at 9:19 PM Etsuro Fujita <[email protected]>\n> wrote:\n>\n>> If the patch is intended for HEAD only, I also think it goes in the\n>> right direction. But if it is intended for back branches as well, I\n>> do not think so, because it would cause ABI breakage due to changes\n>> made to the ForeignPath struct and the create_foreign_join_path() API.\n>> (For the former, I think we could avoid doing so by adding the new\n>> member at the end of the struct, not in the middle, though.)\n>\n>\n> Thanks for pointing this out. You're right. The patch has backport\n> issue because of the ABI breakage. So it can only be applied on HEAD.\n>\n>\n>> To avoid this issue, I am wondering if we should modify\n>> add_paths_to_joinrel() in back branches so that it just disallows the\n>> FDW to consider pushing down joins when the restrictlist has\n>> pseudoconstant clauses. Attached is a patch for that.\n>\n>\n> I think we can do that in back branches. But I'm a little concerned\n> that we'd miss a better plan if FDW cannot push down joins in such\n> cases. I may be worrying over nothing though if it's not common that\n> the restrictlist has pseudoconstant clauses.\n>\n> Thanks\n> Richard\n>\n\nHi,Etsuro's patch is also showing the correct output for \"set enable_nestloop=off\". Looks good to me for back branches due to backport issues.But below are a few observations for the same:-1) I looked into the query plan for both \"set enable_nestloop\" on & off case and observe that they are the same. That is, what we see with \"set enable_nestloop=on\".2) In back branches for \"set enable_nestloop\" on & off value, at least this type of query execution won't make any difference. No comparison of plans to be selected based on total cost of two plans old (Nested Loop with Foreign Scans) & new (Only Foreign Scan) will be done, because we are avoiding the call to \"postgresGetForeignJoinPaths()\" up front when we have pseudo constants.Regards,Nishant.On Tue, Jun 6, 2023 at 8:50 AM Richard Guo <[email protected]> wrote:On Mon, Jun 5, 2023 at 9:19 PM Etsuro Fujita <[email protected]> wrote:\nIf the patch is intended for HEAD only, I also think it goes in the\nright direction. But if it is intended for back branches as well, I\ndo not think so, because it would cause ABI breakage due to changes\nmade to the ForeignPath struct and the create_foreign_join_path() API.\n(For the former, I think we could avoid doing so by adding the new\nmember at the end of the struct, not in the middle, though.)Thanks for pointing this out. You're right. The patch has backportissue because of the ABI breakage. So it can only be applied on HEAD. 
\nTo avoid this issue, I am wondering if we should modify\nadd_paths_to_joinrel() in back branches so that it just disallows the\nFDW to consider pushing down joins when the restrictlist has\npseudoconstant clauses. Attached is a patch for that.I think we can do that in back branches. But I'm a little concernedthat we'd miss a better plan if FDW cannot push down joins in suchcases. I may be worrying over nothing though if it's not common thatthe restrictlist has pseudoconstant clauses.ThanksRichard",
"msg_date": "Wed, 7 Jun 2023 15:58:34 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Richard,\n\nOn Tue, Jun 6, 2023 at 12:20 PM Richard Guo <[email protected]> wrote:\n> On Mon, Jun 5, 2023 at 9:19 PM Etsuro Fujita <[email protected]> wrote:\n>> To avoid this issue, I am wondering if we should modify\n>> add_paths_to_joinrel() in back branches so that it just disallows the\n>> FDW to consider pushing down joins when the restrictlist has\n>> pseudoconstant clauses. Attached is a patch for that.\n\n> I think we can do that in back branches. But I'm a little concerned\n> that we'd miss a better plan if FDW cannot push down joins in such\n> cases. I may be worrying over nothing though if it's not common that\n> the restrictlist has pseudoconstant clauses.\n\nYeah, it is unfortunate that we would not get better plans. Given\nthat it took quite a long time to find this issue, I suppose that\nusers seldom do foreign joins with pseudoconstant clauses, though.\n\nAnyway thanks for working on this, Richard!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 8 Jun 2023 19:30:52 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jun 7, 2023 at 7:28 PM Nishant Sharma\n<[email protected]> wrote:\n> Etsuro's patch is also showing the correct output for \"set enable_nestloop=off\". Looks good to me for back branches due to backport issues.\n>\n> But below are a few observations for the same:-\n> 1) I looked into the query plan for both \"set enable_nestloop\" on & off case and observe that they are the same. That is, what we see with \"set enable_nestloop=on\".\n> 2) In back branches for \"set enable_nestloop\" on & off value, at least this type of query execution won't make any difference. No comparison of plans to be selected based on total cost of two plans old (Nested Loop with Foreign Scans) & new (Only Foreign Scan) will be done, because we are avoiding the call to \"postgresGetForeignJoinPaths()\" up front when we have pseudo constants.\n\nThanks for looking!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 8 Jun 2023 19:36:48 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 10:19 PM Etsuro Fujita <[email protected]> wrote:\n> To avoid this issue, I am wondering if we should modify\n> add_paths_to_joinrel() in back branches so that it just disallows the\n> FDW to consider pushing down joins when the restrictlist has\n> pseudoconstant clauses. Attached is a patch for that.\n\nI think that custom scans have the same issue, so I modified the patch\nfurther so that it also disallows custom-scan providers to consider\njoin pushdown in add_paths_to_joinrel() if necessary. Attached is a\nnew version of the patch.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 14 Jun 2023 15:49:37 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Looks good to me. Tested on master and it works.\nNew patch used a bool flag to avoid calls for both FDW and custom hook's\ncall. And a slight change in comment of \"has_pseudoconstant_clauses\"\nfunction.\n\nRegards,\nNishant.\n\nOn Wed, Jun 14, 2023 at 12:19 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> On Mon, Jun 5, 2023 at 10:19 PM Etsuro Fujita <[email protected]>\n> wrote:\n> > To avoid this issue, I am wondering if we should modify\n> > add_paths_to_joinrel() in back branches so that it just disallows the\n> > FDW to consider pushing down joins when the restrictlist has\n> > pseudoconstant clauses. Attached is a patch for that.\n>\n> I think that custom scans have the same issue, so I modified the patch\n> further so that it also disallows custom-scan providers to consider\n> join pushdown in add_paths_to_joinrel() if necessary. Attached is a\n> new version of the patch.\n>\n> Best regards,\n> Etsuro Fujita\n>\n\nLooks good to me. Tested on master and it works.New patch used a bool flag to avoid calls for both FDW and custom hook's call. And a slight change in comment of \"has_pseudoconstant_clauses\" function.Regards,Nishant.On Wed, Jun 14, 2023 at 12:19 PM Etsuro Fujita <[email protected]> wrote:On Mon, Jun 5, 2023 at 10:19 PM Etsuro Fujita <[email protected]> wrote:\n> To avoid this issue, I am wondering if we should modify\n> add_paths_to_joinrel() in back branches so that it just disallows the\n> FDW to consider pushing down joins when the restrictlist has\n> pseudoconstant clauses. Attached is a patch for that.\n\nI think that custom scans have the same issue, so I modified the patch\nfurther so that it also disallows custom-scan providers to consider\njoin pushdown in add_paths_to_joinrel() if necessary. Attached is a\nnew version of the patch.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 21 Jun 2023 15:16:42 +0530",
"msg_from": "Nishant Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
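To make the "bool flag" mentioned in the review above concrete, here is a sketch of how it might gate both call sites in add_paths_to_joinrel(); the variable names and exact placement are assumptions based on the descriptions in this thread (the hook call itself matches the form quoted later in the thread), not the committed hunk:

    bool        consider_join_pushdown;

    consider_join_pushdown = !has_pseudoconstant_clauses(root, restrictlist);

    /* 5. Give the FDW a chance to push the join down. */
    if (joinrel->fdwroutine &&
        joinrel->fdwroutine->GetForeignJoinPaths &&
        consider_join_pushdown)
        joinrel->fdwroutine->GetForeignJoinPaths(root, joinrel,
                                                 outerrel, innerrel,
                                                 jointype, &extra);

    /* 6. Finally, give extensions a chance to manipulate the path list. */
    if (set_join_pathlist_hook &&
        consider_join_pushdown)
        set_join_pathlist_hook(root, joinrel, outerrel, innerrel,
                               jointype, &extra);

The second of these two gates is the restriction that the later part of the thread reports as breaking the Citus extension's use of the hook.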
{
"msg_contents": "On Wed, Jun 14, 2023 at 2:49 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> On Mon, Jun 5, 2023 at 10:19 PM Etsuro Fujita <[email protected]>\n> wrote:\n> > To avoid this issue, I am wondering if we should modify\n> > add_paths_to_joinrel() in back branches so that it just disallows the\n> > FDW to consider pushing down joins when the restrictlist has\n> > pseudoconstant clauses. Attached is a patch for that.\n>\n> I think that custom scans have the same issue, so I modified the patch\n> further so that it also disallows custom-scan providers to consider\n> join pushdown in add_paths_to_joinrel() if necessary. Attached is a\n> new version of the patch.\n\n\nGood point. The v2 patch looks good to me for back branches.\n\nI'm wondering what the plan is for HEAD. Should we also disallow\nforeign/custom join pushdown in the case that there is any\npseudoconstant restriction clause, or instead still allow join pushdown\nin that case? If it is the latter, I think we can do something like my\npatch upthread does. But that patch needs to be revised to consider\ncustom scans, maybe by storing the restriction clauses also in\nCustomPath?\n\nThanks\nRichard\n\nOn Wed, Jun 14, 2023 at 2:49 PM Etsuro Fujita <[email protected]> wrote:On Mon, Jun 5, 2023 at 10:19 PM Etsuro Fujita <[email protected]> wrote:\n> To avoid this issue, I am wondering if we should modify\n> add_paths_to_joinrel() in back branches so that it just disallows the\n> FDW to consider pushing down joins when the restrictlist has\n> pseudoconstant clauses. Attached is a patch for that.\n\nI think that custom scans have the same issue, so I modified the patch\nfurther so that it also disallows custom-scan providers to consider\njoin pushdown in add_paths_to_joinrel() if necessary. Attached is a\nnew version of the patch.Good point. The v2 patch looks good to me for back branches.I'm wondering what the plan is for HEAD. Should we also disallowforeign/custom join pushdown in the case that there is anypseudoconstant restriction clause, or instead still allow join pushdownin that case? If it is the latter, I think we can do something like mypatch upthread does. But that patch needs to be revised to considercustom scans, maybe by storing the restriction clauses also inCustomPath?ThanksRichard",
"msg_date": "Sun, 25 Jun 2023 14:05:05 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Richard,\n\nOn Sun, Jun 25, 2023 at 3:05 PM Richard Guo <[email protected]> wrote:\n> On Wed, Jun 14, 2023 at 2:49 PM Etsuro Fujita <[email protected]> wrote:\n>> On Mon, Jun 5, 2023 at 10:19 PM Etsuro Fujita <[email protected]> wrote:\n>> > To avoid this issue, I am wondering if we should modify\n>> > add_paths_to_joinrel() in back branches so that it just disallows the\n>> > FDW to consider pushing down joins when the restrictlist has\n>> > pseudoconstant clauses. Attached is a patch for that.\n>>\n>> I think that custom scans have the same issue, so I modified the patch\n>> further so that it also disallows custom-scan providers to consider\n>> join pushdown in add_paths_to_joinrel() if necessary. Attached is a\n\n> Good point. The v2 patch looks good to me for back branches.\n\nCool! Thanks for looking!\n\n> I'm wondering what the plan is for HEAD. Should we also disallow\n> foreign/custom join pushdown in the case that there is any\n> pseudoconstant restriction clause, or instead still allow join pushdown\n> in that case? If it is the latter, I think we can do something like my\n> patch upthread does. But that patch needs to be revised to consider\n> custom scans, maybe by storing the restriction clauses also in\n> CustomPath?\n\nI think we should choose the latter, so I modified your patch as\nmentioned, after re-creating it on top of my patch. Attached is a new\nversion (0002-Allow-join-pushdown-even-if-pseudoconstant-quals-v2.patch).\nI am attaching my patch as well\n(0001-Disable-join-pushdown-if-pseudoconstant-quals-v2.patch).\n\nOther changes made to your patch:\n\n* I renamed the new member of the ForeignPath struct to\nfdw_restrictinfo. (And I named that of the CustomPath struct\ncustom_restrictinfo.)\n\n* In your patch, only for create_foreign_join_path(), the API was\nmodified so that the caller provides the new member of ForeignPath,\nbut I modified that for\ncreate_foreignscan_path()/create_foreign_upper_path() as well, for\nconsistency.\n\n* In this bit I changed the last argument to NIL, which would be\nnitpicking, though.\n\n@@ -1038,7 +1038,7 @@ postgresGetForeignPaths(PlannerInfo *root,\n add_path(baserel, (Path *) path);\n\n /* Add paths with pathkeys */\n- add_paths_with_pathkeys_for_rel(root, baserel, NULL);\n+ add_paths_with_pathkeys_for_rel(root, baserel, NULL, NULL);\n\n* I dropped this test case, because it would not be stable if the\nsystem clock was too slow.\n\n+-- bug due to sloppy handling of pseudoconstant clauses for foreign joins\n+EXPLAIN (VERBOSE, COSTS OFF)\n+ SELECT * FROM ft2 a, ft2 b\n+ WHERE b.c1 = a.c1 AND now() < '25-April-2023'::timestamp;\n+SELECT * FROM ft2 a, ft2 b\n+WHERE b.c1 = a.c1 AND now() < '25-April-2023'::timestamp;\n\nThat is it.\n\nSorry for the long long delay.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 21 Jul 2023 21:51:31 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
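To make the struct changes being described easier to follow: the new members hang the pushed-down join's RestrictInfo list directly off the path, so that create_scan_plan() can find the pseudoconstant quals there. The sketch below is illustrative only; the surrounding members and field placement are recalled from the development tree, not quoted from the patch:

typedef struct ForeignPath
{
    Path        path;
    Path       *fdw_outerpath;
    List       *fdw_restrictinfo;   /* RestrictInfos for a pushed-down join */
    List       *fdw_private;
} ForeignPath;

typedef struct CustomPath
{
    Path        path;
    uint32      flags;
    List       *custom_paths;
    List       *custom_restrictinfo;    /* same idea for custom joins */
    List       *custom_private;
    const struct CustomPathMethods *methods;
} CustomPath;

create_foreignscan_path(), create_foreign_join_path() and create_foreign_upper_path() then take the list as an additional argument, which is what makes it available when the plan is built.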
{
"msg_contents": "On Fri, Jul 21, 2023 at 8:51 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> I think we should choose the latter, so I modified your patch as\n> mentioned, after re-creating it on top of my patch. Attached is a new\n> version (0002-Allow-join-pushdown-even-if-pseudoconstant-quals-v2.patch).\n> I am attaching my patch as well\n> (0001-Disable-join-pushdown-if-pseudoconstant-quals-v2.patch).\n>\n> Other changes made to your patch:\n>\n> * I renamed the new member of the ForeignPath struct to\n> fdw_restrictinfo. (And I named that of the CustomPath struct\n> custom_restrictinfo.)\n\n\nThat's much better, and more consistent with other members in\nForeignPath/CustomPath. Thanks!\n\n\n> * In your patch, only for create_foreign_join_path(), the API was\n> modified so that the caller provides the new member of ForeignPath,\n> but I modified that for\n> create_foreignscan_path()/create_foreign_upper_path() as well, for\n> consistency.\n\n\nLGTM.\n\n\n> * In this bit I changed the last argument to NIL, which would be\n> nitpicking, though.\n>\n> @@ -1038,7 +1038,7 @@ postgresGetForeignPaths(PlannerInfo *root,\n> add_path(baserel, (Path *) path);\n>\n> /* Add paths with pathkeys */\n> - add_paths_with_pathkeys_for_rel(root, baserel, NULL);\n> + add_paths_with_pathkeys_for_rel(root, baserel, NULL, NULL);\n\n\nGood catch! This was my oversight.\n\n\n> * I dropped this test case, because it would not be stable if the\n> system clock was too slow.\n\n\nAgreed. And the test case from 0001 should be sufficient.\n\nSo the two patches both look good to me now.\n\nThanks\nRichard\n\nOn Fri, Jul 21, 2023 at 8:51 PM Etsuro Fujita <[email protected]> wrote:\nI think we should choose the latter, so I modified your patch as\nmentioned, after re-creating it on top of my patch. Attached is a new\nversion (0002-Allow-join-pushdown-even-if-pseudoconstant-quals-v2.patch).\nI am attaching my patch as well\n(0001-Disable-join-pushdown-if-pseudoconstant-quals-v2.patch).\n\nOther changes made to your patch:\n\n* I renamed the new member of the ForeignPath struct to\nfdw_restrictinfo. (And I named that of the CustomPath struct\ncustom_restrictinfo.) That's much better, and more consistent with other members inForeignPath/CustomPath. Thanks! \n* In your patch, only for create_foreign_join_path(), the API was\nmodified so that the caller provides the new member of ForeignPath,\nbut I modified that for\ncreate_foreignscan_path()/create_foreign_upper_path() as well, for\nconsistency.LGTM. \n* In this bit I changed the last argument to NIL, which would be\nnitpicking, though.\n\n@@ -1038,7 +1038,7 @@ postgresGetForeignPaths(PlannerInfo *root,\n add_path(baserel, (Path *) path);\n\n /* Add paths with pathkeys */\n- add_paths_with_pathkeys_for_rel(root, baserel, NULL);\n+ add_paths_with_pathkeys_for_rel(root, baserel, NULL, NULL);Good catch! This was my oversight. \n* I dropped this test case, because it would not be stable if the\nsystem clock was too slow.Agreed. And the test case from 0001 should be sufficient.So the two patches both look good to me now.ThanksRichard",
"msg_date": "Mon, 24 Jul 2023 10:45:44 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Richard,\n\nOn Mon, Jul 24, 2023 at 11:45 AM Richard Guo <[email protected]> wrote:\n> On Fri, Jul 21, 2023 at 8:51 PM Etsuro Fujita <[email protected]> wrote:\n>> * In this bit I changed the last argument to NIL, which would be\n>> nitpicking, though.\n>>\n>> @@ -1038,7 +1038,7 @@ postgresGetForeignPaths(PlannerInfo *root,\n>> add_path(baserel, (Path *) path);\n>>\n>> /* Add paths with pathkeys */\n>> - add_paths_with_pathkeys_for_rel(root, baserel, NULL);\n>> + add_paths_with_pathkeys_for_rel(root, baserel, NULL, NULL);\n\n> This was my oversight.\n\nNo. IIUC, I think that that would work well as-proposed, but I\nchanged it as such, for readability.\n\n> So the two patches both look good to me now.\n\nCool! I pushed the first patch after polishing it a little bit, so\nhere is a rebased version of the second patch, in which I modified the\nForeignPath and CustomPath cases in reparameterize_path_by_child() to\nreflect the new members fdw_restrictinfo and custom_restrictinfo, for\nsafety, and tweaked a comment a bit.\n\nThanks for looking!\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 28 Jul 2023 17:55:52 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 4:56 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> Cool! I pushed the first patch after polishing it a little bit, so\n> here is a rebased version of the second patch, in which I modified the\n> ForeignPath and CustomPath cases in reparameterize_path_by_child() to\n> reflect the new members fdw_restrictinfo and custom_restrictinfo, for\n> safety, and tweaked a comment a bit.\n\n\nHmm, it seems that ForeignPath for a foreign join does not support\nparameterized paths for now, as in postgresGetForeignJoinPaths() we have\nthis check:\n\n /*\n * This code does not work for joins with lateral references, since those\n * must have parameterized paths, which we don't generate yet.\n */\n if (!bms_is_empty(joinrel->lateral_relids))\n return;\n\nAnd in create_foreign_join_path() we just set the path.param_info to\nNULL.\n\n pathnode->path.param_info = NULL; /* XXX see above */\n\nSo I doubt that it's necessary to adjust fdw_restrictinfo in\nreparameterize_path_by_child, because it seems to me that\nfdw_restrictinfo must be empty there. Maybe we can add an Assert there\nas below:\n\n- ADJUST_CHILD_ATTRS(fpath->fdw_restrictinfo);\n+\n+ /*\n+ * Parameterized foreign joins are not supported. So this\n+ * ForeignPath cannot be a foreign join and fdw_restrictinfo\n+ * must be empty.\n+ */\n+ Assert(fpath->fdw_restrictinfo == NIL);\n\nThat being said, it's also no harm to handle fdw_restrictinfo in\nreparameterize_path_by_child as the patch does. So I'm OK we do that\nfor safety.\n\nThanks\nRichard\n\nOn Fri, Jul 28, 2023 at 4:56 PM Etsuro Fujita <[email protected]> wrote:\nCool! I pushed the first patch after polishing it a little bit, so\nhere is a rebased version of the second patch, in which I modified the\nForeignPath and CustomPath cases in reparameterize_path_by_child() to\nreflect the new members fdw_restrictinfo and custom_restrictinfo, for\nsafety, and tweaked a comment a bit.Hmm, it seems that ForeignPath for a foreign join does not supportparameterized paths for now, as in postgresGetForeignJoinPaths() we havethis check: /* * This code does not work for joins with lateral references, since those * must have parameterized paths, which we don't generate yet. */ if (!bms_is_empty(joinrel->lateral_relids)) return;And in create_foreign_join_path() we just set the path.param_info toNULL. pathnode->path.param_info = NULL; /* XXX see above */So I doubt that it's necessary to adjust fdw_restrictinfo inreparameterize_path_by_child, because it seems to me thatfdw_restrictinfo must be empty there. Maybe we can add an Assert thereas below:- ADJUST_CHILD_ATTRS(fpath->fdw_restrictinfo);++ /*+ * Parameterized foreign joins are not supported. So this+ * ForeignPath cannot be a foreign join and fdw_restrictinfo+ * must be empty.+ */+ Assert(fpath->fdw_restrictinfo == NIL);That being said, it's also no harm to handle fdw_restrictinfo inreparameterize_path_by_child as the patch does. So I'm OK we do thatfor safety.ThanksRichard",
"msg_date": "Mon, 31 Jul 2023 16:52:16 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Richard,\n\nOn Mon, Jul 31, 2023 at 5:52 PM Richard Guo <[email protected]> wrote:\n> On Fri, Jul 28, 2023 at 4:56 PM Etsuro Fujita <[email protected]> wrote:\n>> here is a rebased version of the second patch, in which I modified the\n>> ForeignPath and CustomPath cases in reparameterize_path_by_child() to\n>> reflect the new members fdw_restrictinfo and custom_restrictinfo, for\n>> safety, and tweaked a comment a bit.\n\n> Hmm, it seems that ForeignPath for a foreign join does not support\n> parameterized paths for now, as in postgresGetForeignJoinPaths() we have\n> this check:\n>\n> /*\n> * This code does not work for joins with lateral references, since those\n> * must have parameterized paths, which we don't generate yet.\n> */\n> if (!bms_is_empty(joinrel->lateral_relids))\n> return;\n>\n> And in create_foreign_join_path() we just set the path.param_info to\n> NULL.\n>\n> pathnode->path.param_info = NULL; /* XXX see above */\n>\n> So I doubt that it's necessary to adjust fdw_restrictinfo in\n> reparameterize_path_by_child, because it seems to me that\n> fdw_restrictinfo must be empty there. Maybe we can add an Assert there\n> as below:\n>\n> - ADJUST_CHILD_ATTRS(fpath->fdw_restrictinfo);\n> +\n> + /*\n> + * Parameterized foreign joins are not supported. So this\n> + * ForeignPath cannot be a foreign join and fdw_restrictinfo\n> + * must be empty.\n> + */\n> + Assert(fpath->fdw_restrictinfo == NIL);\n>\n> That being said, it's also no harm to handle fdw_restrictinfo in\n> reparameterize_path_by_child as the patch does. So I'm OK we do that\n> for safety.\n\nOk, but maybe my explanation was not good, so let me explain. The\nreason why I modified the code as such is to make the handling of\nfdw_restrictinfo consistent with that of fdw_outerpath: we have the\ncode to reparameterize fdw_outerpath, which should be NULL though, as\nwe do not currently support parameterized foreign joins.\n\nI modified the code a bit further to use an if-test to avoid a useless\nfunction call, and added/tweaked comments and docs further. Attached\nis a new version of the patch. I am planning to commit this, if there\nare no objections.\n\nThanks!\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Tue, 8 Aug 2023 17:40:20 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 4:40 PM Etsuro Fujita <[email protected]>\nwrote:\n\n> I modified the code a bit further to use an if-test to avoid a useless\n> function call, and added/tweaked comments and docs further. Attached\n> is a new version of the patch. I am planning to commit this, if there\n> are no objections.\n\n\n+1 to the v4 patch. It looks good to me.\n\nThanks\nRichard\n\nOn Tue, Aug 8, 2023 at 4:40 PM Etsuro Fujita <[email protected]> wrote:\nI modified the code a bit further to use an if-test to avoid a useless\nfunction call, and added/tweaked comments and docs further. Attached\nis a new version of the patch. I am planning to commit this, if there\nare no objections.+1 to the v4 patch. It looks good to me.ThanksRichard",
"msg_date": "Tue, 8 Aug 2023 17:30:39 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "On Tue, Aug 8, 2023 at 6:30 PM Richard Guo <[email protected]> wrote:\n> On Tue, Aug 8, 2023 at 4:40 PM Etsuro Fujita <[email protected]> wrote:\n>> I modified the code a bit further to use an if-test to avoid a useless\n>> function call, and added/tweaked comments and docs further. Attached\n>> is a new version of the patch. I am planning to commit this, if there\n>> are no objections.\n\n> +1 to the v4 patch. It looks good to me.\n\nPushed after some copy-and-paste editing of comments/documents.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 15 Aug 2023 17:04:37 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Etsuro, all\n\nThe commit[1] seems to break some queries in Citus[2], which is an\nextension which relies on set_join_pathlist_hook.\n\nAlthough the comment says */*Finally, give extensions a chance to\nmanipulate the path list.*/ *we use it to extract lots of information\nabout the joins and do the planning based on the information.\n\nNow, for some joins where consider_join_pushdown=false, we cannot get the\ninformation that we used to get, which prevents doing distributed planning\nfor certain queries.\n\nWe wonder if it is possible to allow extensions to access the join info\nunder all circumstances, as it used to be? Basically, removing the\nadditional check:\n\ndiff --git a/src/backend/optimizer/path/joinpath.c\nb/src/backend/optimizer/path/joinpath.c\nindex 03b3185984..080e76cbe9 100644\n--- a/src/backend/optimizer/path/joinpath.c\n+++ b/src/backend/optimizer/path/joinpath.c\n@@ -349,8 +349,7 @@ add_paths_to_joinrel(PlannerInfo *root,\n /*\n * 6. Finally, give extensions a chance to manipulate the path list.\n */\n- if (set_join_pathlist_hook &&\n- consider_join_pushdown)\n+ if (set_join_pathlist_hook)\n set_join_pathlist_hook(root, joinrel, outerrel, innerrel,\n jointype,\n&extra);\n }\n\n\n\nThanks,\nOnder\n\n[1]:\nhttps://github.com/postgres/postgres/commit/b0e390e6d1d68b92e9983840941f8f6d9e083fe0\n[2]: https://github.com/citusdata/citus/issues/7119\n\n\nEtsuro Fujita <[email protected]>, 15 Ağu 2023 Sal, 11:05 tarihinde\nşunu yazdı:\n\n> On Tue, Aug 8, 2023 at 6:30 PM Richard Guo <[email protected]> wrote:\n> > On Tue, Aug 8, 2023 at 4:40 PM Etsuro Fujita <[email protected]>\n> wrote:\n> >> I modified the code a bit further to use an if-test to avoid a useless\n> >> function call, and added/tweaked comments and docs further. Attached\n> >> is a new version of the patch. I am planning to commit this, if there\n> >> are no objections.\n>\n> > +1 to the v4 patch. It looks good to me.\n>\n> Pushed after some copy-and-paste editing of comments/documents.\n>\n> Thanks!\n>\n> Best regards,\n> Etsuro Fujita\n>\n>\n>\n\nHi Etsuro, allThe commit[1] seems to break some queries in Citus[2], which is an extension which relies on set_join_pathlist_hook.Although the comment says /*Finally, give extensions a chance to manipulate the path list.*/ we use it to extract lots of information about the joins and do the planning based on the information.Now, for some joins where consider_join_pushdown=false, we cannot get the information that we used to get, which prevents doing distributed planning for certain queries.We wonder if it is possible to allow extensions to access the join info under all circumstances, as it used to be? Basically, removing the additional check:diff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.cindex 03b3185984..080e76cbe9 100644--- a/src/backend/optimizer/path/joinpath.c+++ b/src/backend/optimizer/path/joinpath.c@@ -349,8 +349,7 @@ add_paths_to_joinrel(PlannerInfo *root, /* * 6. Finally, give extensions a chance to manipulate the path list. 
*/- if (set_join_pathlist_hook &&- consider_join_pushdown)+ if (set_join_pathlist_hook) set_join_pathlist_hook(root, joinrel, outerrel, innerrel, jointype, &extra); }Thanks,Onder[1]: https://github.com/postgres/postgres/commit/b0e390e6d1d68b92e9983840941f8f6d9e083fe0[2]: https://github.com/citusdata/citus/issues/7119Etsuro Fujita <[email protected]>, 15 Ağu 2023 Sal, 11:05 tarihinde şunu yazdı:On Tue, Aug 8, 2023 at 6:30 PM Richard Guo <[email protected]> wrote:\n> On Tue, Aug 8, 2023 at 4:40 PM Etsuro Fujita <[email protected]> wrote:\n>> I modified the code a bit further to use an if-test to avoid a useless\n>> function call, and added/tweaked comments and docs further. Attached\n>> is a new version of the patch. I am planning to commit this, if there\n>> are no objections.\n\n> +1 to the v4 patch. It looks good to me.\n\nPushed after some copy-and-paste editing of comments/documents.\n\nThanks!\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Tue, 15 Aug 2023 17:02:41 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
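For context on the kind of extension usage being described above: an extension installs set_join_pathlist_hook from _PG_init() and is then called once per join relation as paths are built for it. A minimal, hypothetical skeleton (not Citus code; the hook type and call signature are as declared in optimizer/paths.h):

#include "postgres.h"
#include "fmgr.h"
#include "optimizer/paths.h"

PG_MODULE_MAGIC;

static set_join_pathlist_hook_type prev_set_join_pathlist_hook = NULL;

static void
observe_join_pathlist(PlannerInfo *root, RelOptInfo *joinrel,
                      RelOptInfo *outerrel, RelOptInfo *innerrel,
                      JoinType jointype, JoinPathExtraData *extra)
{
    /* Chain to any previously installed hook first */
    if (prev_set_join_pathlist_hook)
        prev_set_join_pathlist_hook(root, joinrel, outerrel, innerrel,
                                    jointype, extra);

    /*
     * An extension such as Citus would record information about the join
     * here (and/or add paths to joinrel with add_path()); this stub only
     * logs that the join was seen.
     */
    elog(DEBUG1, "set_join_pathlist_hook called for join type %d",
         (int) jointype);
}

void
_PG_init(void)
{
    prev_set_join_pathlist_hook = set_join_pathlist_hook;
    set_join_pathlist_hook = observe_join_pathlist;
}

With the restriction added by 6f80a8d9c, such a hook is simply not called for joins whose restrictlist contains pseudoconstant clauses, which is the behavioural change being reported here; commit 9e9931d2b, mentioned later in the thread, removes that restriction on the hook call again.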
{
"msg_contents": "Hi,\n\nOn Tue, Aug 15, 2023 at 11:02 PM Önder Kalacı <[email protected]> wrote:\n> The commit[1] seems to break some queries in Citus[2], which is an extension which relies on set_join_pathlist_hook.\n>\n> Although the comment says /*Finally, give extensions a chance to manipulate the path list.*/ we use it to extract lots of information about the joins and do the planning based on the information.\n>\n> Now, for some joins where consider_join_pushdown=false, we cannot get the information that we used to get, which prevents doing distributed planning for certain queries.\n>\n> We wonder if it is possible to allow extensions to access the join info under all circumstances, as it used to be? Basically, removing the additional check:\n>\n> diff --git a/src/backend/optimizer/path/joinpath.c b/src/backend/optimizer/path/joinpath.c\n> index 03b3185984..080e76cbe9 100644\n> --- a/src/backend/optimizer/path/joinpath.c\n> +++ b/src/backend/optimizer/path/joinpath.c\n> @@ -349,8 +349,7 @@ add_paths_to_joinrel(PlannerInfo *root,\n> /*\n> * 6. Finally, give extensions a chance to manipulate the path list.\n> */\n> - if (set_join_pathlist_hook &&\n> - consider_join_pushdown)\n> + if (set_join_pathlist_hook)\n> set_join_pathlist_hook(root, joinrel, outerrel, innerrel,\n> jointype, &extra);\n\nMaybe we could do so by leaving to extensions the decision whether\nthey replace joins with pseudoconstant clauses, but I am not sure that\nthat is a good idea, because that would require the authors to modify\nand recompile their extensions to fix the issue... So I fixed the\ncore side.\n\nI am not familiar with the Citus extension, but such pseudoconstant\nclauses are handled within the Citus extension?\n\nThanks for the report!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 16 Aug 2023 18:22:31 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Etsuro,\n\nThanks for the response!\n\n\n> Maybe we could do so by leaving to extensions the decision whether\n> they replace joins with pseudoconstant clauses, but I am not sure that\n> that is a good idea, because that would require the authors to modify\n> and recompile their extensions to fix the issue...\n\n\nI think I cannot easily follow this argument. The decision to push down the\njoin\n(or not) doesn't seem to be related to calling set_join_pathlist_hook. It\nseems like the\nextension should decide what to do with the hook.\n\nThat seems the generic theme of the hooks that Postgres provides. For\nexample, the extension\nis allowed to even override the whole planner/executor, and there is no\ncondition that would\nprevent it from happening. In other words, an extension can easily return\nwrong results with the\nwrong actions taken with the hooks, and that should be responsibility of\nthe extension, not Postgres\n\n\n> I am not familiar with the Citus extension, but such pseudoconstant\n> clauses are handled within the Citus extension?\n>\n>\nAs I noted earlier, Citus relies on this hook for collecting information\nabout all the joins that Postgres\nknows about, there is nothing specific to pseudoconstants. Some parts of\ncreating the (distributed)\nplan relies on the information gathered from this hook. So, if information\nabout some of the joins\nare not passed to the extension, then the decisions that the extension\ngives are broken (and as a result\nthe queries are broken).\n\nThanks,\nOnder\n\nHi Etsuro,Thanks for the response! \nMaybe we could do so by leaving to extensions the decision whether\nthey replace joins with pseudoconstant clauses, but I am not sure that\nthat is a good idea, because that would require the authors to modify\nand recompile their extensions to fix the issue... I think I cannot easily follow this argument. The decision to push down the join(or not) doesn't seem to be related to calling set_join_pathlist_hook. It seems like theextension should decide what to do with the hook. That seems the generic theme of the hooks that Postgres provides. For example, the extensionis allowed to even override the whole planner/executor, and there is no condition that wouldprevent it from happening. In other words, an extension can easily return wrong results with thewrong actions taken with the hooks, and that should be responsibility of the extension, not Postgres\n\nI am not familiar with the Citus extension, but such pseudoconstant\nclauses are handled within the Citus extension?As I noted earlier, Citus relies on this hook for collecting information about all the joins that Postgresknows about, there is nothing specific to pseudoconstants. Some parts of creating the (distributed) plan relies on the information gathered from this hook. So, if information about some of the joins are not passed to the extension, then the decisions that the extension gives are broken (and as a resultthe queries are broken).Thanks,Onder",
"msg_date": "Wed, 16 Aug 2023 16:58:29 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi Onder,\n\nOn Wed, Aug 16, 2023 at 10:58 PM Önder Kalacı <[email protected]> wrote:\n\n>> Maybe we could do so by leaving to extensions the decision whether\n>> they replace joins with pseudoconstant clauses, but I am not sure that\n>> that is a good idea, because that would require the authors to modify\n>> and recompile their extensions to fix the issue...\n\n> I think I cannot easily follow this argument. The decision to push down the join\n> (or not) doesn't seem to be related to calling set_join_pathlist_hook. It seems like the\n> extension should decide what to do with the hook.\n>\n> That seems the generic theme of the hooks that Postgres provides. For example, the extension\n> is allowed to even override the whole planner/executor, and there is no condition that would\n> prevent it from happening. In other words, an extension can easily return wrong results with the\n> wrong actions taken with the hooks, and that should be responsibility of the extension, not Postgres\n\n>> I am not familiar with the Citus extension, but such pseudoconstant\n>> clauses are handled within the Citus extension?\n\n> As I noted earlier, Citus relies on this hook for collecting information about all the joins that Postgres\n> knows about, there is nothing specific to pseudoconstants. Some parts of creating the (distributed)\n> plan relies on the information gathered from this hook. So, if information about some of the joins\n> are not passed to the extension, then the decisions that the extension gives are broken (and as a result\n> the queries are broken).\n\nThanks for the explanation!\n\nMaybe my explanation was not enough, so let me explain:\n\n* I think you could use the set_join_pathlist_hook hook as you like at\nyour own responsibility, but typical use cases of the hook that are\ndesigned to support in the core system would be just add custom paths\nfor replacing joins with scans, as described in custom-scan.sgml (this\nnote is about set_rel_pathlist_hook, but it should also apply to\nset_join_pathlist_hook):\n\n Although this hook function can be used to examine, modify, or remove\n paths generated by the core system, a custom scan provider will typically\n confine itself to generating <structname>CustomPath</structname>\nobjects and adding\n them to <literal>rel</literal> using <function>add_path</function>.\n\n* The problem we had with the set_join_pathlist_hook hook is that in\nsuch a typical use case, previously, if the replaced joins had any\npseudoconstant clauses, the planner would produce incorrect query\nplans, due to the lack of support for handling such quals in\ncreateplan.c. We could fix the extensions side, as you proposed, but\nthe cause of the issue is 100% the planner's deficiency, so it would\nbe unreasonable to force the authors to do so, which would also go\nagainst our policy of ABI compatibility. So I fixed the core side, as\nin the FDW case, so that extensions created for such a typical use\ncase, which I guess are the majority of the hook extensions, need not\nbe modified/recompiled. I think it is unfortunate that that breaks\nthe use case of the Citus extension, though.\n\nBTW: commit 9e9931d2b removed the restriction on the call to the hook\nextensions, so you might want to back-patch it. Though, I think it\nwould be better if the hook was well implemented from the beginning.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sat, 19 Aug 2023 20:09:25 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi,\n\nOn 2023-08-19 20:09:25 +0900, Etsuro Fujita wrote:\n> Maybe my explanation was not enough, so let me explain:\n> \n> * I think you could use the set_join_pathlist_hook hook as you like at\n> your own responsibility, but typical use cases of the hook that are\n> designed to support in the core system would be just add custom paths\n> for replacing joins with scans, as described in custom-scan.sgml (this\n> note is about set_rel_pathlist_hook, but it should also apply to\n> set_join_pathlist_hook):\n> \n> Although this hook function can be used to examine, modify, or remove\n> paths generated by the core system, a custom scan provider will typically\n> confine itself to generating <structname>CustomPath</structname>\n> objects and adding\n> them to <literal>rel</literal> using <function>add_path</function>.\n\nThat supports citus' use more than not: \"this hook function can be used to\nexamine ... paths generated by the core system\".\n\n\n> * The problem we had with the set_join_pathlist_hook hook is that in\n> such a typical use case, previously, if the replaced joins had any\n> pseudoconstant clauses, the planner would produce incorrect query\n> plans, due to the lack of support for handling such quals in\n> createplan.c. We could fix the extensions side, as you proposed, but\n> the cause of the issue is 100% the planner's deficiency, so it would\n> be unreasonable to force the authors to do so, which would also go\n> against our policy of ABI compatibility. So I fixed the core side, as\n> in the FDW case, so that extensions created for such a typical use\n> case, which I guess are the majority of the hook extensions, need not\n> be modified/recompiled. I think it is unfortunate that that breaks\n> the use case of the Citus extension, though.\n\nI'm not neutral - I don't work on citus, but work in the same Unit as\nOnder. With that said: I don't think that's really a justification for\nbreaking a pre-existing, not absurd, use case in a minor release.\n\nExcept that this was only noticed after it was released in a set of minor\nversions, I would say that 6f80a8d9c should just straight up be reverted.\nSkimming the thread there wasn't really any analysis done about breaking\nextensions etc - and that ought to be done before a substantial semantics\nchange in a somewhat commonly used hook. I'm inclined to think that that\nmight still be the right path.\n\n\n> BTW: commit 9e9931d2b removed the restriction on the call to the hook\n> extensions, so you might want to back-patch it.\n\nCitus is an extension, not a fork, there's not really a way to just backpatch\na random commit.\n\n\n> Though, I think it would be better if the hook was well implemented from the\n> beginning.\n\nSure, but that's neither here nor there.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 19 Aug 2023 12:34:35 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi,\n\nOn Sun, Aug 20, 2023 at 4:34 AM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-08-19 20:09:25 +0900, Etsuro Fujita wrote:\n> > Maybe my explanation was not enough, so let me explain:\n> >\n> > * I think you could use the set_join_pathlist_hook hook as you like at\n> > your own responsibility, but typical use cases of the hook that are\n> > designed to support in the core system would be just add custom paths\n> > for replacing joins with scans, as described in custom-scan.sgml (this\n> > note is about set_rel_pathlist_hook, but it should also apply to\n> > set_join_pathlist_hook):\n> >\n> > Although this hook function can be used to examine, modify, or remove\n> > paths generated by the core system, a custom scan provider will typically\n> > confine itself to generating <structname>CustomPath</structname>\n> > objects and adding\n> > them to <literal>rel</literal> using <function>add_path</function>.\n>\n> That supports citus' use more than not: \"this hook function can be used to\n> examine ... paths generated by the core system\".\n>\n>\n> > * The problem we had with the set_join_pathlist_hook hook is that in\n> > such a typical use case, previously, if the replaced joins had any\n> > pseudoconstant clauses, the planner would produce incorrect query\n> > plans, due to the lack of support for handling such quals in\n> > createplan.c. We could fix the extensions side, as you proposed, but\n> > the cause of the issue is 100% the planner's deficiency, so it would\n> > be unreasonable to force the authors to do so, which would also go\n> > against our policy of ABI compatibility. So I fixed the core side, as\n> > in the FDW case, so that extensions created for such a typical use\n> > case, which I guess are the majority of the hook extensions, need not\n> > be modified/recompiled. I think it is unfortunate that that breaks\n> > the use case of the Citus extension, though.\n>\n> I'm not neutral - I don't work on citus, but work in the same Unit as\n> Onder. With that said: I don't think that's really a justification for\n> breaking a pre-existing, not absurd, use case in a minor release.\n>\n> Except that this was only noticed after it was released in a set of minor\n> versions, I would say that 6f80a8d9c should just straight up be reverted.\n> Skimming the thread there wasn't really any analysis done about breaking\n> extensions etc - and that ought to be done before a substantial semantics\n> change in a somewhat commonly used hook. I'm inclined to think that that\n> might still be the right path.\n>\n>\n> > BTW: commit 9e9931d2b removed the restriction on the call to the hook\n> > extensions, so you might want to back-patch it.\n>\n> Citus is an extension, not a fork, there's not really a way to just backpatch\n> a random commit.\n>\n>\n> > Though, I think it would be better if the hook was well implemented from the\n> > beginning.\n>\n> Sure, but that's neither here nor there.\n>\n> Greetings,\n>\n> Andres Freund\n\n\n",
"msg_date": "Mon, 21 Aug 2023 20:16:33 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Sorry, I hit the send button by mistake.\n\nOn Sun, Aug 20, 2023 at 4:34 AM Andres Freund <[email protected]> wrote:\n> On 2023-08-19 20:09:25 +0900, Etsuro Fujita wrote:\n> > * The problem we had with the set_join_pathlist_hook hook is that in\n> > such a typical use case, previously, if the replaced joins had any\n> > pseudoconstant clauses, the planner would produce incorrect query\n> > plans, due to the lack of support for handling such quals in\n> > createplan.c. We could fix the extensions side, as you proposed, but\n> > the cause of the issue is 100% the planner's deficiency, so it would\n> > be unreasonable to force the authors to do so, which would also go\n> > against our policy of ABI compatibility. So I fixed the core side, as\n> > in the FDW case, so that extensions created for such a typical use\n> > case, which I guess are the majority of the hook extensions, need not\n> > be modified/recompiled. I think it is unfortunate that that breaks\n> > the use case of the Citus extension, though.\n>\n> I'm not neutral - I don't work on citus, but work in the same Unit as\n> Onder. With that said: I don't think that's really a justification for\n> breaking a pre-existing, not absurd, use case in a minor release.\n>\n> Except that this was only noticed after it was released in a set of minor\n> versions, I would say that 6f80a8d9c should just straight up be reverted.\n> Skimming the thread there wasn't really any analysis done about breaking\n> extensions etc - and that ought to be done before a substantial semantics\n> change in a somewhat commonly used hook. I'm inclined to think that that\n> might still be the right path.\n\nI think you misread the thread; actually, we did an analysis and\napplied a fix that would avoid ABI breakage (see the commit message\nfor 6f80a8d9c). It turned out that that breaks the Citus extension,\nthough.\n\nAlso, this is not such a change; it is just an optimization\ndisablement. Let me explain. This is the commit message for\ne7cb7ee14, which added the hook we are discussing:\n\n Allow FDWs and custom scan providers to replace joins with scans.\n\n Foreign data wrappers can use this capability for so-called \"join\n pushdown\"; that is, instead of executing two separate foreign scans\n and then joining the results locally, they can generate a path which\n performs the join on the remote server and then is scanned locally.\n This commit does not extend postgres_fdw to take advantage of this\n capability; it just provides the infrastructure.\n\n Custom scan providers can use this in a similar way. Previously,\n it was only possible for a custom scan provider to scan a single\n relation. 
Now, it can scan an entire join tree, provided of course\n that it knows how to produce the same results that the join would\n have produced if executed normally.\n\nAs described in the commit message, we assume that extensions use the\nhook in a similar way to FDWs; if they do so, the restriction added by\n6f80a8d9c just disables them from adding paths for join pushdown, making the\nplanner use paths involving local joins, so any breakage (other than\nplan changes from custom joins to local joins) would never happen.\n\nSo my question is: does the Citus extension use the hook like this?\n(Sorry, I do not fully understand Onder's explanation.)\n\n> > BTW: commit 9e9931d2b removed the restriction on the call to the hook\n> > extensions, so you might want to back-patch it.\n>\n> Citus is an extension, not a fork, there's not really a way to just backpatch\n> a random commit.\n\nYeah, I was thinking that that would be your concern.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 21 Aug 2023 20:27:45 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi,\n\nThanks for the explanation.\n\nAs described in the commit message, we assume that extensions use the\n> hook in a similar way to FDWs\n\n\nI'm not sure if it is fair to assume that extensions use any hook in any\nway.\n\nSo my question is: does the Citus extension use the hook like this?\n> (Sorry, I do not fully understand Onder's explanation.)\n>\n>\nI haven't gone into detail about how Citus uses this hook, but I don't\nthink we should\nneed to explain it. In general, Citus uses many hooks, and many other\nextensions\nuse this specific hook. With minor version upgrades, we haven't seen this\nkind of\nbehavior change before.\n\nIn general, Citus relies on this hook for collecting information about\njoins across\nrelations/ctes/subqueries. So, its scope is bigger than a single join for\nCitus.\n\nThe extension assigns a special marker(s) for RTE Relations, and then\nchecks whether\nall relations with these special markers joined transitively across\nsubqueries, such that\nit can decide to pushdown the whole or some parts of the (sub)query.\n\nI must admit, I have not yet looked into whether we can fix the problem\nwithin the extension.\nMaybe we can, maybe not.\n\nBut the bigger issue is that there has usually been a clear line between\nthe extensions and\nthe PG itself when it comes to hooks within the minor version upgrades.\nSadly, this change\nbreaks that line. We wanted to share our worries here and find out what\nothers think.\n\n>Except that this was only noticed after it was released in a set of minor\n> > versions, I would say that 6f80a8d9c should just straight up be reverted.\n\n\nI cannot be the one to ask for reverting a commit in PG, but I think doing\nit would be a\nfair action. We kindly ask those who handle this to think about it.\n\nThanks,\nOnder\n\n",
"msg_date": "Mon, 21 Aug 2023 16:34:24 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
},
{
"msg_contents": "Hi,\n\nThanks for the detailed explanation!\n\nOn Mon, Aug 21, 2023 at 10:34 PM Önder Kalacı <[email protected]> wrote:\n\n>> As described in the commit message, we assume that extensions use the\n>> hook in a similar way to FDWs\n\n> I'm not sure if it is fair to assume that extensions use any hook in any way.\n\nI am not sure either, but as for the hook, I think it is an undeniable\nfact that the core system assumes that extensions will use it in that\nway.\n\n>> So my question is: does the Citus extension use the hook like this?\n>> (Sorry, I do not fully understand Onder's explanation.)\n\n> I haven't gone into detail about how Citus uses this hook, but I don't think we should\n> need to explain it. In general, Citus uses many hooks, and many other extensions\n> use this specific hook. With minor version upgrades, we haven't seen this kind of\n> behavior change before.\n>\n> In general, Citus relies on this hook for collecting information about joins across\n> relations/ctes/subqueries. So, its scope is bigger than a single join for Citus.\n>\n> The extension assigns a special marker(s) for RTE Relations, and then checks whether\n> all relations with these special markers joined transitively across subqueries, such that\n> it can decide to pushdown the whole or some parts of the (sub)query.\n\nIIUC, I think that that is going beyond what the hook supports.\n\n> But the bigger issue is that there has usually been a clear line between the extensions and\n> the PG itself when it comes to hooks within the minor version upgrades. Sadly, this change\n> breaks that line. We wanted to share our worries here and find out what others think.\n\nMy understanding is: at least for hooks with intended usages, if an\nextension uses them as intended, it is guaranteed that the extension\nas-is will work correctly with minor version upgrades; otherwise it is\nnot necessarily. I think it is unfortunate that my commit broke the\nCitus extension, though.\n\n>> >Except that this was only noticed after it was released in a set of minor\n>> > versions, I would say that 6f80a8d9c should just straight up be reverted.\n\n> I cannot be the one to ask for reverting a commit in PG, but I think doing it would be a\n> fair action. We kindly ask those who handle this to think about it.\n\nReverting the commit would resolve your issue, but re-introduce the\nissue mentioned upthread to extensions that use the hook properly, so\nI do not think that reverting the commit would be a fair action.\n\nSorry for the delay.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 29 Aug 2023 17:45:42 +0900",
"msg_from": "Etsuro Fujita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: wrong results with self join + enable_nestloop off"
}
] |
[
{
"msg_contents": "LLVM 16 is apparently released, because Fedora has started using\nit in their rawhide (development) branch, which means Postgres\nis failing to build there [1][2]:\n\n../../../../src/include/jit/llvmjit_emit.h: In function 'l_load_gep1':\n../../../../src/include/jit/llvmjit_emit.h:123:30: warning: implicit declaration of function 'LLVMBuildGEP'; did you mean 'LLVMBuildGEP2'? [-Wimplicit-function-declaration]\n 123 | LLVMValueRef v_ptr = LLVMBuildGEP(b, v, &idx, 1, \"\");\n | ^~~~~~~~~~~~\n | LLVMBuildGEP2\n... etc etc etc ...\nleading to lots of\n+ERROR: could not load library \"/builddir/build/BUILD/postgresql-15.1/tmp_install/usr/lib64/pgsql/llvmjit.so\": /builddir/build/BUILD/postgresql-15.1/tmp_install/usr/lib64/pgsql/llvmjit.so: undefined symbol: LLVMBuildGEP\n\nI know we've been letting this topic slide, but we are out of runway.\nI propose adding this as a must-fix open item for PG 16.\n\n\t\t\tregards, tom lane\n\n[1] https://bugzilla.redhat.com/show_bug.cgi?id=2186381\n[2] https://kojipkgs.fedoraproject.org/work/tasks/8408/99938408/build.log\n\n\n",
"msg_date": "Fri, 14 Apr 2023 10:31:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Where are we on supporting LLVM's opaque-pointer changes?"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 2:31 AM Tom Lane <[email protected]> wrote:\n> I know we've been letting this topic slide, but we are out of runway.\n> I propose adding this as a must-fix open item for PG 16.\n\nI had a patch that solved many of the problems[1], but it isn't all\nthe way there and I got stuck. I am going to look at it together with\nAndres in the next couple of days, get unstuck, and aim to get a patch\nout this week. More soon.\n\n[1] https://github.com/macdice/postgres/tree/llvm15\n\n\n",
"msg_date": "Sun, 16 Apr 2023 13:00:36 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where are we on supporting LLVM's opaque-pointer changes?"
}
] |
[
{
"msg_contents": "I hit this elog() while testing reports under v16 and changed to PANIC\nto help diagnose.\n\nDETAILS: PANIC: invalid memory alloc request size 18446744072967930808\nCONTEXT: PL/pgSQL function array_weight(real[],real[]) while storing call arguments into local variables\n\nI can't share the query, data, nor plpgsql functions themselves.\n\nI reproduced the problem at this commit, but not at its parent.\n\ncommit 42b746d4c982257bf3f924176632b04dc288174b (HEAD)\nAuthor: Tom Lane <[email protected]>\nDate: Thu Oct 6 13:27:34 2022 -0400\n\n Remove uses of MemoryContextContains in nodeAgg.c and\n nodeWindowAgg.c.\n\n#2 0x0000000001067af5 in errfinish (filename=filename@entry=0x168f1e0 \"../src/backend/utils/mmgr/mcxt.c\", lineno=lineno@entry=1013,\n funcname=funcname@entry=0x16901a0 <__func__.17850> \"MemoryContextAlloc\") at ../src/backend/utils/error/elog.c:604\n#3 0x00000000010c57c7 in MemoryContextAlloc (context=context@entry=0x604200032600, size=size@entry=8488348128) at ../src/backend/utils/mmgr/mcxt.c:1013\n#4 0x0000000000db49a4 in copy_byval_expanded_array (eah=eah@entry=0x604200032718, oldeah=0x604200032718) at ../src/backend/utils/adt/array_expanded.c:195\n#5 0x0000000000db5f7a in expand_array (arraydatum=105836584314672, parentcontext=<optimized out>, metacache=0x7ffcbd2d29c0, metacache@entry=0x0)\n at ../src/backend/utils/adt/array_expanded.c:104\n#6 0x00007f6c05a6b4d0 in plpgsql_exec_function (func=func@entry=0x6092004a4c58, fcinfo=fcinfo@entry=0x7f6c04f7efc8, simple_eval_estate=simple_eval_estate@entry=0x0,\n simple_eval_resowner=simple_eval_resowner@entry=0x0, procedure_resowner=procedure_resowner@entry=0x0, atomic=atomic@entry=true)\n at ../src/pl/plpgsql/src/pl_exec.c:556\n#7 0x00007f6c05a76af4 in plpgsql_call_handler (fcinfo=<optimized out>) at ../src/pl/plpgsql/src/pl_handler.c:277\n#8 0x00000000008b30cd in ExecInterpExpr (state=0x7f6c04fd6750, econtext=0x6072000712d0, isnull=0x7ffcbd2d2fa0) at ../src/backend/executor/execExprInterp.c:733\n#9 0x00000000008a6c5f in ExecInterpExprStillValid (state=0x7f6c04fd6750, econtext=0x6072000712d0, isNull=0x7ffcbd2d2fa0)\n at ../src/backend/executor/execExprInterp.c:1858\n#10 0x000000000090032b in ExecEvalExprSwitchContext (isNull=0x7ffcbd2d2fa0, econtext=0x6072000712d0, state=0x7f6c04fd6750) at ../src/include/executor/executor.h:354\n#11 ExecProject (projInfo=0x7f6c04fd6748) at ../src/include/executor/executor.h:388\n#12 project_aggregates (aggstate=aggstate@entry=0x607200070d38) at ../src/backend/executor/nodeAgg.c:1377\n#13 0x0000000000903eb6 in agg_retrieve_direct (aggstate=aggstate@entry=0x607200070d38) at ../src/backend/executor/nodeAgg.c:2520\n#14 0x0000000000904074 in ExecAgg (pstate=0x607200070d38) at ../src/backend/executor/nodeAgg.c:2172\n#15 0x00000000008d90e0 in ExecProcNodeFirst (node=0x607200070d38) at ../src/backend/executor/execProcnode.c:464\n#16 0x00000000008c1e5f in ExecProcNode (node=0x607200070d38) at ../src/include/executor/executor.h:272\n#17 ExecutePlan (estate=estate@entry=0x607200070a18, planstate=0x607200070d38, use_parallel_mode=false, operation=operation@entry=CMD_SELECT, sendTuples=true,\n numberTuples=numberTuples@entry=0, direction=direction@entry=ForwardScanDirection, dest=dest@entry=0x7f6c051abd28, execute_once=execute_once@entry=true)\n at ../src/backend/executor/execMain.c:1640\n#18 0x00000000008c3ffb in standard_ExecutorRun (queryDesc=0x604200016998, direction=ForwardScanDirection, count=0, execute_once=<optimized out>)\n at ../src/backend/executor/execMain.c:365\n#19 
0x00000000008c4125 in ExecutorRun (queryDesc=queryDesc@entry=0x604200016998, direction=direction@entry=ForwardScanDirection, count=count@entry=0,\n execute_once=<optimized out>) at ../src/backend/executor/execMain.c:309\n#20 0x0000000000d5d148 in PortalRunSelect (portal=portal@entry=0x607200028a18, forward=forward@entry=true, count=0, count@entry=9223372036854775807,\n dest=dest@entry=0x7f6c051abd28) at ../src/backend/tcop/pquery.c:924\n#21 0x0000000000d60dc8 in PortalRun (portal=portal@entry=0x607200028a18, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\n run_once=run_once@entry=true, dest=dest@entry=0x7f6c051abd28, altdest=altdest@entry=0x7f6c051abd28, qc=<optimized out>, qc@entry=0x7ffcbd2d3580)\n at ../src/backend/tcop/pquery.c:768\n#22 0x0000000000d595fd in exec_simple_query (\n query_string=query_string@entry=0x6082000cf238 \"...\n#23 0x0000000000d5c72c in PostgresMain (dbname=dbname@entry=0x60820000b378 \"postgres\", username=username@entry=0x60820000b358 \"telsasoft\")\n at ../src/backend/tcop/postgres.c:4632\n#24 0x0000000000bddc19 in BackendRun (port=port@entry=0x60300000fc40) at ../src/backend/postmaster/postmaster.c:4461\n#25 0x0000000000be2583 in BackendStartup (port=port@entry=0x60300000fc40) at ../src/backend/postmaster/postmaster.c:4189\n#26 0x0000000000be2a05 in ServerLoop () at ../src/backend/postmaster/postmaster.c:1779\n#27 0x0000000000be436b in PostmasterMain (argc=argc@entry=9, argv=argv@entry=0x600e0000df40) at ../src/backend/postmaster/postmaster.c:1463\n#28 0x00000000009c33d5 in main (argc=9, argv=0x600e0000df40) at ../src/backend/main/main.c:200\n\n(gdb) fr 4\n#4 0x0000000000db49a4 in copy_byval_expanded_array (eah=eah@entry=0x604200032718, oldeah=0x604200032718) at ../src/backend/utils/adt/array_expanded.c:195\n195 eah->dims = (int *) MemoryContextAlloc(objcxt, ndims * 2 * sizeof(int));\n(gdb) p ndims\n$1 = 1061043516\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 14 Apr 2023 15:36:30 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "On Sat, 15 Apr 2023 at 08:36, Justin Pryzby <[email protected]> wrote:\n>\n> I hit this elog() while testing reports under v16 and changed to PANIC\n> to help diagnose.\n>\n> DETAILS: PANIC: invalid memory alloc request size 18446744072967930808\n> CONTEXT: PL/pgSQL function array_weight(real[],real[]) while storing call arguments into local variables\n>\n> I can't share the query, data, nor plpgsql functions themselves.\n\nWhich aggregate function is being called here? Is it a custom\naggregate written in C, by any chance?\n\nDavid\n\n\n",
"msg_date": "Sat, 15 Apr 2023 10:04:52 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 10:04:52AM +1200, David Rowley wrote:\n> On Sat, 15 Apr 2023 at 08:36, Justin Pryzby <[email protected]> wrote:\n> >\n> > I hit this elog() while testing reports under v16 and changed to PANIC\n> > to help diagnose.\n> >\n> > DETAILS: PANIC: invalid memory alloc request size 18446744072967930808\n> > CONTEXT: PL/pgSQL function array_weight(real[],real[]) while storing call arguments into local variables\n> >\n> > I can't share the query, data, nor plpgsql functions themselves.\n> \n> Which aggregate function is being called here? Is it a custom\n> aggregate written in C, by any chance?\n\nThat function is not an aggregate:\n\n ts=# \\sf array_weight\n CREATE OR REPLACE FUNCTION public.array_weight(real[], real[])\n RETURNS real\n LANGUAGE plpgsql\n IMMUTABLE PARALLEL SAFE\n\nAnd we don't have any C code loaded to postgres. We do have polymorphic\naggregate functions using anycompatiblearray [*], and array_weight is\nbeing called several times with those aggregates as its arguments.\n\n*As in:\n\n9e38c2bb5093ceb0c04d6315ccd8975bd17add66\n97f73a978fc1aca59c6ad765548ce0096d95a923\n09878cdd489ff7aca761998e7cb104f4fd98ae02\n\n\n\n",
"msg_date": "Fri, 14 Apr 2023 17:48:19 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "On Sat, 15 Apr 2023 at 10:48, Justin Pryzby <[email protected]> wrote:\n>\n> On Sat, Apr 15, 2023 at 10:04:52AM +1200, David Rowley wrote:\n> > Which aggregate function is being called here? Is it a custom\n> > aggregate written in C, by any chance?\n>\n> That function is not an aggregate:\n\nThere's an aggregate somewhere as indicated by this fragment from the\nstack trace:\n\n> #12 project_aggregates (aggstate=aggstate@entry=0x607200070d38) at ../src/backend/executor/nodeAgg.c:1377\n> #13 0x0000000000903eb6 in agg_retrieve_direct (aggstate=aggstate@entry=0x607200070d38) at ../src/backend/executor/nodeAgg.c:2520\n> #14 0x0000000000904074 in ExecAgg (pstate=0x607200070d38) at ../src/backend/executor/nodeAgg.c:2172\n\nAny chance you could try and come up with a minimal reproducer? You\nhave access to see which aggregates are being used here and what data\ntypes are being given to them and then what's being done with the\nreturn value of that aggregate that's causing the crash. Maybe you\ncan still get the crash if you mock up some data to aggregate and\nstrip out the guts from the plpgsql functions that we're crashing on?\n\nDavid\n\n\n",
"msg_date": "Sat, 15 Apr 2023 11:33:58 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> Any chance you could try and come up with a minimal reproducer?\n\nYeah --- there's an awful lot of moving parts there, and a stack\ntrace is not much to go on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Apr 2023 21:02:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "Maybe you'll find valgrind errors to be helpful.\n\n==17971== Source and destination overlap in memcpy(0x1eb8c078, 0x1d88cb20, 123876054)\n==17971== at 0x4C2E81D: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==17971== by 0x9C705A: memcpy (string3.h:51)\n==17971== by 0x9C705A: pg_detoast_datum_copy (fmgr.c:1823)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n==17971== by 0x871CCE: PortalRun (pquery.c:768)\n==17971== by 0x86D552: exec_simple_query (postgres.c:1274)\n\n==17971== Invalid read of size 8\n==17971== at 0x4C2EA20: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==17971== by 0x9C705A: memcpy (string3.h:51)\n==17971== by 0x9C705A: pg_detoast_datum_copy (fmgr.c:1823)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n==17971== by 0x871CCE: PortalRun (pquery.c:768)\n==17971== by 0x86D552: exec_simple_query (postgres.c:1274)\n==17971== Address 0x1eb8c038 is 8 bytes before a block of size 123,876,112 alloc'd\n==17971== at 0x4C29F73: malloc (vg_replace_malloc.c:309)\n==17971== by 0x9E4204: AllocSetAlloc (aset.c:732)\n==17971== by 0x9ED5BD: palloc (mcxt.c:1224)\n==17971== by 0x9C704C: pg_detoast_datum_copy (fmgr.c:1821)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n\n==17971== Invalid read of size 8\n==17971== at 0x4C2EA28: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==17971== by 0x9C705A: memcpy (string3.h:51)\n==17971== by 0x9C705A: 
pg_detoast_datum_copy (fmgr.c:1823)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n==17971== by 0x871CCE: PortalRun (pquery.c:768)\n==17971== by 0x86D552: exec_simple_query (postgres.c:1274)\n==17971== Address 0x1eb8c030 is 16 bytes before a block of size 123,876,112 alloc'd\n==17971== at 0x4C29F73: malloc (vg_replace_malloc.c:309)\n==17971== by 0x9E4204: AllocSetAlloc (aset.c:732)\n==17971== by 0x9ED5BD: palloc (mcxt.c:1224)\n==17971== by 0x9C704C: pg_detoast_datum_copy (fmgr.c:1821)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n\n==17971== Invalid read of size 8\n==17971== at 0x4C2EA0C: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==17971== by 0x9C705A: memcpy (string3.h:51)\n==17971== by 0x9C705A: pg_detoast_datum_copy (fmgr.c:1823)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n==17971== by 0x871CCE: PortalRun (pquery.c:768)\n==17971== by 0x86D552: exec_simple_query (postgres.c:1274)\n==17971== Address 0x1eb8c028 is 24 bytes before a block of size 123,876,112 alloc'd\n==17971== at 0x4C29F73: malloc (vg_replace_malloc.c:309)\n==17971== by 0x9E4204: AllocSetAlloc (aset.c:732)\n==17971== by 0x9ED5BD: palloc (mcxt.c:1224)\n==17971== by 0x9C704C: pg_detoast_datum_copy (fmgr.c:1821)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler 
(pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n\n\n\n==17971== Invalid read of size 8\n==17971== at 0x4C2EA0C: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==17971== by 0x9C705A: memcpy (string3.h:51)\n==17971== by 0x9C705A: pg_detoast_datum_copy (fmgr.c:1823)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n==17971== by 0x871CCE: PortalRun (pquery.c:768)\n==17971== by 0x86D552: exec_simple_query (postgres.c:1274)\n==17971== Address 0x1eb8c028 is 24 bytes before a block of size 123,876,112 alloc'd\n==17971== at 0x4C29F73: malloc (vg_replace_malloc.c:309)\n==17971== by 0x9E4204: AllocSetAlloc (aset.c:732)\n==17971== by 0x9ED5BD: palloc (mcxt.c:1224)\n==17971== by 0x9C704C: pg_detoast_datum_copy (fmgr.c:1821)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode (executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n\n==17971== Invalid read of size 8\n==17971== at 0x4C2EA18: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==17971== by 0x9C705A: memcpy (string3.h:51)\n==17971== by 0x9C705A: pg_detoast_datum_copy (fmgr.c:1823)\n==17971== by 0x8952F8: expand_array (array_expanded.c:131)\n==17971== by 0x1E971A28: plpgsql_exec_function (pl_exec.c:556)\n==17971== by 0x1E97CF83: plpgsql_call_handler (pl_handler.c:277)\n==17971== by 0x6BFA4E: ExecInterpExpr (execExprInterp.c:733)\n==17971== by 0x6D9C8C: ExecEvalExprSwitchContext (executor.h:354)\n==17971== by 0x6D9C8C: ExecProject (executor.h:388)\n==17971== by 0x6D9C8C: project_aggregates (nodeAgg.c:1377)\n==17971== by 0x6DB2B4: agg_retrieve_direct (nodeAgg.c:2520)\n==17971== by 0x6DB2B4: ExecAgg (nodeAgg.c:2172)\n==17971== by 0x6C4821: ExecProcNode 
(executor.h:272)\n==17971== by 0x6C4821: ExecutePlan (execMain.c:1640)\n==17971== by 0x6C4821: standard_ExecutorRun (execMain.c:365)\n==17971== by 0x870535: PortalRunSelect (pquery.c:924)\n==17971== by 0x871CCE: PortalRun (pquery.c:768)\n==17971== by 0x86D552: exec_simple_query (postgres.c:1274)\n==17971== Address 0x1eb8c020 is 32 bytes before a block of size 123,879,328 in arena \"client\"\n\n\nAnother instance (compile locally rather than PGDG RPMs, and running the broken\ncommit rather than v16 HEAD):\n\n==30181== Source and destination overlap in memcpy(0x17691078, 0x15f6f8e0, 92126790)\n==30181== at 0x4C2E81D: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==30181== by 0x98C5DA: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6A1637: ExecProcNodeFirst (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6998EC: ExecutePlan (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n\n\n==30181== Invalid read of size 8\n==30181== at 0x4C2EA0C: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==30181== by 0x98C5DA: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6A1637: ExecProcNodeFirst (in 
/home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6998EC: ExecutePlan (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== Address 0x17691038 is 8 bytes before a block of size 92,126,848 alloc'd\n==30181== at 0x4C29F73: malloc (vg_replace_malloc.c:309)\n==30181== by 0x9A7980: AllocSetAlloc (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x9B01A7: palloc (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x98C5C9: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n\n\n==30181== Invalid read of size 8\n==30181== at 0x4C2EA18: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==30181== by 0x98C5DA: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6A1637: ExecProcNodeFirst (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6998EC: ExecutePlan (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== Address 0x17691030 is 16 bytes before a block of size 92,126,848 alloc'd\n==30181== at 0x4C29F73: malloc (vg_replace_malloc.c:309)\n==30181== by 0x9A7980: AllocSetAlloc (in 
/home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x9B01A7: palloc (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x98C5C9: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n\n==30181== Invalid read of size 8\n==30181== at 0x4C2EA20: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==30181== by 0x98C5DA: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6A1637: ExecProcNodeFirst (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6998EC: ExecutePlan (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== Address 0x17691028 is 24 bytes before a block of size 92,126,848 alloc'd\n==30181== at 0x4C29F73: malloc (vg_replace_malloc.c:309)\n==30181== by 0x9A7980: AllocSetAlloc (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x9B01A7: palloc (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x98C5C9: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in 
/home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181==\n==30181== Invalid read of size 8\n==30181== at 0x4C2EA28: memcpy@@GLIBC_2.14 (vg_replace_strmem.c:1035)\n==30181== by 0x98C5DA: pg_detoast_datum_copy (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x875ADC: expand_array (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x174757B7: plpgsql_exec_function (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x174806B5: plpgsql_call_handler (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/lib/plpgsql.so)\n==30181== by 0x694DBD: ExecInterpExpr (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x69131A: ExecInterpExprStillValid (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6AEF2F: project_aggregates (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0169: agg_retrieve_direct (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6B0215: ExecAgg (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6A1637: ExecProcNodeFirst (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== by 0x6998EC: ExecutePlan (in /home/pryzbyj/git/postgresql/build.autoconf/tmp_install/usr/local/pgsql/bin/postgres)\n==30181== Address 0x17691020 is 32 bytes before a block of size 92,127,136 in arena \"client\"\n\n\n",
"msg_date": "Fri, 14 Apr 2023 20:03:07 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "On Sat, 15 Apr 2023 at 13:03, Justin Pryzby <[email protected]> wrote:\n> Maybe you'll find valgrind errors to be helpful.\n\nI don't think that's really going to help. The crash already tells us\nthere's a problem down the line, but if the commit you mention is to\nblame for this, then the problem is elsewhere, either in our\nassumption that we can get away without the datumCopy() or in the\naggregate function producing the state that we're no longer copying.\n\nDavid\n\n\n",
"msg_date": "Sat, 15 Apr 2023 13:20:01 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> I don't think that's really going to help. The crash already tells us\n> there's a problem down the line, but if the commit you mention is to\n> blame for this, then the problem is elsewhere, either in our\n> assumption that we can get away without the datumCopy() or in the\n> aggregate function producing the state that we're no longer copying.\n\nIt does smell like the aggregate output has been corrupted by the time\nit got to the plpgsql function. I don't particularly want to try to\nsynthesize a test case from the essentially-zero SQL-level information\nwe've been provided, though. And I doubt we can track this down without\na test case. So please try to sanitize the case you have enough that\nyou can share it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Apr 2023 23:27:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "On Sat, Apr 15, 2023 at 11:33:58AM +1200, David Rowley wrote:\n> On Sat, 15 Apr 2023 at 10:48, Justin Pryzby <[email protected]> wrote:\n> >\n> > On Sat, Apr 15, 2023 at 10:04:52AM +1200, David Rowley wrote:\n> > > Which aggregate function is being called here? Is it a custom\n> > > aggregate written in C, by any chance?\n> >\n> > That function is not an aggregate:\n> \n> There's an aggregate somewhere as indicated by this fragment from the\n> stack trace:\n> \n> > #12 project_aggregates (aggstate=aggstate@entry=0x607200070d38) at ../src/backend/executor/nodeAgg.c:1377\n> > #13 0x0000000000903eb6 in agg_retrieve_direct (aggstate=aggstate@entry=0x607200070d38) at ../src/backend/executor/nodeAgg.c:2520\n> > #14 0x0000000000904074 in ExecAgg (pstate=0x607200070d38) at ../src/backend/executor/nodeAgg.c:2172\n> \n> Any chance you could try and come up with a minimal reproducer? You\n> have access to see which aggregates are being used here and what data\n> types are being given to them and then what's being done with the\n> return value of that aggregate that's causing the crash. Maybe you\n> can still get the crash if you mock up some data to aggregate and\n> strip out the guts from the plpgsql functions that we're crashing on?\n\nTry this",
"msg_date": "Sat, 15 Apr 2023 18:19:04 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Sat, Apr 15, 2023 at 11:33:58AM +1200, David Rowley wrote:\n>> Any chance you could try and come up with a minimal reproducer?\n\n> Try this\n\nThanks. I see the problem: finalize_aggregate is no longer forcing\na R/W expanded datum returned by the finalfn into R/O form. If\nwe re-use the aggregate result in multiple places, as this query\ndoes, then the first use can clobber the value for later uses.\n(The commit message specifically mentions this concern, so I wonder\nhow we failed to actually do it :-()\n\nA minimal fix would be to force to R/O before returning from\nfinalize_aggregate, but I wonder if we should do it later.\n\nBy the by, I couldn't help noticing that ExecAggTransReparent\ncompletely fails to do what its name promises it should do, ie\nreparent a R/W datum into the proper context instead of physically\ncopying it. That looks suspiciously like something that got broken\nduring some other refactoring somewhere along the line. That'd be a\nperformance bug not a correctness bug, but it should be looked into.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Apr 2023 12:26:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: invalid memory alloc request size 8488348128"
}
] |
[
{
"msg_contents": "Hi there,\n\nIn my implementation of CustomScan, when I have a query that scans multiple\ntables (select c from t1,t2,t3), the planner always picks one table to be\nscanned by CustomScan and offloads the rest to SeqScan. I tried assigning a\ncost of 0 to the CustomScan path, but still not working. BeginCustomScan\ngets executed, ExecCustomScan is skipped, and then EndCustomScan is\nexecuted for all the tables that are offloaded to Seq Scan. EXPLAIN shows\nthat always only one table is picked to be executed by CustomScan. Any idea\nwhat I might be doing wrong? Like a value in a struct I might be setting\nincorrectly?\n\nThanks!\n\n",
"msg_date": "Fri, 14 Apr 2023 16:33:05 -0700",
"msg_from": "Amin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scans are offloaded to SeqScan instead of CustomScan when there are\n multiple relations in the same query"
},
{
"msg_contents": "To simplify: Can CustomScan scan multiple relations in the same query or it\nwill always be assigned to one or zero relations?\n\nOn Fri, Apr 14, 2023 at 4:33 PM Amin <[email protected]> wrote:\n\n> Hi there,\n>\n> In my implementation of CustomScan, when I have a query that scans\n> multiple tables (select c from t1,t2,t3), the planner always picks one\n> table to be scanned by CustomScan and offloads the rest to SeqScan. I tried\n> assigning a cost of 0 to the CustomScan path, but still not working.\n> BeginCustomScan gets executed, ExecCustomScan is skipped, and then\n> EndCustomScan is executed for all the tables that are offloaded to Seq\n> Scan. EXPLAIN shows that always only one table is picked to be executed by\n> CustomScan. Any idea what I might be doing wrong? Like a value in a struct\n> I might be setting incorrectly?\n>\n> Thanks!\n>\n\n",
"msg_date": "Mon, 17 Apr 2023 15:34:18 -0700",
"msg_from": "Amin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scans are offloaded to SeqScan instead of CustomScan when there\n are multiple relations in the same query"
},
{
"msg_contents": "Amin <[email protected]> writes:\n> To simplify: Can CustomScan scan multiple relations in the same query or it\n> will always be assigned to one or zero relations?\n\nThere's barely any code in the core planner that is specific to custom\nscans. Almost certainly this misbehavior is the fault of your\ncustom-path-creation code. Maybe you're labeling the paths with the\nwrong parent relation, or forgetting to submit them to add_path,\nor assigning them costs that are high enough to get them rejected?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 18:45:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scans are offloaded to SeqScan instead of CustomScan when there\n are multiple relations in the same query"
},
{
"msg_contents": "Hi Tom,\n\nI made sure EXPLAIN returns CustomScan for all scans in the query. But\nstill, ExecCustomScan is only called once while the rest of the functions\nare called for each scan separately. Is this expected behavior? How to work\naround this?\n\nThank you!\n\nOn Mon, Apr 17, 2023 at 3:45 PM Tom Lane <[email protected]> wrote:\n\n> Amin <[email protected]> writes:\n> > To simplify: Can CustomScan scan multiple relations in the same query or\n> it\n> > will always be assigned to one or zero relations?\n>\n> There's barely any code in the core planner that is specific to custom\n> scans. Almost certainly this misbehavior is the fault of your\n> custom-path-creation code. Maybe you're labeling the paths with the\n> wrong parent relation, or forgetting to submit them to add_path,\n> or assigning them costs that are high enough to get them rejected?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Mon, 17 Apr 2023 18:03:08 -0700",
"msg_from": "Amin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scans are offloaded to SeqScan instead of CustomScan when there\n are multiple relations in the same query"
},
{
"msg_contents": "Amin <[email protected]> writes:\n> I made sure EXPLAIN returns CustomScan for all scans in the query. But\n> still, ExecCustomScan is only called once while the rest of the functions\n> are called for each scan separately. Is this expected behavior? How to work\n> around this?\n\n[shrug...] There's some bug in your code, which you've not shown us\n(not that I'm volunteering to review it in any detail).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 22:23:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scans are offloaded to SeqScan instead of CustomScan when there\n are multiple relations in the same query"
}
] |
[
{
"msg_contents": "Current version of postgresql don't support incremental sort using\nordered scan on access method.\n\nExample database:\n\nCREATE TABLE t (id serial, p point, PRIMARY KEY(id));\nINSERT INTO t (SELECT generate_series(1, 10000000), point(random(), random()));\nCREATE INDEX p_idx ON t USING gist(p);\nANALYZE;\n\nNow i want closest points to center:\n\nSELECT id, p <-> point(0.5, 0.5) dist FROM t ORDER BY dist LIMIT 10;\n\nEverything works good (Execution Time: 0.276 ms).\n\nNow i want predictable sorting for points with same distance:\n\nSELECT id, p <-> point(0.5, 0.5) dist FROM t ORDER BY dist, id LIMIT 10;\n\nExecution time is now 1 000 x slower (589.486 ms) and postgresql uses\nfull sort istead of incremental:\n\nSort (cost=205818.51..216235.18 rows=4166667 width=12)\n\nPostgres allows incremental sort only for ordered indexes. Function\nbuild_index_paths dont build partial order paths for access methods\nwith order support. My patch adds support for incremental ordering\nwith access method. Results with patch:\n\nIncremental Sort (cost=5522.10..1241841.02 rows=10000000 width=12)\n(actual time=0.404..0.405 rows=10 loops=1)\n Sort Key: ((p <-> '(0.5,0.5)'::point)), id\n Presorted Key: ((p <-> '(0.5,0.5)'::point))\n\nExecution Time: 0.437 ms",
"msg_date": "Sat, 15 Apr 2023 18:55:51 +0200",
"msg_from": "Miroslav Bendik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Sun, Apr 16, 2023 at 1:20 AM Miroslav Bendik <[email protected]>\nwrote:\n\n> Postgres allows incremental sort only for ordered indexes. Function\n> build_index_paths dont build partial order paths for access methods\n> with order support. My patch adds support for incremental ordering\n> with access method.\n\n\nI think this is a meaningful optimization. I reviewed the patch and\nhere are the comments from me.\n\n* I understand the new param 'match_pathkeys_length_p' is used to tell\nhow many presorted keys are useful. I think list_length(orderbyclauses)\nwill do the same. So there is no need to add the new param, thus we can\nreduce the code diffs.\n\n* Now that match_pathkeys_to_index() returns a prefix of the pathkeys\nrather than returns NIL immediately when there is a failure to match, it\nseems the two local variables 'orderby_clauses' and 'clause_columns' are\nnot necessary any more. I think we can instead lappend the matched\n'expr' and 'indexcol' to '*orderby_clauses_p' and '*clause_columns_p'\ndirectly. In this way we can still call 'return' when we come to a\nfailure to match.\n\n* In build_index_paths(), I think the diff can be reduced to\n\n- if (orderbyclauses)\n- useful_pathkeys = root->query_pathkeys;\n- else\n- useful_pathkeys = NIL;\n+ useful_pathkeys = list_truncate(list_copy(root->query_pathkeys),\n+ list_length(orderbyclauses));\n\n* Several comments in match_pathkeys_to_index() are out of date. We\nneed to revise them to cope with the change.\n\n* I think it's better to provide a test case.\n\nThanks\nRichard\n\nOn Sun, Apr 16, 2023 at 1:20 AM Miroslav Bendik <[email protected]> wrote:\nPostgres allows incremental sort only for ordered indexes. Function\nbuild_index_paths dont build partial order paths for access methods\nwith order support. My patch adds support for incremental ordering\nwith access method. I think this is a meaningful optimization. I reviewed the patch andhere are the comments from me.* I understand the new param 'match_pathkeys_length_p' is used to tellhow many presorted keys are useful. I think list_length(orderbyclauses)will do the same. So there is no need to add the new param, thus we canreduce the code diffs.* Now that match_pathkeys_to_index() returns a prefix of the pathkeysrather than returns NIL immediately when there is a failure to match, itseems the two local variables 'orderby_clauses' and 'clause_columns' arenot necessary any more. I think we can instead lappend the matched'expr' and 'indexcol' to '*orderby_clauses_p' and '*clause_columns_p'directly. In this way we can still call 'return' when we come to afailure to match.* In build_index_paths(), I think the diff can be reduced to- if (orderbyclauses)- useful_pathkeys = root->query_pathkeys;- else- useful_pathkeys = NIL;+ useful_pathkeys = list_truncate(list_copy(root->query_pathkeys),+ list_length(orderbyclauses));* Several comments in match_pathkeys_to_index() are out of date. Weneed to revise them to cope with the change.* I think it's better to provide a test case.ThanksRichard",
"msg_date": "Mon, 17 Apr 2023 21:25:54 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "po 17. 4. 2023 o 15:26 Richard Guo <[email protected]> napísal(a):\n>\n>\n> On Sun, Apr 16, 2023 at 1:20 AM Miroslav Bendik <[email protected]> wrote:\n>>\n>> Postgres allows incremental sort only for ordered indexes. Function\n>> build_index_paths dont build partial order paths for access methods\n>> with order support. My patch adds support for incremental ordering\n>> with access method.\n>\n>\n> I think this is a meaningful optimization. I reviewed the patch and\n> here are the comments from me.\n>\n> * I understand the new param 'match_pathkeys_length_p' is used to tell\n> how many presorted keys are useful. I think list_length(orderbyclauses)\n> will do the same. So there is no need to add the new param, thus we can\n> reduce the code diffs.\n>\n> * Now that match_pathkeys_to_index() returns a prefix of the pathkeys\n> rather than returns NIL immediately when there is a failure to match, it\n> seems the two local variables 'orderby_clauses' and 'clause_columns' are\n> not necessary any more. I think we can instead lappend the matched\n> 'expr' and 'indexcol' to '*orderby_clauses_p' and '*clause_columns_p'\n> directly. In this way we can still call 'return' when we come to a\n> failure to match.\n>\n> * In build_index_paths(), I think the diff can be reduced to\n>\n> - if (orderbyclauses)\n> - useful_pathkeys = root->query_pathkeys;\n> - else\n> - useful_pathkeys = NIL;\n> + useful_pathkeys = list_truncate(list_copy(root->query_pathkeys),\n> + list_length(orderbyclauses));\n>\n> * Several comments in match_pathkeys_to_index() are out of date. We\n> need to revise them to cope with the change.\n>\n> * I think it's better to provide a test case.\n>\n> Thanks\n> Richard\n\nThank you for advice,\nhere is an updated patch with proposed changes.\n\n-- \nBest regards\nMiroslav",
"msg_date": "Mon, 17 Apr 2023 21:41:25 +0200",
"msg_from": "Miroslav Bendik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 19:29, Miroslav Bendik <[email protected]> wrote:\n> here is an updated patch with proposed changes.\n\nHere's a quick review:\n\n1. I don't think this is required. match_pathkeys_to_index() sets\nthese to NIL and they're set accordingly by the other code paths.\n\n- List *orderbyclauses;\n- List *orderbyclausecols;\n+ List *orderbyclauses = NIL;\n+ List *orderbyclausecols = NIL;\n\n2. You can use list_copy_head(root->query_pathkeys,\nlist_length(orderbyclauses)); instead of:\n\n+ useful_pathkeys = list_truncate(list_copy(root->query_pathkeys),\n+ list_length(orderbyclauses));\n\n3. The following 2 changes don't seem to be needed:\n\n@@ -3104,11 +3100,11 @@ match_pathkeys_to_index(IndexOptInfo *index,\nList *pathkeys,\n /* Pathkey must request default sort order for the target opfamily */\n if (pathkey->pk_strategy != BTLessStrategyNumber ||\n pathkey->pk_nulls_first)\n- return;\n+ break;\n\n /* If eclass is volatile, no hope of using an indexscan */\n if (pathkey->pk_eclass->ec_has_volatile)\n- return;\n+ break;\n\nThere's no code after the loop you're breaking out of, so it seems to\nme that return is the same as break and there's no reason to change\nit.\n\nDavid\n\n\n",
"msg_date": "Tue, 18 Apr 2023 21:50:21 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "Thanks for feedback\n\n> 2. You can use list_copy_head(root->query_pathkeys,\n> list_length(orderbyclauses)); instead of:\n>\n> + useful_pathkeys = list_truncate(list_copy(root->query_pathkeys),\n> + list_length(orderbyclauses));\n\nThis code will crash if query_pathkeys is NIL. I need either modify\nlist_copy_head (v3.1) or add checks before call (v3.2).\n\nI don't know if it's a good idea to modify list_copy_head. It will add\nadditional overhead to every call.\n\n-- \nBest regards\nMiroslav",
"msg_date": "Wed, 19 Apr 2023 06:52:59 +0200",
"msg_from": "Miroslav Bendik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "Sorry for spamming, but I found a more elegant way to check if\nquery_paths is NIL without modified list_copy_head.\n\nHere is a third iteration of this patch.\n\n-- \nMiroslav",
"msg_date": "Wed, 19 Apr 2023 07:25:00 +0200",
"msg_from": "Miroslav Bendik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Wed, 19 Apr 2023 at 16:53, Miroslav Bendik <[email protected]> wrote:\n> > 2. You can use list_copy_head(root->query_pathkeys,\n> > list_length(orderbyclauses)); instead of:\n> >\n> > + useful_pathkeys = list_truncate(list_copy(root->query_pathkeys),\n> > + list_length(orderbyclauses));\n>\n> This code will crash if query_pathkeys is NIL. I need either modify\n> list_copy_head (v3.1) or add checks before call (v3.2).\n>\n> I don't know if it's a good idea to modify list_copy_head. It will add\n> additional overhead to every call.\n\nThat's a bug in list_copy_head(). Since NIL is how we represent empty\nLists, crashing on some valid representation of a List is not how it\nshould work.\n\nThat function is pretty new and was exactly added so we didn't have to\nwrite list_truncate(list_copy(...), n) anymore. That gets pretty\nwasteful when the input List is long and we only need a small portion\nof it.\n\nI've just pushed a fix to master for this. See [1]. If you base your\npatch atop of that you should be able to list list_copy_head() without\nany issues.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e35ded29566f679e52888a8d34468bb51bc78bed\n\n\n",
"msg_date": "Thu, 20 Apr 2023 10:37:49 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 6:38 AM David Rowley <[email protected]> wrote:\n\n> That function is pretty new and was exactly added so we didn't have to\n> write list_truncate(list_copy(...), n) anymore. That gets pretty\n> wasteful when the input List is long and we only need a small portion\n> of it.\n\n\nI searched the codes and found some other places where the manipulation\nof lists can be improved in a similar way.\n\n* lappend(list_copy(list), datum) as in get_required_extension().\nThis is not very efficient as after list_copy it would need to enlarge\nthe list immediately. It can be improved by inventing a new function,\nmaybe called list_append_copy, that do the copy and append all together.\n\n* lcons(datum, list_copy(list)) as in get_query_def().\nThis is also not efficient. Immediately after list_copy, we'd need to\nenlarge the list and move all the entries. It can also be improved by\ndoing all these things all together in one function.\n\n* lcons(datum, list_delete_nth_cell(list_copy(list), n)) as in\nsort_inner_and_outer.\nIt'd need to copy all the elements, and then delete the n'th entry which\nwould cause all following entries be moved, and then move all the\nremaining entries for lcons. Maybe we can invent a new function for it?\n\nSo is it worthwhile to improve these places?\n\nBesides, I found one place that can be improved the same way as what we\ndid in 9d299a49.\n\n--- a/src/backend/rewrite/rewriteSearchCycle.c\n+++ b/src/backend/rewrite/rewriteSearchCycle.c\n@@ -523,7 +523,7 @@ rewriteSearchAndCycle(CommonTableExpr *cte)\n\n fexpr = makeFuncExpr(F_INT8INC, INT8OID, list_make1(fs),\nInvalidOid, InvalidOid, COERCE_EXPLICIT_CALL);\n\n- lfirst(list_head(search_col_rowexpr->args)) = fexpr;\n+ linitial(search_col_rowexpr->args) = fexpr;\n\n\nAlso, in applyparallelworker.c we have the usage as\n\n TransactionId xid_tmp = lfirst_xid(list_nth_cell(subxactlist, i));\n\nI wonder if we can invent function list_nth_xid to do it, to keep\nconsistent with list_nth/list_nth_int/list_nth_oid.\n\nThanks\nRichard\n\nOn Thu, Apr 20, 2023 at 6:38 AM David Rowley <[email protected]> wrote:\nThat function is pretty new and was exactly added so we didn't have to\nwrite list_truncate(list_copy(...), n) anymore. That gets pretty\nwasteful when the input List is long and we only need a small portion\nof it.I searched the codes and found some other places where the manipulationof lists can be improved in a similar way.* lappend(list_copy(list), datum) as in get_required_extension().This is not very efficient as after list_copy it would need to enlargethe list immediately. It can be improved by inventing a new function,maybe called list_append_copy, that do the copy and append all together.* lcons(datum, list_copy(list)) as in get_query_def().This is also not efficient. Immediately after list_copy, we'd need toenlarge the list and move all the entries. It can also be improved bydoing all these things all together in one function.* lcons(datum, list_delete_nth_cell(list_copy(list), n)) as insort_inner_and_outer.It'd need to copy all the elements, and then delete the n'th entry whichwould cause all following entries be moved, and then move all theremaining entries for lcons. 
Maybe we can invent a new function for it?So is it worthwhile to improve these places?Besides, I found one place that can be improved the same way as what wedid in 9d299a49.--- a/src/backend/rewrite/rewriteSearchCycle.c+++ b/src/backend/rewrite/rewriteSearchCycle.c@@ -523,7 +523,7 @@ rewriteSearchAndCycle(CommonTableExpr *cte) fexpr = makeFuncExpr(F_INT8INC, INT8OID, list_make1(fs), InvalidOid, InvalidOid, COERCE_EXPLICIT_CALL);- lfirst(list_head(search_col_rowexpr->args)) = fexpr;+ linitial(search_col_rowexpr->args) = fexpr;Also, in applyparallelworker.c we have the usage as TransactionId xid_tmp = lfirst_xid(list_nth_cell(subxactlist, i));I wonder if we can invent function list_nth_xid to do it, to keepconsistent with list_nth/list_nth_int/list_nth_oid.ThanksRichard",
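Of the ideas above, the list_nth_xid() accessor is the most mechanical one. A sketch of what it could look like, mirroring the existing list_nth_oid()/list_nth_int() in pg_list.h; the function does not exist in core at this point, so the name is only the suggestion made here.

#include "postgres.h"
#include "nodes/pg_list.h"

/* hypothetical convenience accessor for XID lists, by analogy with list_nth_oid() */
static inline TransactionId
list_nth_xid(const List *list, int n)
{
    return lfirst_xid(list_nth_cell(list, n));
}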
"msg_date": "Thu, 20 Apr 2023 14:45:52 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "> I've just pushed a fix to master for this. See [1]. If you base your\n> patch atop of that you should be able to list list_copy_head() without\n> any issues.\n\nThanks for this fix. Now the version\nam_orderbyop_incremental_sort_v3.1.patch [1] works without issues\nusing the master branch.\n\n[1] https://www.postgresql.org/message-id/attachment/146450/am_orderbyop_incremental_sort_v3.1.patch\n\n\n",
"msg_date": "Thu, 20 Apr 2023 15:36:52 +0200",
"msg_from": "Miroslav Bendik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Thu, 20 Apr 2023 at 18:46, Richard Guo <[email protected]> wrote:\n>\n>\n> On Thu, Apr 20, 2023 at 6:38 AM David Rowley <[email protected]> wrote:\n>>\n>> That function is pretty new and was exactly added so we didn't have to\n>> write list_truncate(list_copy(...), n) anymore. That gets pretty\n>> wasteful when the input List is long and we only need a small portion\n>> of it.\n>\n> I searched the codes and found some other places where the manipulation\n> of lists can be improved in a similar way.\n\nI'd be happy to discuss our thought about List inefficiencies, but I\nthink to be fair to Miroslav, we should do that somewhere else. The\nlist_copy_head() discussion was directly related to his patch due to\nthe list of list_truncate(list_copy(..), ..). The other things you've\nmentioned are not. Feel free to start a thread and copy me in.\n\nDavid\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:42:48 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 5:43 AM David Rowley <[email protected]> wrote:\n\n> On Thu, 20 Apr 2023 at 18:46, Richard Guo <[email protected]> wrote:\n> > I searched the codes and found some other places where the manipulation\n> > of lists can be improved in a similar way.\n> I'd be happy to discuss our thought about List inefficiencies, but I\n> think to be fair to Miroslav, we should do that somewhere else. The\n> list_copy_head() discussion was directly related to his patch due to\n> the list of list_truncate(list_copy(..), ..). The other things you've\n> mentioned are not. Feel free to start a thread and copy me in.\n\n\nYeah, that's right. Thank you for the suggestion. I started a new\nthread here:\n\nhttps://www.postgresql.org/message-id/flat/CAMbWs49dJnpezDQDDxCPKq7%2B%3D_3NyqLqGqnhqCjd%2BdYe4MS15w%40mail.gmail.com\n\nThanks\nRichard\n\nOn Fri, Apr 21, 2023 at 5:43 AM David Rowley <[email protected]> wrote:On Thu, 20 Apr 2023 at 18:46, Richard Guo <[email protected]> wrote:\n> I searched the codes and found some other places where the manipulation\n> of lists can be improved in a similar way.\nI'd be happy to discuss our thought about List inefficiencies, but I\nthink to be fair to Miroslav, we should do that somewhere else. The\nlist_copy_head() discussion was directly related to his patch due to\nthe list of list_truncate(list_copy(..), ..). The other things you've\nmentioned are not. Feel free to start a thread and copy me in.Yeah, that's right. Thank you for the suggestion. I started a newthread here:https://www.postgresql.org/message-id/flat/CAMbWs49dJnpezDQDDxCPKq7%2B%3D_3NyqLqGqnhqCjd%2BdYe4MS15w%40mail.gmail.comThanksRichard",
"msg_date": "Fri, 21 Apr 2023 15:49:53 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 9:37 PM Miroslav Bendik <[email protected]>\nwrote:\n\n> Thanks for this fix. Now the version\n> am_orderbyop_incremental_sort_v3.1.patch [1] works without issues\n> using the master branch.\n\n\nThe v3.1 patch looks good to me, except that the comments around\nmatch_pathkeys_to_index still need some polish.\n\n1. For comment \"On success, the result list is ordered by pathkeys.\", I\nthink it'd be more accurate if we say something like \"On success, the\nresult list is ordered by pathkeys or a prefix list of pathkeys.\"\nconsidering the possibility of incremental sort.\n\n2. The comment below is not true anymore.\n\n /*\n * Note: for any failure to match, we just return NIL immediately.\n * There is no value in matching just some of the pathkeys.\n */\n\nWe should either remove it or change it to emphasize that we may return\na prefix of the pathkeys for incremental sort.\n\nBTW, would you please add the patch to the CF to not lose track of it?\n\nThanks\nRichard\n\nOn Thu, Apr 20, 2023 at 9:37 PM Miroslav Bendik <[email protected]> wrote:\nThanks for this fix. Now the version\nam_orderbyop_incremental_sort_v3.1.patch [1] works without issues\nusing the master branch.The v3.1 patch looks good to me, except that the comments aroundmatch_pathkeys_to_index still need some polish.1. For comment \"On success, the result list is ordered by pathkeys.\", Ithink it'd be more accurate if we say something like \"On success, theresult list is ordered by pathkeys or a prefix list of pathkeys.\"considering the possibility of incremental sort.2. The comment below is not true anymore. /* * Note: for any failure to match, we just return NIL immediately. * There is no value in matching just some of the pathkeys. */We should either remove it or change it to emphasize that we may returna prefix of the pathkeys for incremental sort.BTW, would you please add the patch to the CF to not lose track of it?ThanksRichard",
"msg_date": "Sun, 25 Jun 2023 16:18:33 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "Thanks, for suggestions.\n\nOn Sun 02. 07. 2023 at 10:18 Richard Guo <[email protected]> wrote:\n> 1. For comment \"On success, the result list is ordered by pathkeys.\", I\n> think it'd be more accurate if we say something like \"On success, the\n> result list is ordered by pathkeys or a prefix list of pathkeys.\"\n> considering the possibility of incremental sort.\n>\n> 2. The comment below is not true anymore.\n>\n> /*\n> * Note: for any failure to match, we just return NIL immediately.\n> * There is no value in matching just some of the pathkeys.\n> */\n> We should either remove it or change it to emphasize that we may return\n> a prefix of the pathkeys for incremental sort.\n\nComments are updated now.\n\n> BTW, would you please add the patch to the CF to not lose track of it?\n\nSubmitted <https://commitfest.postgresql.org/43/4433/>\n\n-- \nBest regards\nMiroslav",
"msg_date": "Sun, 2 Jul 2023 06:02:08 +0200",
"msg_from": "Miroslav Bendik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Sun, Jul 2, 2023 at 12:02 PM Miroslav Bendik <[email protected]>\nwrote:\n\n> Thanks, for suggestions.\n>\n> On Sun 02. 07. 2023 at 10:18 Richard Guo <[email protected]> wrote:\n> > 1. For comment \"On success, the result list is ordered by pathkeys.\", I\n> > think it'd be more accurate if we say something like \"On success, the\n> > result list is ordered by pathkeys or a prefix list of pathkeys.\"\n> > considering the possibility of incremental sort.\n> >\n> > 2. The comment below is not true anymore.\n> >\n> > /*\n> > * Note: for any failure to match, we just return NIL immediately.\n> > * There is no value in matching just some of the pathkeys.\n> > */\n> > We should either remove it or change it to emphasize that we may return\n> > a prefix of the pathkeys for incremental sort.\n>\n> Comments are updated now.\n>\n> > BTW, would you please add the patch to the CF to not lose track of it?\n>\n> Submitted <https://commitfest.postgresql.org/43/4433/>\n\n\nThe v4 patch looks good to me (maybe some cosmetic tweaks are still\nneeded for the comments). I think it's now 'Ready for Committer'.\n\nThanks\nRichard\n\nOn Sun, Jul 2, 2023 at 12:02 PM Miroslav Bendik <[email protected]> wrote:Thanks, for suggestions.\n\nOn Sun 02. 07. 2023 at 10:18 Richard Guo <[email protected]> wrote:\n> 1. For comment \"On success, the result list is ordered by pathkeys.\", I\n> think it'd be more accurate if we say something like \"On success, the\n> result list is ordered by pathkeys or a prefix list of pathkeys.\"\n> considering the possibility of incremental sort.\n>\n> 2. The comment below is not true anymore.\n>\n> /*\n> * Note: for any failure to match, we just return NIL immediately.\n> * There is no value in matching just some of the pathkeys.\n> */\n> We should either remove it or change it to emphasize that we may return\n> a prefix of the pathkeys for incremental sort.\n\nComments are updated now.\n\n> BTW, would you please add the patch to the CF to not lose track of it?\n\nSubmitted <https://commitfest.postgresql.org/43/4433/>The v4 patch looks good to me (maybe some cosmetic tweaks are stillneeded for the comments). I think it's now 'Ready for Committer'.ThanksRichard",
"msg_date": "Tue, 4 Jul 2023 16:12:39 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Tue, 4 Jul 2023 at 20:12, Richard Guo <[email protected]> wrote:\n> The v4 patch looks good to me (maybe some cosmetic tweaks are still\n> needed for the comments). I think it's now 'Ready for Committer'.\n\nI agree. I went and hit the comments with a large hammer and while\nthere also adjusted the regression tests. I didn't think having \"t\" as\na table name was a good idea as it seems like a name with a high risk\nof conflicting with a concurrently running test. Also, there didn't\nseem to be much need to insert data into that table as the tests\ndidn't query any of it.\n\nThe only other small tweak I made was to not call list_copy_head()\nwhen the list does not need to be shortened. It's likely not that\nimportant, but if the majority of cases are not partial matches, then\nwe'd otherwise be needlessly making copies of the list.\n\nI pushed the adjusted patch.\n\nDavid\n\n\n",
"msg_date": "Tue, 4 Jul 2023 23:15:45 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 7:15 PM David Rowley <[email protected]> wrote:\n\n> On Tue, 4 Jul 2023 at 20:12, Richard Guo <[email protected]> wrote:\n> > The v4 patch looks good to me (maybe some cosmetic tweaks are still\n> > needed for the comments). I think it's now 'Ready for Committer'.\n>\n> I agree. I went and hit the comments with a large hammer and while\n> there also adjusted the regression tests. I didn't think having \"t\" as\n> a table name was a good idea as it seems like a name with a high risk\n> of conflicting with a concurrently running test. Also, there didn't\n> seem to be much need to insert data into that table as the tests\n> didn't query any of it.\n>\n> The only other small tweak I made was to not call list_copy_head()\n> when the list does not need to be shortened. It's likely not that\n> important, but if the majority of cases are not partial matches, then\n> we'd otherwise be needlessly making copies of the list.\n>\n> I pushed the adjusted patch.\n\n\nThe adjustments improve the patch a lot. Thanks for adjusting and\npushing the patch.\n\nThanks\nRichard\n\nOn Tue, Jul 4, 2023 at 7:15 PM David Rowley <[email protected]> wrote:On Tue, 4 Jul 2023 at 20:12, Richard Guo <[email protected]> wrote:\n> The v4 patch looks good to me (maybe some cosmetic tweaks are still\n> needed for the comments). I think it's now 'Ready for Committer'.\n\nI agree. I went and hit the comments with a large hammer and while\nthere also adjusted the regression tests. I didn't think having \"t\" as\na table name was a good idea as it seems like a name with a high risk\nof conflicting with a concurrently running test. Also, there didn't\nseem to be much need to insert data into that table as the tests\ndidn't query any of it.\n\nThe only other small tweak I made was to not call list_copy_head()\nwhen the list does not need to be shortened. It's likely not that\nimportant, but if the majority of cases are not partial matches, then\nwe'd otherwise be needlessly making copies of the list.\n\nI pushed the adjusted patch.The adjustments improve the patch a lot. Thanks for adjusting andpushing the patch.ThanksRichard",
"msg_date": "Wed, 5 Jul 2023 14:15:48 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
},
{
"msg_contents": "On 7/5/23 2:15 AM, Richard Guo wrote:\r\n> \r\n> On Tue, Jul 4, 2023 at 7:15 PM David Rowley <[email protected] \r\n> <mailto:[email protected]>> wrote:\r\n> \r\n> On Tue, 4 Jul 2023 at 20:12, Richard Guo <[email protected]\r\n> <mailto:[email protected]>> wrote:\r\n> > The v4 patch looks good to me (maybe some cosmetic tweaks are still\r\n> > needed for the comments). I think it's now 'Ready for Committer'.\r\n> \r\n> I agree. I went and hit the comments with a large hammer and while\r\n> there also adjusted the regression tests. I didn't think having \"t\" as\r\n> a table name was a good idea as it seems like a name with a high risk\r\n> of conflicting with a concurrently running test. Also, there didn't\r\n> seem to be much need to insert data into that table as the tests\r\n> didn't query any of it.\r\n> \r\n> The only other small tweak I made was to not call list_copy_head()\r\n> when the list does not need to be shortened. It's likely not that\r\n> important, but if the majority of cases are not partial matches, then\r\n> we'd otherwise be needlessly making copies of the list.\r\n> \r\n> I pushed the adjusted patch.\r\n> \r\n> \r\n> The adjustments improve the patch a lot. Thanks for adjusting and\r\n> pushing the patch.\r\n\r\nThanks for working on this! While it allows the planner to consider \r\nchoosing an incremental sort for indexes that implement \r\n\"amcanorderbyop\", it also has a positive side-effect that the planner \r\nwill also consider choosing a plan for spawning parallel workers!\r\n\r\nBecause of that, I'd like to open the discussion that we consider \r\nbackpatching this. Currently, extensions that implement index access \r\nmethods (e.g. pgvector[1]) that are built primarily around \r\n\"amcanorderbyop\" are unable to get the planner to consider choosing a \r\nparallel scan, i.e. at this point in \"create_order_paths\"[2]:\r\n\r\n/*\r\n* If cheapest partial path doesn't need a sort, this is redundant\r\n* with what's already been tried.\r\n*/\r\nif (!pathkeys_contained_in(root->sort_pathkeys,\r\n cheapest_partial_path->pathkeys))\r\n\r\nHowever, 625d5b3c does unlock this path for these types of indexes to \r\nallow for a parallel index scan to be chosen, which would allow \r\nextensions that implement a \"amcanorderbyop\" scan to use it. I would \r\nargue that this is a bug, given we offer the ability for index access \r\nmethods to implement parallel index scans.\r\n\r\nThat said, I do think they may still need to be one planner tweak to \r\nproperly support parallel index scan in this case, as I have yet to see \r\ncosts generated where the parallel index scan is cheaper. However, I \r\nhave not yet narrowed what/where that is.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://github.com/pgvector/pgvector\r\n[2] \r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/optimizer/plan/planner.c;#l5188",
"msg_date": "Thu, 13 Jul 2023 09:20:33 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
}
] |
[
{
"msg_contents": "I noticed that prion recently failed [1] with\n\n SELECT schema_to_xml_and_xmlschema('testxmlschema', true, true, 'foo');\n...\n+ERROR: relation with OID 30019 does not exist\n+CONTEXT: SQL statement \"SELECT oid FROM pg_catalog.pg_class WHERE relnamespace = 29949 AND relkind IN ('r','m','v') AND pg_catalog.has_table_privilege (oid, 'SELECT') ORDER BY relname;\"\n\nWhat evidently happened here is that has_table_privilege got applied\nto some relation (in a schema different from 'testxmlschema', which\nshould be stable here) that was in the middle of getting dropped.\nThis'd require the relnamespace condition to get applied after the\nhas_table_privilege condition, which is quite possible (although it seems\nto require that auto-analyze update pg_class's statistics while this\ntest runs). Even then, has_table_privilege is supposed to survive the\nsituation, but it has a race condition:\n\n\tif (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(tableoid)))\n\t\tPG_RETURN_NULL();\n\n\taclresult = pg_class_aclcheck(tableoid, roleid, mode);\n\nThe fact that SearchSysCacheExists1 succeeds doesn't guarantee that\nwhen pg_class_aclcheck fetches the same row a moment later, it'll\nstill be there. The race-condition window is pretty narrow (and\nindeed I don't think we've seen this buildfarm symptom before),\nbut it exists.\n\nNow, it looks to me like this case is trivial to fix, using the\npg_class_aclcheck_ext function introduced in b12bd4869 (in v14).\nWe just need to drop the separate SearchSysCacheExists1 call and do\n\n\taclresult = pg_class_aclcheck_ext(tableoid, roleid, mode, &is_missing);\n\nwhich should be a trifle faster as well as safer.\n\nHowever, to cover the remaining risk spots in acl.c, it looks like\nwe'd need \"_ext\" versions of pg_attribute_aclcheck_all and\nobject_aclcheck, which were not introduced by b12bd4869.\nobject_aclcheck_ext in particular looks like it'd be a bit invasive.\n\nWhat I'm inclined to do for now is clean up the table-related cases\nand leave the code paths using object_aclcheck for another day.\nWe've always been much more concerned about DDL race conditions for\ntables than other kinds of objects, so this approach seems to fit\nwith past decisions. I haven't written any code yet, but this looks\nlike it might amount to a couple hundred lines of fairly simple\nchanges.\n\nI wonder if we should consider this a bug and back-patch to v14,\nor maybe HEAD only; or is it okay to let it slide to v17?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2023-04-15%2017%3A17%3A09\n\n\n",
"msg_date": "Sat, 15 Apr 2023 15:47:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Race conditions in has_table_privilege() and friends"
}
] |
[
{
"msg_contents": "Hi Team,\n\nPostgres Version:- 13.8\nIssue:- Logical replication failing with SSL SYSCALL error\nPriority:-High\n\nWe are migrating our database through logical replications, and all of\nsudden below error pops up in the source and target logs which leads us to\nnowhere.\n\n*Logs from Source:-*\nLOG: could not send data to client: Connection reset by peer\nSTATEMENT: COPY public.test TO STDOUT\nFATAL: connection to client lost\nSTATEMENT: COPY public.test TO STDOUT\n\n*Logs from Target:-*\n2023-04-15 19:07:02 UTC::@:[1250]:ERROR: could not receive data from WAL\nstream: SSL SYSCALL error: Connection timed out\n2023-04-15 19:07:02 UTC::@:[1250]:CONTEXT: COPY test, line 365326932\n2023-04-15 19:07:03 UTC::@:[505]:LOG: background worker \"logical\nreplication worker\" (PID 1250) exited with exit code 1\n2023-04-15 19:07:03 UTC::@:[7155]:LOG: logical replication table\nsynchronization worker for subscription \" sub_tables_2_180\", table \"test\"\nhas started\n2023-04-15 19:12:05\nUTC:10.144.19.34(33276):postgres@webadmit_staging:[7112]:WARNING:\nthere is no transaction in progress\n2023-04-15 19:14:08\nUTC:10.144.19.34(33324):postgres@webadmit_staging:[6052]:LOG:\ncould not receive data from client: Connection reset by peer\n2023-04-15 19:17:23 UTC::@:[2112]:ERROR: could not receive data from WAL\nstream: SSL SYSCALL error: Connection timed out\n2023-04-15 19:17:23 UTC::@:[1089]:ERROR: could not receive data from WAL\nstream: SSL SYSCALL error: Connection timed out\n2023-04-15 19:17:23 UTC::@:[2556]:ERROR: could not receive data from WAL\nstream: SSL SYSCALL error: Connection timed out\n2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\nreplication worker\" (PID 2556) exited with exit code 1\n2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\nreplication worker\" (PID 2112) exited with exit code 1\n2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\nreplication worker\" (PID 1089) exited with exit code 1\n2023-04-15 19:17:23 UTC::@:[7287]:LOG: logical replication apply worker for\nsubscription \"sub_tables_2_180\" has started\n2023-04-15 19:17:23 UTC::@:[7288]:LOG: logical replication apply worker for\nsubscription \"sub_tables_3_192\" has started\n2023-04-15 19:17:23 UTC::@:[7289]:LOG: logical replication apply worker for\nsubscription \"sub_tables_1_180\" has started\n\nJust after this error, all other replication slots get disabled for some\ntime and come back online along with COPY command with the new PID in\npg_stat_activity.\n\nI have a few queries regarding this:-\n\n 1. The exact reason for disconnection (Few articles claim memory and few\n network)\n 2. Will it lead to data inconsistency?\n 3. 
Does this new PID COPY command again migrate the whole data of the\n test table once again?\n\nPlease help we got stuck here.\n-- \nThanks and Regards,\nShaurya Jain\nemail:- [email protected]\n*Mobile:- +91-8802809405*\nLinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023\n\nHi Team,Postgres Version:- 13.8Issue:- Logical replication failing with SSL SYSCALL errorPriority:-HighWe are migrating our database through logical replications, and all of sudden below error pops up in the source and target logs which leads us to nowhere.Logs from Source:-LOG: could not send data to client: Connection reset by peerSTATEMENT: COPY public.test TO STDOUTFATAL: connection to client lostSTATEMENT: COPY public.test TO STDOUTLogs from Target:-2023-04-15 19:07:02 UTC::@:[1250]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:07:02 UTC::@:[1250]:CONTEXT: COPY test, line 3653269322023-04-15 19:07:03 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1250) exited with exit code 12023-04-15 19:07:03 UTC::@:[7155]:LOG: logical replication table synchronization worker for subscription \"\n\nsub_tables_2_180\", table \"test\" has started2023-04-15 19:12:05 UTC:10.144.19.34(33276):postgres@webadmit_staging:[7112]:WARNING: there is no transaction in progress2023-04-15 19:14:08 UTC:10.144.19.34(33324):postgres@webadmit_staging:[6052]:LOG: could not receive data from client: Connection reset by peer2023-04-15 19:17:23 UTC::@:[2112]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:17:23 UTC::@:[1089]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:17:23 UTC::@:[2556]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2556) exited with exit code 12023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2112) exited with exit code 12023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1089) exited with exit code 12023-04-15 19:17:23 UTC::@:[7287]:LOG: logical replication apply worker for subscription \"sub_tables_2_180\" has started2023-04-15 19:17:23 UTC::@:[7288]:LOG: logical replication apply worker for subscription \"sub_tables_3_192\" has started2023-04-15 19:17:23 UTC::@:[7289]:LOG: logical replication apply worker for subscription \"sub_tables_1_180\" has startedJust after this error, all other replication slots get disabled for some time and come back online along with COPY command with the new PID in pg_stat_activity.I have a few queries regarding this:-The exact reason for disconnection (Few articles claim memory and few network)Will it lead to data inconsistency?Does this new PID COPY command again migrate the whole data of the test table once again?Please help we got stuck here.-- Thanks and Regards,Shaurya Jainemail:- [email protected]:- +91-8802809405LinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023",
"msg_date": "Sun, 16 Apr 2023 02:40:57 +0530",
"msg_from": "shaurya jain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logical replication failed with SSL SYSCALL error"
},
{
"msg_contents": "Hi Team,\n\nCould you please help me with this, It's urgent for the production\nenvironment.\n\nOn Wed, Apr 19, 2023 at 3:44 PM shaurya jain <[email protected]> wrote:\n\n> Hi Team,\n>\n> Could you please help, It's urgent for the production env?\n>\n> On Sun, Apr 16, 2023 at 2:40 AM shaurya jain <[email protected]>\n> wrote:\n>\n>> Hi Team,\n>>\n>> Postgres Version:- 13.8\n>> Issue:- Logical replication failing with SSL SYSCALL error\n>> Priority:-High\n>>\n>> We are migrating our database through logical replications, and all of\n>> sudden below error pops up in the source and target logs which leads us to\n>> nowhere.\n>>\n>> *Logs from Source:-*\n>> LOG: could not send data to client: Connection reset by peer\n>> STATEMENT: COPY public.test TO STDOUT\n>> FATAL: connection to client lost\n>> STATEMENT: COPY public.test TO STDOUT\n>>\n>> *Logs from Target:-*\n>> 2023-04-15 19:07:02 UTC::@:[1250]:ERROR: could not receive data from WAL\n>> stream: SSL SYSCALL error: Connection timed out\n>> 2023-04-15 19:07:02 UTC::@:[1250]:CONTEXT: COPY test, line 365326932\n>> 2023-04-15 19:07:03 UTC::@:[505]:LOG: background worker \"logical\n>> replication worker\" (PID 1250) exited with exit code 1\n>> 2023-04-15 19:07:03 UTC::@:[7155]:LOG: logical replication table\n>> synchronization worker for subscription \" sub_tables_2_180\", table \"test\"\n>> has started\n>> 2023-04-15 19:12:05 UTC:10.144.19.34(33276):postgres@webadmit_staging:[7112]:WARNING:\n>> there is no transaction in progress\n>> 2023-04-15 19:14:08 UTC:10.144.19.34(33324):postgres@webadmit_staging:[6052]:LOG:\n>> could not receive data from client: Connection reset by peer\n>> 2023-04-15 19:17:23 UTC::@:[2112]:ERROR: could not receive data from WAL\n>> stream: SSL SYSCALL error: Connection timed out\n>> 2023-04-15 19:17:23 UTC::@:[1089]:ERROR: could not receive data from WAL\n>> stream: SSL SYSCALL error: Connection timed out\n>> 2023-04-15 19:17:23 UTC::@:[2556]:ERROR: could not receive data from WAL\n>> stream: SSL SYSCALL error: Connection timed out\n>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\n>> replication worker\" (PID 2556) exited with exit code 1\n>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\n>> replication worker\" (PID 2112) exited with exit code 1\n>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\n>> replication worker\" (PID 1089) exited with exit code 1\n>> 2023-04-15 19:17:23 UTC::@:[7287]:LOG: logical replication apply worker\n>> for subscription \"sub_tables_2_180\" has started\n>> 2023-04-15 19:17:23 UTC::@:[7288]:LOG: logical replication apply worker\n>> for subscription \"sub_tables_3_192\" has started\n>> 2023-04-15 19:17:23 UTC::@:[7289]:LOG: logical replication apply worker\n>> for subscription \"sub_tables_1_180\" has started\n>>\n>> Just after this error, all other replication slots get disabled for some\n>> time and come back online along with COPY command with the new PID in\n>> pg_stat_activity.\n>>\n>> I have a few queries regarding this:-\n>>\n>> 1. The exact reason for disconnection (Few articles claim memory and\n>> few network)\n>> 2. Will it lead to data inconsistency?\n>> 3. 
Does this new PID COPY command again migrate the whole data of the\n>> test table once again?\n>>\n>> Please help we got stuck here.\n>> --\n>> Thanks and Regards,\n>> Shaurya Jain\n>> email:- [email protected]\n>> *Mobile:- +91-8802809405*\n>> LinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023\n>>\n>>\n>\n> --\n> Thanks and Regards,\n> Shaurya Jain\n> email:- [email protected]\n> *Mobile:- +91-8802809405*\n> LinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023\n>\n>\n\n-- \nThanks and Regards,\nShaurya Jain\nemail:- [email protected]\n*Mobile:- +91-8802809405*\nLinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023\n\nHi Team,Could you please help me with this, It's urgent for the production environment.On Wed, Apr 19, 2023 at 3:44 PM shaurya jain <[email protected]> wrote:Hi Team,Could you please help, It's urgent for the production env?On Sun, Apr 16, 2023 at 2:40 AM shaurya jain <[email protected]> wrote:Hi Team,Postgres Version:- 13.8Issue:- Logical replication failing with SSL SYSCALL errorPriority:-HighWe are migrating our database through logical replications, and all of sudden below error pops up in the source and target logs which leads us to nowhere.Logs from Source:-LOG: could not send data to client: Connection reset by peerSTATEMENT: COPY public.test TO STDOUTFATAL: connection to client lostSTATEMENT: COPY public.test TO STDOUTLogs from Target:-2023-04-15 19:07:02 UTC::@:[1250]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:07:02 UTC::@:[1250]:CONTEXT: COPY test, line 3653269322023-04-15 19:07:03 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1250) exited with exit code 12023-04-15 19:07:03 UTC::@:[7155]:LOG: logical replication table synchronization worker for subscription \"\n\nsub_tables_2_180\", table \"test\" has started2023-04-15 19:12:05 UTC:10.144.19.34(33276):postgres@webadmit_staging:[7112]:WARNING: there is no transaction in progress2023-04-15 19:14:08 UTC:10.144.19.34(33324):postgres@webadmit_staging:[6052]:LOG: could not receive data from client: Connection reset by peer2023-04-15 19:17:23 UTC::@:[2112]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:17:23 UTC::@:[1089]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:17:23 UTC::@:[2556]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2556) exited with exit code 12023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2112) exited with exit code 12023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1089) exited with exit code 12023-04-15 19:17:23 UTC::@:[7287]:LOG: logical replication apply worker for subscription \"sub_tables_2_180\" has started2023-04-15 19:17:23 UTC::@:[7288]:LOG: logical replication apply worker for subscription \"sub_tables_3_192\" has started2023-04-15 19:17:23 UTC::@:[7289]:LOG: logical replication apply worker for subscription \"sub_tables_1_180\" has startedJust after this error, all other replication slots get disabled for some time and come back online along with COPY command with the new PID in pg_stat_activity.I have a few queries regarding this:-The exact reason for disconnection (Few articles claim memory and few network)Will it lead to data inconsistency?Does this new PID COPY 
command again migrate the whole data of the test table once again?Please help we got stuck here.-- Thanks and Regards,Shaurya Jainemail:- [email protected]:- +91-8802809405LinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023\n-- Thanks and Regards,Shaurya Jainemail:- [email protected]:- +91-8802809405LinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023\n-- Thanks and Regards,Shaurya Jainemail:- [email protected]:- +91-8802809405LinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023",
"msg_date": "Wed, 19 Apr 2023 17:26:06 +0530",
"msg_from": "shaurya jain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication failed with SSL SYSCALL error"
},
{
"msg_contents": "On Wed, 19 Apr 2023 at 17:26, shaurya jain <[email protected]> wrote:\n>\n> Hi Team,\n>\n> Could you please help me with this, It's urgent for the production environment.\n>\n> On Wed, Apr 19, 2023 at 3:44 PM shaurya jain <[email protected]> wrote:\n>>\n>> Hi Team,\n>>\n>> Could you please help, It's urgent for the production env?\n>>\n>> On Sun, Apr 16, 2023 at 2:40 AM shaurya jain <[email protected]> wrote:\n>>>\n>>> Hi Team,\n>>>\n>>> Postgres Version:- 13.8\n>>> Issue:- Logical replication failing with SSL SYSCALL error\n>>> Priority:-High\n>>>\n>>> We are migrating our database through logical replications, and all of sudden below error pops up in the source and target logs which leads us to nowhere.\n>>>\n>>> Logs from Source:-\n>>> LOG: could not send data to client: Connection reset by peer\n>>> STATEMENT: COPY public.test TO STDOUT\n>>> FATAL: connection to client lost\n>>> STATEMENT: COPY public.test TO STDOUT\n>>>\n>>> Logs from Target:-\n>>> 2023-04-15 19:07:02 UTC::@:[1250]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:07:02 UTC::@:[1250]:CONTEXT: COPY test, line 365326932\n>>> 2023-04-15 19:07:03 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1250) exited with exit code 1\n>>> 2023-04-15 19:07:03 UTC::@:[7155]:LOG: logical replication table synchronization worker for subscription \" sub_tables_2_180\", table \"test\" has started\n>>> 2023-04-15 19:12:05 UTC:10.144.19.34(33276):postgres@webadmit_staging:[7112]:WARNING: there is no transaction in progress\n>>> 2023-04-15 19:14:08 UTC:10.144.19.34(33324):postgres@webadmit_staging:[6052]:LOG: could not receive data from client: Connection reset by peer\n>>> 2023-04-15 19:17:23 UTC::@:[2112]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:17:23 UTC::@:[1089]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:17:23 UTC::@:[2556]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2556) exited with exit code 1\n>>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2112) exited with exit code 1\n>>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1089) exited with exit code 1\n>>> 2023-04-15 19:17:23 UTC::@:[7287]:LOG: logical replication apply worker for subscription \"sub_tables_2_180\" has started\n>>> 2023-04-15 19:17:23 UTC::@:[7288]:LOG: logical replication apply worker for subscription \"sub_tables_3_192\" has started\n>>> 2023-04-15 19:17:23 UTC::@:[7289]:LOG: logical replication apply worker for subscription \"sub_tables_1_180\" has started\n>>>\n>>> Just after this error, all other replication slots get disabled for some time and come back online along with COPY command with the new PID in pg_stat_activity.\n>>>\n>>> I have a few queries regarding this:-\n>>>\n>>> The exact reason for disconnection (Few articles claim memory and few network)\nThis might be because of network failure, did you notice any network\ninstability, could you check the TCP settings.\nYou could check the following configurations tcp_keepalives_idle,\ntcp_keepalives_interval and tcp_keepalives_count.\nThis means it will connect the server based on tcp_keepalives_idle\nseconds specified , if the server does not respond 
in\ntcp_keepalives_interval seconds it'll try again, and will consider the\nconnection gone after tcp_keepalives_count failures.\n\n>>> Will it lead to data inconsistency?\nIt will not lead to inconsistency. In case of failure the failed\ntransaction will be rolled back.\n\n>>> Does this new PID COPY command again migrate the whole data of the test table once again?\nYes, it will migrate the whole table data again in case of failures.\n\nRegards,\nVignesh\n\n\n",
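For what it's worth, the same keepalive knobs also exist as libpq connection parameters, and a subscription's CONNECTION string is ordinary libpq conninfo, so they can be tuned from the subscriber side as well. A standalone C sketch just to show the parameter names; the host, database, user, and timing values are placeholders.

#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

int
main(void)
{
    const char *conninfo =
        "host=publisher.example dbname=appdb user=repl "
        "keepalives=1 keepalives_idle=60 "        /* probe after 60 s idle   */
        "keepalives_interval=10 "                 /* retry every 10 s        */
        "keepalives_count=5";                     /* give up after 5 misses  */
    PGconn *conn = PQconnectdb(conninfo);

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    PQfinish(conn);
    return EXIT_SUCCESS;
}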
"msg_date": "Thu, 20 Apr 2023 11:49:13 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication failed with SSL SYSCALL error"
},
{
"msg_contents": "Hi Vignesh,\n\nThat's really prompt and solves our problem. Thank you buddy.\n\nPlease go through my inline comments:-\n\n\nOn Thu, Apr 20, 2023 at 11:49 AM vignesh C <[email protected]> wrote:\n\n> On Wed, 19 Apr 2023 at 17:26, shaurya jain <[email protected]> wrote:\n> >\n> > Hi Team,\n> >\n> > Could you please help me with this, It's urgent for the production\n> environment.\n> >\n> > On Wed, Apr 19, 2023 at 3:44 PM shaurya jain <[email protected]>\n> wrote:\n> >>\n> >> Hi Team,\n> >>\n> >> Could you please help, It's urgent for the production env?\n> >>\n> >> On Sun, Apr 16, 2023 at 2:40 AM shaurya jain <[email protected]>\n> wrote:\n> >>>\n> >>> Hi Team,\n> >>>\n> >>> Postgres Version:- 13.8\n> >>> Issue:- Logical replication failing with SSL SYSCALL error\n> >>> Priority:-High\n> >>>\n> >>> We are migrating our database through logical replications, and all of\n> sudden below error pops up in the source and target logs which leads us to\n> nowhere.\n> >>>\n> >>> Logs from Source:-\n> >>> LOG: could not send data to client: Connection reset by peer\n> >>> STATEMENT: COPY public.test TO STDOUT\n> >>> FATAL: connection to client lost\n> >>> STATEMENT: COPY public.test TO STDOUT\n> >>>\n> >>> Logs from Target:-\n> >>> 2023-04-15 19:07:02 UTC::@:[1250]:ERROR: could not receive data from\n> WAL stream: SSL SYSCALL error: Connection timed out\n> >>> 2023-04-15 19:07:02 UTC::@:[1250]:CONTEXT: COPY test, line 365326932\n> >>> 2023-04-15 19:07:03 UTC::@:[505]:LOG: background worker \"logical\n> replication worker\" (PID 1250) exited with exit code 1\n> >>> 2023-04-15 19:07:03 UTC::@:[7155]:LOG: logical replication table\n> synchronization worker for subscription \" sub_tables_2_180\", table \"test\"\n> has started\n> >>> 2023-04-15 19:12:05 UTC:10.144.19.34(33276):postgres@webadmit_staging:[7112]:WARNING:\n> there is no transaction in progress\n> >>> 2023-04-15 19:14:08 UTC:10.144.19.34(33324):postgres@webadmit_staging:[6052]:LOG:\n> could not receive data from client: Connection reset by peer\n> >>> 2023-04-15 19:17:23 UTC::@:[2112]:ERROR: could not receive data from\n> WAL stream: SSL SYSCALL error: Connection timed out\n> >>> 2023-04-15 19:17:23 UTC::@:[1089]:ERROR: could not receive data from\n> WAL stream: SSL SYSCALL error: Connection timed out\n> >>> 2023-04-15 19:17:23 UTC::@:[2556]:ERROR: could not receive data from\n> WAL stream: SSL SYSCALL error: Connection timed out\n> >>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\n> replication worker\" (PID 2556) exited with exit code 1\n> >>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\n> replication worker\" (PID 2112) exited with exit code 1\n> >>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical\n> replication worker\" (PID 1089) exited with exit code 1\n> >>> 2023-04-15 19:17:23 UTC::@:[7287]:LOG: logical replication apply\n> worker for subscription \"sub_tables_2_180\" has started\n> >>> 2023-04-15 19:17:23 UTC::@:[7288]:LOG: logical replication apply\n> worker for subscription \"sub_tables_3_192\" has started\n> >>> 2023-04-15 19:17:23 UTC::@:[7289]:LOG: logical replication apply\n> worker for subscription \"sub_tables_1_180\" has started\n> >>>\n> >>> Just after this error, all other replication slots get disabled for\n> some time and come back online along with COPY command with the new PID in\n> pg_stat_activity.\n> >>>\n> >>> I have a few queries regarding this:-\n> >>>\n> >>> The exact reason for disconnection (Few articles claim memory and few\n> 
network)\n> This might be because of network failure, did you notice any network\n> instability, could you check the TCP settings.\n> You could check the following configurations tcp_keepalives_idle,\n> tcp_keepalives_interval and tcp_keepalives_count.\n> This means it will connect the server based on tcp_keepalives_idle\n> seconds specified , if the server does not respond in\n> tcp_keepalives_interval seconds it'll try again, and will consider the\n> connection gone after tcp_keepalives_count failures. ---Yes you were\n> correct, that ssue was related to network where VPN tunnel got restarted\n> because of some miss configuration at tunnel side. By fixing that it\n> stands resolved so far. These params were set to below values:-\n\n\n 1. keepalives_idle 60\n 2. keepalives_interval 100\n 3. keepalives_count 60\n\n\n> >>> Will it lead to data inconsistency?\n> It will not lead to inconsistency. In case of failure the failed\n> transaction will be rolled back. Yes, Migration was up to the mark after\n> fixing network.\n>\n> >>> Does this new PID COPY command again migrate the whole data of the\n> test table once again?\n> Yes, it will migrate the whole table data again in case of failures. Yes,\n> I follow you on that. Is there any way to rsync instead of simple copy?\n>\n> Regards,\n> Vignesh\n>\n\n\n-- \nThanks and Regards,\nShaurya Jain\nemail:- [email protected]\n*Mobile:- +91-8802809405*\nLinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023\n\nHi Vignesh,That's really prompt and solves our problem. Thank you buddy.Please go through my inline comments:-On Thu, Apr 20, 2023 at 11:49 AM vignesh C <[email protected]> wrote:On Wed, 19 Apr 2023 at 17:26, shaurya jain <[email protected]> wrote:\n>\n> Hi Team,\n>\n> Could you please help me with this, It's urgent for the production environment.\n>\n> On Wed, Apr 19, 2023 at 3:44 PM shaurya jain <[email protected]> wrote:\n>>\n>> Hi Team,\n>>\n>> Could you please help, It's urgent for the production env?\n>>\n>> On Sun, Apr 16, 2023 at 2:40 AM shaurya jain <[email protected]> wrote:\n>>>\n>>> Hi Team,\n>>>\n>>> Postgres Version:- 13.8\n>>> Issue:- Logical replication failing with SSL SYSCALL error\n>>> Priority:-High\n>>>\n>>> We are migrating our database through logical replications, and all of sudden below error pops up in the source and target logs which leads us to nowhere.\n>>>\n>>> Logs from Source:-\n>>> LOG: could not send data to client: Connection reset by peer\n>>> STATEMENT: COPY public.test TO STDOUT\n>>> FATAL: connection to client lost\n>>> STATEMENT: COPY public.test TO STDOUT\n>>>\n>>> Logs from Target:-\n>>> 2023-04-15 19:07:02 UTC::@:[1250]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:07:02 UTC::@:[1250]:CONTEXT: COPY test, line 365326932\n>>> 2023-04-15 19:07:03 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1250) exited with exit code 1\n>>> 2023-04-15 19:07:03 UTC::@:[7155]:LOG: logical replication table synchronization worker for subscription \" sub_tables_2_180\", table \"test\" has started\n>>> 2023-04-15 19:12:05 UTC:10.144.19.34(33276):postgres@webadmit_staging:[7112]:WARNING: there is no transaction in progress\n>>> 2023-04-15 19:14:08 UTC:10.144.19.34(33324):postgres@webadmit_staging:[6052]:LOG: could not receive data from client: Connection reset by peer\n>>> 2023-04-15 19:17:23 UTC::@:[2112]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:17:23 
UTC::@:[1089]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:17:23 UTC::@:[2556]:ERROR: could not receive data from WAL stream: SSL SYSCALL error: Connection timed out\n>>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2556) exited with exit code 1\n>>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 2112) exited with exit code 1\n>>> 2023-04-15 19:17:23 UTC::@:[505]:LOG: background worker \"logical replication worker\" (PID 1089) exited with exit code 1\n>>> 2023-04-15 19:17:23 UTC::@:[7287]:LOG: logical replication apply worker for subscription \"sub_tables_2_180\" has started\n>>> 2023-04-15 19:17:23 UTC::@:[7288]:LOG: logical replication apply worker for subscription \"sub_tables_3_192\" has started\n>>> 2023-04-15 19:17:23 UTC::@:[7289]:LOG: logical replication apply worker for subscription \"sub_tables_1_180\" has started\n>>>\n>>> Just after this error, all other replication slots get disabled for some time and come back online along with COPY command with the new PID in pg_stat_activity.\n>>>\n>>> I have a few queries regarding this:-\n>>>\n>>> The exact reason for disconnection (Few articles claim memory and few network)\nThis might be because of network failure, did you notice any network\ninstability, could you check the TCP settings.\nYou could check the following configurations tcp_keepalives_idle,\ntcp_keepalives_interval and tcp_keepalives_count.\nThis means it will connect the server based on tcp_keepalives_idle\nseconds specified , if the server does not respond in\ntcp_keepalives_interval seconds it'll try again, and will consider the\nconnection gone after tcp_keepalives_count failures. ---Yes you were correct, that ssue was related to network where VPN tunnel got restarted because of some miss configuration at tunnel side. By fixing that it stands resolved so far. These params were set to below values:-keepalives_idle 60\nkeepalives_interval 100\nkeepalives_count 60\n\n>>> Will it lead to data inconsistency?\nIt will not lead to inconsistency. In case of failure the failed\ntransaction will be rolled back. Yes, Migration was up to the mark after fixing network.\n\n>>> Does this new PID COPY command again migrate the whole data of the test table once again?\nYes, it will migrate the whole table data again in case of failures. Yes, I follow you on that. Is there any way to rsync instead of simple copy?\n\nRegards,\nVignesh\n-- Thanks and Regards,Shaurya Jainemail:- [email protected]:- +91-8802809405LinkedIn:- https://www.linkedin.com/in/shaurya-jain-74353023",
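A note for readers of this thread: the keepalives_idle, keepalives_interval and keepalives_count values quoted above are libpq connection parameters (set on the subscriber side, e.g. in the subscription's connection string), while tcp_keepalives_idle, tcp_keepalives_interval and tcp_keepalives_count are the corresponding server-side settings. As a minimal, illustrative SQL sketch of how the server-side settings can be inspected and adjusted (the values below are only examples, not tuning recommendations):

    -- current values; 0 means the operating system default is used
    SHOW tcp_keepalives_idle;
    SHOW tcp_keepalives_interval;
    SHOW tcp_keepalives_count;

    -- persist new values and reload the configuration;
    -- takes effect for subsequently opened connections
    ALTER SYSTEM SET tcp_keepalives_idle = 60;
    ALTER SYSTEM SET tcp_keepalives_interval = 10;
    ALTER SYSTEM SET tcp_keepalives_count = 6;
    SELECT pg_reload_conf();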
"msg_date": "Mon, 24 Apr 2023 08:29:44 +0530",
"msg_from": "shaurya jain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication failed with SSL SYSCALL error"
}
] |
[
{
"msg_contents": "Hi\n\nI missing some variants of to_regclass\n\nto_regclass(schemaname, objectname)\nto_regclass(catalogname, schemaname, objectname)\n\nIt can helps with object identification, when I have separated schema and\nname\n\nWhat do you think about this?\n\nRegards\n\nPavel\n\nHiI missing some variants of to_regclass to_regclass(schemaname, objectname)to_regclass(catalogname, schemaname, objectname)It can helps with object identification, when I have separated schema and nameWhat do you think about this?RegardsPavel",
"msg_date": "Sun, 16 Apr 2023 06:28:13 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "idea: multiple arguments to_regclass function"
},
{
"msg_contents": "Pavel Stehule <[email protected]> writes:\n> I missing some variants of to_regclass\n\n> to_regclass(schemaname, objectname)\n> to_regclass(catalogname, schemaname, objectname)\n\nCan do that already:\n\nto_regclass(quote_ident(schemaname) || '.' || quote_ident(objectname))\n\nI'm not eager to build nine more to_reg* functions to do the equivalent\nof that, and even less eager to build eighteen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Apr 2023 10:23:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: idea: multiple arguments to_regclass function"
},
{
"msg_contents": "ne 16. 4. 2023 v 16:23 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > I missing some variants of to_regclass\n>\n> > to_regclass(schemaname, objectname)\n> > to_regclass(catalogname, schemaname, objectname)\n>\n> Can do that already:\n>\n> to_regclass(quote_ident(schemaname) || '.' || quote_ident(objectname))\n>\n> I'm not eager to build nine more to_reg* functions to do the equivalent\n> of that, and even less eager to build eighteen.\n>\n\nYes, I can. But there is overhead with escaping and string concatenation.\nAnd it is a little bit sad, so immediately the parser has to do an inverse\nprocess.\n\nMaybe we can introduce only three functions\n\nanyelement get_object(catalogname name, schemaname name, objectname name,\nreturntype anyelement)\nanyelement get_object(schemaname name, objectname name, returntype\nanyelement)\nanyelement get_object(objectname name, returntype anyelement)\n\nso usage can be like\n\nDECLATE _tab regclass;\nBEGIN\n _tab := get_object('public', 'mytab', _tab);\n ..\n\n?\n\nRegards\n\nPavel\n\n\n\n\n\n\n\n> regards, tom lane\n>\n\nne 16. 4. 2023 v 16:23 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> I missing some variants of to_regclass\n\n> to_regclass(schemaname, objectname)\n> to_regclass(catalogname, schemaname, objectname)\n\nCan do that already:\n\nto_regclass(quote_ident(schemaname) || '.' || quote_ident(objectname))\n\nI'm not eager to build nine more to_reg* functions to do the equivalent\nof that, and even less eager to build eighteen.Yes, I can. But there is overhead with escaping and string concatenation. And it is a little bit sad, so immediately the parser has to do an inverse process.Maybe we can introduce only three functionsanyelement get_object(catalogname name, schemaname name, objectname name, returntype anyelement)anyelement get_object(schemaname name, objectname name, returntype anyelement)anyelement get_object(objectname name, returntype anyelement)so usage can be like DECLATE _tab regclass;BEGIN _tab := get_object('public', 'mytab', _tab); ..?RegardsPavel\n\n regards, tom lane",
"msg_date": "Sun, 16 Apr 2023 17:55:43 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: idea: multiple arguments to_regclass function"
}
] |
[
{
"msg_contents": "I was just looking at the vacuumdb docs and noticed that I had\nneglected to follow the tradition of adding a note to mention which\nversion we added the new option in when I committed the\n--buffer-usage-limit patch.\n\nThere are 3 notes there that read \"This option is only available for\nservers running PostgreSQL 9.6 and later.\". Since 9.6 is a few years\nout of support, can we get rid of these 3?\n\nOr better yet, can we just delete them all? Is it really worth doing\nthis in case someone is using a new vacuumdb on an older server?\n\nI just tried compiling the HTML with all the notes removed, I see from\nlooking at a print preview that it's now ~1 full A4 page shorter than\nit was previously. 5 pages down to 4.\n\nDoes anyone think we should keep these?\n\nDavid\n\n\n",
"msg_date": "Sun, 16 Apr 2023 22:14:35 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can we delete the vacuumdb.sgml notes about which version each option\n was added in?"
},
{
"msg_contents": "On Sun, Apr 16, 2023 at 10:14:35PM +1200, David Rowley wrote:\n> There are 3 notes there that read \"This option is only available for\n> servers running PostgreSQL 9.6 and later.\". Since 9.6 is a few years\n> out of support, can we get rid of these 3?\n\n+1\n\n> Or better yet, can we just delete them all? Is it really worth doing\n> this in case someone is using a new vacuumdb on an older server?\n> \n> I just tried compiling the HTML with all the notes removed, I see from\n> looking at a print preview that it's now ~1 full A4 page shorter than\n> it was previously. 5 pages down to 4.\n> \n> Does anyone think we should keep these?\n\nI'm +0.5 for removing all of them. While they are still relevant and could\npotentially help users, these notes are taking up a rather big portion of\nthe vacuumdb page, and it should print a nice error message if you try to\nuse an option on and older server, anyway.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 16 Apr 2023 06:01:16 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we delete the vacuumdb.sgml notes about which version each\n option was added in?"
},
{
"msg_contents": "On Sun, Apr 16, 2023 at 10:14:35PM +1200, David Rowley wrote:\n> I was just looking at the vacuumdb docs and noticed that I had\n> neglected to follow the tradition of adding a note to mention which\n> version we added the new option in when I committed the\n> --buffer-usage-limit patch.\n> \n> There are 3 notes there that read \"This option is only available for\n> servers running PostgreSQL 9.6 and later.\". Since 9.6 is a few years\n> out of support, can we get rid of these 3?\n> \n> Or better yet, can we just delete them all? Is it really worth doing\n> this in case someone is using a new vacuumdb on an older server?\n> \n> I just tried compiling the HTML with all the notes removed, I see from\n> looking at a print preview that it's now ~1 full A4 page shorter than\n> it was previously. 5 pages down to 4.\n> \n> Does anyone think we should keep these?\n\nI don't know if I'd support removing the notes, but I agree that they\ndon't need to take up anywhere near as much space as they do (especially\nsince the note is now repeated 10 times).\n\nhttps://www.postgresql.org/docs/devel/app-vacuumdb.html\n\nI suggest to remove the <note> markup and preserve the annotation about\nversion compatibility. It's normal, technical writing to repeat the\nsame language like that.\n\nAnother, related improvement I suggested would be to group the\nclient-side options separately from the server-side options.\nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 16 Apr 2023 08:02:44 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we delete the vacuumdb.sgml notes about which version each\n option was added in?"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Sun, Apr 16, 2023 at 10:14:35PM +1200, David Rowley wrote:\n>> Does anyone think we should keep these?\n\n> I don't know if I'd support removing the notes, but I agree that they\n> don't need to take up anywhere near as much space as they do (especially\n> since the note is now repeated 10 times).\n\nI agree with removing the notes. It has always been our policy that\nyou should read the version of the manual that applies to the version\nyou're running. I can see leaving a compatibility note around for a\nlong time when it's warning you that some behavior changed compared\nto what the same syntax used to do. But if a switch simply isn't\nthere in some older version, that's not terribly dangerous or hard to\nfigure out.\n\n> I suggest to remove the <note> markup and preserve the annotation about\n> version compatibility. It's normal, technical writing to repeat the\n> same language like that.\n\nAnother way could be to move them all into a \"Compatibility\" section.\nBut +1 for just dropping them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Apr 2023 10:29:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can we delete the vacuumdb.sgml notes about which version each\n option was added in?"
},
{
"msg_contents": "On Mon, 17 Apr 2023 at 02:29, Tom Lane <[email protected]> wrote:\n> But +1 for just dropping them.\n\nThanks. I just pushed the patch to drop them all.\n\nDavid\n\n\n",
"msg_date": "Mon, 17 Apr 2023 09:31:24 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can we delete the vacuumdb.sgml notes about which version each\n option was added in?"
}
] |
[
{
"msg_contents": "The PGDATABASE is documented as behaving the same as the dbname connection\nparameter but they differ in the support for postgres:// URIs: the\nPGDATABASE will never be expanded even thought expand_dbname is set:\n\n\t$ psql postgres://localhost/test -c 'select 1' >/dev/null # Works\n\t$ PGDATABASE=postgres://localhost/test psql -c 'select 1' >/dev/null # Doesn't work\n\tpsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: database \"postgres://localhost/test\" does not exist\n\nIn the second command the postgres://localhost/test string has not been\ninterpreted and the connection fails.\n\nIn order to make PGDATABASE and dbname behave the same this patch adds\nURIs support to the environment variable. This makes it convenient for\nusers to pass a single connection string when environment variables are\nused.\n\nWhen both PGDATABASE and dbname are a connection string, the values of\ndbname will override the ones of PGDATABASE, e.g.\n\n\t$ PGDATABASE=postgres://localhost/test?sslmode=require psql postgres://host/db\n\nis equivalent to\n\n\t$ psql postgres://host/db?sslmode=require\n\nI did not write tests for this patch as I am not sure where to put them\nsince libpq_uri_regress uses PQconninfoParse() that does not read the\nenvironment variables.\n---\n src/interfaces/libpq/fe-connect.c | 52 +++++++++++++++++++++++++++----\n 1 file changed, 46 insertions(+), 6 deletions(-)\n\ndiff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c\nindex fcd3d0d9a3..64d6685ad8 100644\n--- a/src/interfaces/libpq/fe-connect.c\n+++ b/src/interfaces/libpq/fe-connect.c\n@@ -412,7 +412,7 @@ static PQconninfoOption *conninfo_array_parse(const char *const *keywords,\n \t\t\t\t\t\t\t\t\t\t\t const char *const *values, PQExpBuffer errorMessage,\n \t\t\t\t\t\t\t\t\t\t\t bool use_defaults, int expand_dbname);\n static bool conninfo_add_defaults(PQconninfoOption *options,\n-\t\t\t\t\t\t\t\t PQExpBuffer errorMessage);\n+\t\t\t\t\t\t\t\t PQExpBuffer errorMessage, int expand_dbname);\n static PQconninfoOption *conninfo_uri_parse(const char *uri,\n \t\t\t\t\t\t\t\t\t\t\tPQExpBuffer errorMessage, bool use_defaults);\n static bool conninfo_uri_parse_options(PQconninfoOption *options,\n@@ -1791,7 +1791,7 @@ PQconndefaults(void)\n \tif (connOptions != NULL)\n \t{\n \t\t/* pass NULL errorBuf to ignore errors */\n-\t\tif (!conninfo_add_defaults(connOptions, NULL))\n+\t\tif (!conninfo_add_defaults(connOptions, NULL, false))\n \t\t{\n \t\t\tPQconninfoFree(connOptions);\n \t\t\tconnOptions = NULL;\n@@ -6084,7 +6084,7 @@ conninfo_parse(const char *conninfo, PQExpBuffer errorMessage,\n \t */\n \tif (use_defaults)\n \t{\n-\t\tif (!conninfo_add_defaults(options, errorMessage))\n+\t\tif (!conninfo_add_defaults(options, errorMessage, false))\n \t\t{\n \t\t\tPQconninfoFree(options);\n \t\t\treturn NULL;\n@@ -6250,7 +6250,7 @@ conninfo_array_parse(const char *const *keywords, const char *const *values,\n \t */\n \tif (use_defaults)\n \t{\n-\t\tif (!conninfo_add_defaults(options, errorMessage))\n+\t\tif (!conninfo_add_defaults(options, errorMessage, expand_dbname))\n \t\t{\n \t\t\tPQconninfoFree(options);\n \t\t\treturn NULL;\n@@ -6272,7 +6272,7 @@ conninfo_array_parse(const char *const *keywords, const char *const *values,\n * NULL.\n */\n static bool\n-conninfo_add_defaults(PQconninfoOption *options, PQExpBuffer errorMessage)\n+conninfo_add_defaults(PQconninfoOption *options, PQExpBuffer errorMessage, int expand_dbname)\n {\n \tPQconninfoOption *option;\n 
\tPQconninfoOption *sslmode_default = NULL,\n@@ -6296,6 +6296,46 @@ conninfo_add_defaults(PQconninfoOption *options, PQExpBuffer errorMessage)\n \t\tif (strcmp(option->keyword, \"sslrootcert\") == 0)\n \t\t\tsslrootcert = option;\t/* save for later */\n \n+\t\tif (expand_dbname && strcmp(option->keyword, \"dbname\") == 0)\n+\t\t{\n+\t\t\tif ((tmp = getenv(option->envvar)) != NULL && recognized_connection_string(tmp))\n+\t\t\t{\n+\t\t\t\tPQconninfoOption *str_option,\n+\t\t\t\t\t\t\t\t *dbname_options = parse_connection_string(tmp, errorMessage, false);\n+\n+\t\t\t\tif (dbname_options == NULL)\n+\t\t\t\t\treturn false;\n+\n+\t\t\t\tfor (str_option = dbname_options; str_option->keyword != NULL; str_option++)\n+\t\t\t\t{\n+\t\t\t\t\tif (str_option->val != NULL)\n+\t\t\t\t\t{\n+\t\t\t\t\t\tint\t\t\tk;\n+\n+\t\t\t\t\t\tfor (k = 0; options[k].keyword; k++)\n+\t\t\t\t\t\t{\n+\t\t\t\t\t\t\tif (strcmp(options[k].keyword, str_option->keyword) == 0)\n+\t\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t\tif (options[k].val != NULL)\n+\t\t\t\t\t\t\t\t\tcontinue;\n+\n+\t\t\t\t\t\t\t\toptions[k].val = strdup(str_option->val);\n+\t\t\t\t\t\t\t\tif (!options[k].val)\n+\t\t\t\t\t\t\t\t{\n+\t\t\t\t\t\t\t\t\tlibpq_append_error(errorMessage, \"out of memory\");\n+\t\t\t\t\t\t\t\t\tPQconninfoFree(dbname_options);\n+\t\t\t\t\t\t\t\t\treturn false;\n+\t\t\t\t\t\t\t\t}\n+\t\t\t\t\t\t\t\tbreak;\n+\t\t\t\t\t\t\t}\n+\t\t\t\t\t\t}\n+\t\t\t\t\t}\n+\t\t\t\t}\n+\t\t\t\tPQconninfoFree(dbname_options);\n+\t\t\t\tcontinue;\n+\t\t\t}\n+\t\t}\n+\n \t\tif (option->val != NULL)\n \t\t\tcontinue;\t\t\t/* Value was in conninfo or service */\n \n@@ -6428,7 +6468,7 @@ conninfo_uri_parse(const char *uri, PQExpBuffer errorMessage,\n \t */\n \tif (use_defaults)\n \t{\n-\t\tif (!conninfo_add_defaults(options, errorMessage))\n+\t\tif (!conninfo_add_defaults(options, errorMessage, true))\n \t\t{\n \t\t\tPQconninfoFree(options);\n \t\t\treturn NULL;\n-- \n2.39.2 (Apple Git-143)\n\n\n\n",
"msg_date": "Sun, 16 Apr 2023 20:42:17 +0200",
"msg_from": "=?UTF-8?q?R=C3=A9mi=20Lapeyre?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add support for postgres:// URIs to PGDATABASE environment\n variable"
},
{
"msg_contents": "=?UTF-8?q?R=C3=A9mi=20Lapeyre?= <[email protected]> writes:\n> The PGDATABASE is documented as behaving the same as the dbname connection\n> parameter but they differ in the support for postgres:// URIs: the\n> PGDATABASE will never be expanded even thought expand_dbname is set:\n\nI think you have misunderstood the documentation. What you are\nproposing is equivalent to saying that this should work:\n\n$ psql -d \"dbname=postgres://localhost/test\"\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: database \"postgres://localhost/test\" does not exist\n\nThat doesn't work, never has, and I think it would be a serious\ncompatibility break and possibly a security hazard if it did.\nThe argument of \"dbname=\" should not be subject to another round\nof interpretation, and neither should the content of the PGDATABASE\nenvironment variable.\n\nYou can do this:\n\n$ psql -d \"postgres://localhost/test\"\n\nbut that's not the same thing as reinterpreting the dbname field\nof what we have already determined to be a connection string.\n\nPerhaps there is a case for inventing a new environment variable\nthat can do what you're suggesting. But you would have to make\na case that it's worth doing, and also define how it interacts\nwith all the other PGxxx environment variables. (The lack of\nclarity about how that should work is an important part of why\nI don't like the idea of letting dbname/PGDATABASE supply anything\nbut the database name.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Apr 2023 21:25:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for postgres:// URIs to PGDATABASE\n environment variable"
},
{
"msg_contents": "\r\n\r\n> Le 17 avr. 2023 à 03:25, Tom Lane <[email protected]> a écrit :\r\n> \r\n> You can do this:\r\n> \r\n> $ psql -d \"postgres://localhost/test\"\r\n> \r\n> but that's not the same thing as reinterpreting the dbname field\r\n> of what we have already determined to be a connection string.\r\n> \r\n\r\nYes, I know see the difference, I got confused by the way the code reuses the dbname keyword to pass the connection string and the fact that \r\n\r\n$ psql --dbname \"postgres://localhost/test\"\r\n\r\nworks, but the dbname parameter of psql is not the same as the dbname used by libpq.\r\n\r\n\r\n> Perhaps there is a case for inventing a new environment variable\r\n> that can do what you're suggesting. But you would have to make\r\n> a case that it's worth doing, and also define how it interacts\r\n> with all the other PGxxx environment variables.\r\n\r\nI think it could be convenient to have such an environment variable but given your feedback I will just make it specific to my application rather than a part of libpq.\r\n\r\nBest,\r\nRémi",
"msg_date": "Mon, 17 Apr 2023 09:07:05 +0000",
"msg_from": "=?utf-8?B?UsOpbWkgTGFwZXlyZQ==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add support for postgres:// URIs to PGDATABASE\n environment variable"
}
] |
[
{
"msg_contents": "I hit this assertion while pg_restoring data into a v16 instance.\npostgresql16-server-16-alpha_20230417_PGDG.rhel7.x86_64\n\nwal_level=minimal and pg_dump --single-transaction both seem to be\nrequired to hit the issue.\n\n$ /usr/pgsql-16/bin/postgres -D ./pg16test -c maintenance_work_mem=1GB -c max_wal_size=16GB -c wal_level=minimal -c max_wal_senders=0 -c port=5678 -c logging_collector=no &\n\n$ time sudo -u postgres /usr/pgsql-16/bin/pg_restore -d postgres -p 5678 --single-transaction --no-tablespace ./curtables\n\nTRAP: failed Assert(\"size > SizeOfXLogRecord\"), File: \"xlog.c\", Line: 1055, PID: 13564\n\nCore was generated by `postgres: postgres postgres [local] COMMIT '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f28b8bd5387 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install postgresql16-server-16-alpha_20230417_PGDG.rhel7.x86_64\n(gdb) bt\n#0 0x00007f28b8bd5387 in raise () from /lib64/libc.so.6\n#1 0x00007f28b8bd6a78 in abort () from /lib64/libc.so.6\n#2 0x00000000009bc8c9 in ExceptionalCondition (conditionName=conditionName@entry=0xa373e1 \"size > SizeOfXLogRecord\", fileName=fileName@entry=0xa31b13 \"xlog.c\", lineNumber=lineNumber@entry=1055) at assert.c:66\n#3 0x000000000057b049 in ReserveXLogInsertLocation (PrevPtr=0x2e3d750, EndPos=<synthetic pointer>, StartPos=<synthetic pointer>, size=24) at xlog.c:1055\n#4 XLogInsertRecord (rdata=rdata@entry=0xf187a0 <hdr_rdt>, fpw_lsn=fpw_lsn@entry=0, flags=<optimized out>, num_fpi=num_fpi@entry=0, topxid_included=topxid_included@entry=false) at xlog.c:844\n#5 0x000000000058210c in XLogInsert (rmid=rmid@entry=0 '\\000', info=info@entry=176 '\\260') at xloginsert.c:510\n#6 0x0000000000582b09 in log_newpage_range (rel=rel@entry=0x2e1f628, forknum=forknum@entry=FSM_FORKNUM, startblk=startblk@entry=0, endblk=endblk@entry=3, page_std=page_std@entry=false) at xloginsert.c:1317\n#7 0x00000000005d02f9 in smgrDoPendingSyncs (isCommit=isCommit@entry=true, isParallelWorker=isParallelWorker@entry=false) at storage.c:837\n#8 0x0000000000571637 in CommitTransaction () at xact.c:2225\n#9 0x0000000000572b25 in CommitTransactionCommand () at xact.c:3201\n#10 0x000000000086afc7 in finish_xact_command () at postgres.c:2782\n#11 0x000000000086d7e1 in exec_simple_query (query_string=0x2dec4f8 \"COMMIT\") at postgres.c:1307\n\n\n",
"msg_date": "Mon, 17 Apr 2023 09:53:30 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"), File:\n \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> I hit this assertion while pg_restoring data into a v16 instance.\n> postgresql16-server-16-alpha_20230417_PGDG.rhel7.x86_64\n\n> wal_level=minimal and pg_dump --single-transaction both seem to be\n> required to hit the issue.\n\nHmm. I wonder if log_newpages() is confused here:\n\n\tXLogEnsureRecordSpace(XLR_MAX_BLOCK_ID - 1, 0);\n\nWhy is XLR_MAX_BLOCK_ID - 1 enough, rather than XLR_MAX_BLOCK_ID?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 11:16:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"),\n File: \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "On Mon, 17 Apr 2023 at 17:53, Justin Pryzby <[email protected]> wrote:\n>\n> I hit this assertion while pg_restoring data into a v16 instance.\n> postgresql16-server-16-alpha_20230417_PGDG.rhel7.x86_64\n>\n> wal_level=minimal and pg_dump --single-transaction both seem to be\n> required to hit the issue.\n>\n> $ /usr/pgsql-16/bin/postgres -D ./pg16test -c maintenance_work_mem=1GB -c max_wal_size=16GB -c wal_level=minimal -c max_wal_senders=0 -c port=5678 -c logging_collector=no &\n>\n> $ time sudo -u postgres /usr/pgsql-16/bin/pg_restore -d postgres -p 5678 --single-transaction --no-tablespace ./curtables\n>\n> TRAP: failed Assert(\"size > SizeOfXLogRecord\"), File: \"xlog.c\", Line: 1055, PID: 13564\n>\n> Core was generated by `postgres: postgres postgres [local] COMMIT '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f28b8bd5387 in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install postgresql16-server-16-alpha_20230417_PGDG.rhel7.x86_64\n> (gdb) bt\n> #0 0x00007f28b8bd5387 in raise () from /lib64/libc.so.6\n> #1 0x00007f28b8bd6a78 in abort () from /lib64/libc.so.6\n> #2 0x00000000009bc8c9 in ExceptionalCondition (conditionName=conditionName@entry=0xa373e1 \"size > SizeOfXLogRecord\", fileName=fileName@entry=0xa31b13 \"xlog.c\", lineNumber=lineNumber@entry=1055) at assert.c:66\n> #3 0x000000000057b049 in ReserveXLogInsertLocation (PrevPtr=0x2e3d750, EndPos=<synthetic pointer>, StartPos=<synthetic pointer>, size=24) at xlog.c:1055\n> #4 XLogInsertRecord (rdata=rdata@entry=0xf187a0 <hdr_rdt>, fpw_lsn=fpw_lsn@entry=0, flags=<optimized out>, num_fpi=num_fpi@entry=0, topxid_included=topxid_included@entry=false) at xlog.c:844\n> #5 0x000000000058210c in XLogInsert (rmid=rmid@entry=0 '\\000', info=info@entry=176 '\\260') at xloginsert.c:510\n> #6 0x0000000000582b09 in log_newpage_range (rel=rel@entry=0x2e1f628, forknum=forknum@entry=FSM_FORKNUM, startblk=startblk@entry=0, endblk=endblk@entry=3, page_std=page_std@entry=false) at xloginsert.c:1317\n\n\nLooking at log_newpage_range, it seems like we're always trying to log\na record if startblk < endblk; but don't register the PageIsNew()\nbuffers in the range. That means that if the last buffers in the range\nare new, this can result in no buffers being registered in the last\niteration of the main loop (if the number of non-new buffers in the\nrange is 0 (mod 32)).\n\nA change like attached should fix the issue; or alternatively we could\nforce log the last (new) buffer when we detect this edge case.\n\nKind regards,\n\nMatthias van de Meent",
"msg_date": "Mon, 17 Apr 2023 18:50:40 +0300",
"msg_from": "Matthias van de Meent <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"), File:\n \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "I wrote:\n> Hmm. I wonder if log_newpages() is confused here:\n> \tXLogEnsureRecordSpace(XLR_MAX_BLOCK_ID - 1, 0);\n\nOh, no, it's simpler than that: log_newpage_range is trying to\nlog zero page images, and ReserveXLogInsertLocation doesn't\nlike that because every WAL record should contain some data.\nWill fix, thanks for report.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 11:53:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"),\n File: \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "Matthias van de Meent <[email protected]> writes:\n> Looking at log_newpage_range, it seems like we're always trying to log\n> a record if startblk < endblk; but don't register the PageIsNew()\n> buffers in the range. That means that if the last buffers in the range\n> are new, this can result in no buffers being registered in the last\n> iteration of the main loop (if the number of non-new buffers in the\n> range is 0 (mod 32)).\n\nYeah, I just came to the same conclusion. One thing I don't understand\nyet: log_newpage_range is old (it looks like this back to v12), and\nthat Assert is older, so why doesn't this reproduce further back?\nMaybe the state where all the pages are new didn't happen before?\nIs that telling us there's a bug somewhere else? Seems like a job\nfor git bisect.\n\nTo be clear: log_newpage_range is certainly broken, and your fix looks\nappropriate. I'm just wondering what else we need to learn here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 12:13:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"),\n File: \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "I wrote:\n> Yeah, I just came to the same conclusion. One thing I don't understand\n> yet: log_newpage_range is old (it looks like this back to v12), and\n> that Assert is older, so why doesn't this reproduce further back?\n> Maybe the state where all the pages are new didn't happen before?\n\nBingo: bisecting shows the failure started at\n\ncommit 3d6a98457d8e21d85bed86cfd3e1d1df1b260721\nAuthor: Andres Freund <[email protected]>\nDate: Wed Apr 5 08:19:39 2023 -0700\n\n Don't initialize page in {vm,fsm}_extend(), not needed\n\nSo previously, log_newpage_range could only have failed in very\nunlikely circumstances, whereas now it's not hard to hit when\ncommitting a table creation. I wonder what other bugs may be\nlurking.\n\nI'll patch it back to v12 anyway, since that function is\nclearly wrong in isolation.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 13:50:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"),\n File: \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-17 12:13:41 -0400, Tom Lane wrote:\n> Matthias van de Meent <[email protected]> writes:\n> > Looking at log_newpage_range, it seems like we're always trying to log\n> > a record if startblk < endblk; but don't register the PageIsNew()\n> > buffers in the range. That means that if the last buffers in the range\n> > are new, this can result in no buffers being registered in the last\n> > iteration of the main loop (if the number of non-new buffers in the\n> > range is 0 (mod 32)).\n> \n> Yeah, I just came to the same conclusion. One thing I don't understand\n> yet: log_newpage_range is old (it looks like this back to v12), and\n> that Assert is older, so why doesn't this reproduce further back?\n> Maybe the state where all the pages are new didn't happen before?\n> Is that telling us there's a bug somewhere else? Seems like a job\n> for git bisect.\n\nOne plausible explanation is that bulk relation extension has made it more\nlikely to encounter this scenario. We had some bulk extension code before, but\nit was triggered purely based on contention - quite unlikely in simple test\nscenarios - but now we also bulk extend if we know that we'll insert multiple\npages (when coming from heap_multi_insert(), with sufficient data).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Apr 2023 10:54:41 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"), File:\n \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-17 13:50:30 -0400, Tom Lane wrote:\n> I wrote:\n> > Yeah, I just came to the same conclusion. One thing I don't understand\n> > yet: log_newpage_range is old (it looks like this back to v12), and\n> > that Assert is older, so why doesn't this reproduce further back?\n> > Maybe the state where all the pages are new didn't happen before?\n> \n> Bingo: bisecting shows the failure started at\n> \n> commit 3d6a98457d8e21d85bed86cfd3e1d1df1b260721\n> Author: Andres Freund <[email protected]>\n> Date: Wed Apr 5 08:19:39 2023 -0700\n> \n> Don't initialize page in {vm,fsm}_extend(), not needed\n> \n> So previously, log_newpage_range could only have failed in very\n> unlikely circumstances, whereas now it's not hard to hit when\n> committing a table creation. I wonder what other bugs may be\n> lurking.\n\nOh, interesting. We haven't initialized the extra pages added by\nRelationAddExtraBlocks() (in <= 15) for quite a while now, so I'm a bit\nsurprised it causes more issues for the VM / FSM. I guess it's that it's quite\ncommon in real workloads to contend on the extension lock and add extra\nblocks, but not in simple single-threaded tests?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Apr 2023 11:00:09 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"), File:\n \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-04-17 13:50:30 -0400, Tom Lane wrote:\n>> So previously, log_newpage_range could only have failed in very\n>> unlikely circumstances, whereas now it's not hard to hit when\n>> committing a table creation. I wonder what other bugs may be\n>> lurking.\n\n> Oh, interesting. We haven't initialized the extra pages added by\n> RelationAddExtraBlocks() (in <= 15) for quite a while now, so I'm a bit\n> surprised it causes more issues for the VM / FSM. I guess it's that it's quite\n> common in real workloads to contend on the extension lock and add extra\n> blocks, but not in simple single-threaded tests?\n\nI haven't tried hard to run it to ground, but maybe log_newpage_range\nisn't used in that code path? Seems like we'd have detected this\nbefore now if the case were reachable without any crash involved.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 14:26:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"),\n File: \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 01:50:30PM -0400, Tom Lane wrote:\n> I wrote:\n> > Yeah, I just came to the same conclusion. One thing I don't understand\n> > yet: log_newpage_range is old (it looks like this back to v12), and\n> > that Assert is older, so why doesn't this reproduce further back?\n> > Maybe the state where all the pages are new didn't happen before?\n> \n> Bingo: bisecting shows the failure started at\n\nJust curious: what \"test\" did you use to bisect with ?\n\n\n",
"msg_date": "Mon, 17 Apr 2023 17:00:08 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"), File:\n \"xlog.c\", Line: 1055, PID: 13564"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Mon, Apr 17, 2023 at 01:50:30PM -0400, Tom Lane wrote:\n>> Bingo: bisecting shows the failure started at\n\n> Just curious: what \"test\" did you use to bisect with ?\n\nThe test case I used looked like\n\nstart postmaster with -c wal_level=minimal -c max_wal_senders=0\nmake installcheck-parallel\npsql -d regression -c \"do 'begin for i in 1..1000 loop execute ''create table lots''||i||'' as select * from onek''; end loop; end';\"\npg_dump -Fc -Z0 regression >~/regression.dump\ncreatedb r2\npg_restore -d r2 --single-transaction --no-tablespace ~/regression.dump\n\nDumping the regression database as-is didn't reproduce it for me,\nbut after I added a bunch more tables it did reproduce.\n\n(I added the -Z0 bit after some of the bisection test points hit the\ninterval where somebody had broken pg_dump's compression features.\nIt didn't seem relevant to the problem so I just disabled that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 18:09:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: v16dev: TRAP: failed Assert(\"size > SizeOfXLogRecord\"),\n File: \"xlog.c\", Line: 1055, PID: 13564"
}
] |
[
{
"msg_contents": "Further cleanup of autoconf output files for GSSAPI changes.\n\nRunning autoheader was missed in f7431bca8. This is cosmetic since\nwe aren't using these HAVE_ symbols, but let's get everything in\nsync while we're looking at this.\n\nDiscussion: https://postgr.es/m/[email protected]\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d48ac0070cff197125088959e5644ed051322f5e\n\nModified Files\n--------------\nsrc/include/pg_config.h.in | 6 ++++++\n1 file changed, 6 insertions(+)",
"msg_date": "Mon, 17 Apr 2023 15:22:01 +0000",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql: Further cleanup of autoconf output files for GSSAPI changes."
},
{
"msg_contents": "On 2023-04-17 Mo 11:22, Tom Lane wrote:\n> Further cleanup of autoconf output files for GSSAPI changes.\n>\n> Running autoheader was missed in f7431bca8. This is cosmetic since\n> we aren't using these HAVE_ symbols, but let's get everything in\n> sync while we're looking at this.\n>\n> Discussion:https://postgr.es/m/[email protected]\n>\n\nI think this also needs a fix in src/tools/msvc/Solution.pm, see \n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2023-04-17%2016%3A30%3A03>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-17 Mo 11:22, Tom Lane wrote:\n\n\nFurther cleanup of autoconf output files for GSSAPI changes.\n\nRunning autoheader was missed in f7431bca8. This is cosmetic since\nwe aren't using these HAVE_ symbols, but let's get everything in\nsync while we're looking at this.\n\nDiscussion: https://postgr.es/m/[email protected]\n\n\n\n\n\nI think this also needs a fix in src/tools/msvc/Solution.pm, see\n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2023-04-17%2016%3A30%3A03>\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 17 Apr 2023 14:57:27 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Further cleanup of autoconf output files for GSSAPI\n changes."
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> I think this also needs a fix in src/tools/msvc/Solution.pm, see \n> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2023-04-17%2016%3A30%3A03>\n\nArgh, forgot about that. Will fix.\n\n(This three-build-system business can't go away soon enough.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 15:53:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Further cleanup of autoconf output files for GSSAPI\n changes."
},
{
"msg_contents": "On 2023-04-17 Mo 15:53, Tom Lane wrote:\n> Andrew Dunstan<[email protected]> writes:\n>> I think this also needs a fix in src/tools/msvc/Solution.pm, see\n>> <https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2023-04-17%2016%3A30%3A03>\n> Argh, forgot about that. Will fix.\n>\n> (This three-build-system business can't go away soon enough.)\n>\n> \t\t\t\n\n\n From my POV we can remove it any time - I am still having Windows \nissues with meson, but only with MSYS2. The MSVC meson build on drongo \nis perfectly well behaved.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-17 Mo 15:53, Tom Lane wrote:\n\n\nAndrew Dunstan <[email protected]> writes:\n\n\nI think this also needs a fix in src/tools/msvc/Solution.pm, see \n<https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bowerbird&dt=2023-04-17%2016%3A30%3A03>\n\n\n\nArgh, forgot about that. Will fix.\n\n(This three-build-system business can't go away soon enough.)\n\n\t\t\t\n\n\n\nFrom my POV we can remove it any time - I am still having Windows\n issues with meson, but only with MSYS2. The MSVC meson build on\n drongo is perfectly well behaved.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 17 Apr 2023 16:22:30 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Further cleanup of autoconf output files for GSSAPI\n changes."
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 2023-04-17 Mo 15:53, Tom Lane wrote:\n>> (This three-build-system business can't go away soon enough.)\n\n> From my POV we can remove it any time - I am still having Windows \n> issues with meson, but only with MSYS2. The MSVC meson build on drongo \n> is perfectly well behaved.\n\nI think the outcome of the discussion a week or so ago was that\nwe want the MSVC scripts in v16, but we can nuke them from HEAD\nas soon as the branch is made.\n\nautoconf unfortunately will have to live a good bit longer ...\nI don't think we're anywhere near the point where the meson\nsystem is mature enough to drop that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 16:25:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Further cleanup of autoconf output files for GSSAPI\n changes."
},
{
"msg_contents": "On 2023-04-17 16:22:30 -0400, Andrew Dunstan wrote:\n> I am still having Windows issues with meson, but only with MSYS2.\n\nAny more details on that? I might be able to help out / improve things.\n\n\n",
"msg_date": "Thu, 20 Apr 2023 08:06:52 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Further cleanup of autoconf output files for GSSAPI\n changes."
},
{
"msg_contents": "On 2023-04-20 Th 11:06, Andres Freund wrote:\n> On 2023-04-17 16:22:30 -0400, Andrew Dunstan wrote:\n>> I am still having Windows issues with meson, but only with MSYS2.\n> Any more details on that? I might be able to help out / improve things.\n\n\nFor some reason which makes no sense to me the buildfarm animal fails at \nthe first Stop-Db step. The DB is actually stopped, but pg_ctl returns a \nnon-zero status. The thing that's really odd is that meson isn't at all \ninvolved in this step. But it's happened enough that I've had to back \noff using meson builds on fairywren - I'm going to do more testing on a \nnew Windows instance.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-20 Th 11:06, Andres Freund\n wrote:\n\n\nOn 2023-04-17 16:22:30 -0400, Andrew Dunstan wrote:\n\n\nI am still having Windows issues with meson, but only with MSYS2.\n\n\n\nAny more details on that? I might be able to help out / improve things.\n\n\n\nFor some reason which makes no sense to me the buildfarm animal\n fails at the first Stop-Db step. The DB is actually stopped, but\n pg_ctl returns a non-zero status. The thing that's really odd is\n that meson isn't at all involved in this step. But it's happened\n enough that I've had to back off using meson builds on fairywren -\n I'm going to do more testing on a new Windows instance.\n\n\ncheers\n\n\nandrew\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 20 Apr 2023 15:37:43 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Further cleanup of autoconf output files for GSSAPI\n changes."
},
{
"msg_contents": "[redirecting to -hackers]\n\n\nOn 2023-04-20 Th 15:37, Andrew Dunstan wrote:\n>\n>\n> On 2023-04-20 Th 11:06, Andres Freund wrote:\n>> On 2023-04-17 16:22:30 -0400, Andrew Dunstan wrote:\n>>> I am still having Windows issues with meson, but only with MSYS2.\n>> Any more details on that? I might be able to help out / improve things.\n>\n>\n> For some reason which makes no sense to me the buildfarm animal fails \n> at the first Stop-Db step. The DB is actually stopped, but pg_ctl \n> returns a non-zero status. The thing that's really odd is that meson \n> isn't at all involved in this step. But it's happened enough that I've \n> had to back off using meson builds on fairywren - I'm going to do more \n> testing on a new Windows instance.\n>\n\n\nStill running into this, and I am rather stumped. This is a blocker for \nbuildfarm support for meson:\n\nHere's a simple illustration of the problem. If I do the identical test \nwith a non-meson build there is no problem:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ export PGCTLTIMEOUT=300\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; \nsystem(\"bin/pg_ctl -D data-C -l logfile start\") ; print \"fail\\n\" if $?; '\nwaiting for server to start.... done\nserver started\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; \nsystem(\"bin/pg_ctl -D data-C -l logfile stop\") ; print \"fail\\n\" if $?; '\nwaiting for server to shut down....fail\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ tail root/HEAD/instkeep.2023-04-25_11-09-41/logfile\n2023-04-26 12:44:50.188 UTC [5132:2] LOG: listening on Unix socket \n\"C:/tools/msys64/tmp/buildfarm-jaWBkm/.s.PGSQL.5678\"\n2023-04-26 12:44:50.249 UTC [5388:1] LOG: database system was shut down \nat 2023-04-26 12:43:02 UTC\n2023-04-26 12:44:50.260 UTC [5132:3] LOG: database system is ready to \naccept connections\n2023-04-26 12:45:01.542 UTC [5132:4] LOG: received fast shutdown request\n2023-04-26 12:45:01.542 UTC [5132:5] LOG: aborting any active transactions\n2023-04-26 12:45:01.547 UTC [5132:6] LOG: background worker \"logical \nreplication launcher\" (PID 3876) exited with exit code 1\n2023-04-26 12:45:01.550 UTC [6032:1] LOG: shutting down\n2023-04-26 12:45:01.551 UTC [6032:2] LOG: checkpoint starting: shutdown \nimmediate\n2023-04-26 12:45:01.557 UTC [6032:3] LOG: checkpoint complete: wrote 2 \nbuffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.003 \ns, sync=0.001 s, total=0.007 s; sync files=0, longest=0.000 s, \naverage=0.000 s; distance=0 kB, estimate=0 kB; lsn=0/1034E7F8, redo \nlsn=0/1034E7F8\n2023-04-26 12:45:01.568 UTC [5132:7] LOG: database system is shut down\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\n[redirecting to -hackers]\n\n\n\nOn 2023-04-20 Th 15:37, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-04-20 Th 11:06, Andres Freund\n wrote:\n\n\nOn 2023-04-17 16:22:30 -0400, Andrew Dunstan wrote:\n\n\nI am still having Windows issues with meson, but only with MSYS2.\n\n\nAny more details on that? I might be able to help out / improve things.\n\n\n\nFor some reason which makes no sense to me the buildfarm animal\n fails at the first Stop-Db step. The DB is actually stopped, but\n pg_ctl returns a non-zero status. The thing that's really odd is\n that meson isn't at all involved in this step. 
But it's happened\n enough that I've had to back off using meson builds on fairywren\n - I'm going to do more testing on a new Windows instance.\n\n\n\n\n\nStill running into this, and I am rather stumped. This is a\n blocker for buildfarm support for meson:\nHere's a simple illustration of the problem. If I do the\n identical test with a non-meson build there is no problem:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ export PGCTLTIMEOUT=300\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ /usr/bin/perl -e 'chdir\n \"root/HEAD/instkeep.2023-04-25_11-09-41\"; system(\"bin/pg_ctl -D\n data-C -l logfile start\") ; print \"fail\\n\" if $?; '\n waiting for server to start.... done\n server started\n\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ /usr/bin/perl -e 'chdir\n \"root/HEAD/instkeep.2023-04-25_11-09-41\"; system(\"bin/pg_ctl -D\n data-C -l logfile stop\") ; print \"fail\\n\" if $?; '\n waiting for server to shut down....fail\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ tail root/HEAD/instkeep.2023-04-25_11-09-41/logfile\n 2023-04-26 12:44:50.188 UTC [5132:2] LOG: listening on Unix\n socket \"C:/tools/msys64/tmp/buildfarm-jaWBkm/.s.PGSQL.5678\"\n 2023-04-26 12:44:50.249 UTC [5388:1] LOG: database system was\n shut down at 2023-04-26 12:43:02 UTC\n 2023-04-26 12:44:50.260 UTC [5132:3] LOG: database system is\n ready to accept connections\n 2023-04-26 12:45:01.542 UTC [5132:4] LOG: received fast shutdown\n request\n 2023-04-26 12:45:01.542 UTC [5132:5] LOG: aborting any active\n transactions\n 2023-04-26 12:45:01.547 UTC [5132:6] LOG: background worker\n \"logical replication launcher\" (PID 3876) exited with exit code 1\n 2023-04-26 12:45:01.550 UTC [6032:1] LOG: shutting down\n 2023-04-26 12:45:01.551 UTC [6032:2] LOG: checkpoint starting:\n shutdown immediate\n 2023-04-26 12:45:01.557 UTC [6032:3] LOG: checkpoint complete:\n wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0\n recycled; write=0.003 s, sync=0.001 s, total=0.007 s; sync\n files=0, longest=0.000 s, average=0.000 s; distance=0 kB,\n estimate=0 kB; lsn=0/1034E7F8, redo lsn=0/1034E7F8\n 2023-04-26 12:45:01.568 UTC [5132:7] LOG: database system is shut\n down\n\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 26 Apr 2023 09:59:05 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "issue with meson builds on msys2"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n>> For some reason which makes no sense to me the buildfarm animal fails \n>> at the first Stop-Db step. The DB is actually stopped, but pg_ctl \n>> returns a non-zero status. The thing that's really odd is that meson \n>> isn't at all involved in this step. But it's happened enough that I've \n>> had to back off using meson builds on fairywren - I'm going to do more \n>> testing on a new Windows instance.\n\n> Here's a simple illustration of the problem. If I do the identical test \n> with a non-meson build there is no problem:\n\n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n> $ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; \n> system(\"bin/pg_ctl -D data-C -l logfile stop\") ; print \"fail\\n\" if $?; '\n> waiting for server to shut down....fail\n\nLooking at the pg_ctl source code, the only way I can explain that\nprintout is that do_stop called wait_for_postmaster_stop which,\nafter one or more loops, exited via one of its exit() calls.\nThe lack of any message can be explained if we imagine that\nwrite_stderr() output is going to the bit bucket. I'd start by\nchanging those write_stderr's to print_msg(), which visibly\ndoes work; that should confirm the existence of the stderr\nissue and show you how wait_for_postmaster_stop is failing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Apr 2023 10:32:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "I wrote:\n> Looking at the pg_ctl source code, the only way I can explain that\n> printout is that do_stop called wait_for_postmaster_stop which,\n> after one or more loops, exited via one of its exit() calls.\n\nAh, a little too hasty there: it's get_pgpid() that has to be\nreaching an exit().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Apr 2023 10:58:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-04-26 We 10:58, Tom Lane wrote:\n> I wrote:\n>> Looking at the pg_ctl source code, the only way I can explain that\n>> printout is that do_stop called wait_for_postmaster_stop which,\n>> after one or more loops, exited via one of its exit() calls.\n> Ah, a little too hasty there: it's get_pgpid() that has to be\n> reaching an exit().\n\n\n\nIf I redirect the output to a file (which is what the buildfarm client \nactually does), it seems like it completes successfully, but I still get \na non-zero exit:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; \nsystem(\"bin/pg_ctl -D data-C -l logfile stop > stoplog 2>&1\") ; print \n\"BANG\\n\" if $?; '\nBANG\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog\nwaiting for server to shut down.... done\nserver stopped\n\n\nIt seems more than odd that we get to where the \"server stopped\" massage \nis printed but we get a failure.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-26 We 10:58, Tom Lane wrote:\n\n\nI wrote:\n\n\nLooking at the pg_ctl source code, the only way I can explain that\nprintout is that do_stop called wait_for_postmaster_stop which,\nafter one or more loops, exited via one of its exit() calls.\n\n\n\nAh, a little too hasty there: it's get_pgpid() that has to be\nreaching an exit().\n\n\n\n\n\nIf I redirect the output to a file (which is what the buildfarm\n client actually does), it seems like it completes successfully,\n but I still get a non-zero exit:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ /usr/bin/perl -e 'chdir\n \"root/HEAD/instkeep.2023-04-25_11-09-41\"; system(\"bin/pg_ctl -D\n data-C -l logfile stop > stoplog 2>&1\") ; print \"BANG\\n\"\n if $?; '\n BANG\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog\n waiting for server to shut down.... done\n server stopped\n\n\nIt seems more than odd that we get to where the \"server stopped\"\n massage is printed but we get a failure.\n\n\ncheers\n\n\nandrew\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 26 Apr 2023 11:11:33 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> If I redirect the output to a file (which is what the buildfarm client \n> actually does), it seems like it completes successfully, but I still get \n> a non-zero exit:\n\n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n> $ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; \n> system(\"bin/pg_ctl -D data-C -l logfile stop > stoplog 2>&1\") ; print \n> \"BANG\\n\" if $?; '\n> BANG\n\n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n> $ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog\n> waiting for server to shut down.... done\n> server stopped\n\nThats ... just wacko. do_stop() emits \"waiting for server to shut\ndown...\", \"done\", and \"server stopped\" in the same way (via print_msg).\nHow is it that all three messages show up in one context but not the\nother? Could wait_for_postmaster_stop or get_pgpid be bollixing the\nstdout channel somehow? Try redirecting stdout and stderr separately\nto see if that proves anything.\n\n> It seems more than odd that we get to where the \"server stopped\" massage \n> is printed but we get a failure.\n\nIndeed, that's even weirder. do_stop() returns directly to the\nexit(0) in main().\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Apr 2023 11:30:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-04-26 We 11:30, Tom Lane wrote:\n> Andrew Dunstan<[email protected]> writes:\n>> If I redirect the output to a file (which is what the buildfarm client\n>> actually does), it seems like it completes successfully, but I still get\n>> a non-zero exit:\n>> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n>> $ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\";\n>> system(\"bin/pg_ctl -D data-C -l logfile stop > stoplog 2>&1\") ; print\n>> \"BANG\\n\" if $?; '\n>> BANG\n>> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n>> $ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog\n>> waiting for server to shut down.... done\n>> server stopped\n> Thats ... just wacko. do_stop() emits \"waiting for server to shut\n> down...\", \"done\", and \"server stopped\" in the same way (via print_msg).\n> How is it that all three messages show up in one context but not the\n> other? Could wait_for_postmaster_stop or get_pgpid be bollixing the\n> stdout channel somehow? Try redirecting stdout and stderr separately\n> to see if that proves anything.\n\n\nDoesn't seem to prove much:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; \nsystem(\"bin/pg_ctl -D data-C -l logfile stop > stoplog.out 2> \nstoplog.err\") ; print \"BANG\\n\" if $?; '\nBANG\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog.out\nwaiting for server to shut down.... done\nserver stopped\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog.err\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n\n\n>\n>> It seems more than odd that we get to where the \"server stopped\" massage\n>> is printed but we get a failure.\n> Indeed, that's even weirder. do_stop() returns directly to the\n> exit(0) in main().\n>\n> \t\t\t\n\n\nAnd if I call it via IPC::Run it works:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; use \nIPC::Run; my ($out, $err) = (\"\",\"\"); IPC::Run::run [\"bin/pg_ctl\", \"-D\", \n\"data-C\", \"stop\"], \\\"\",\\$out,\\$err ; print \"BANG\\n\" if $?; print \"out: \n$out\\n\" if $out; print \"err: $err\\n\" if $err;'\nout: waiting for server to shut down.... done\nserver stopped\n\n\nIt seems there is something odd in how msys perl (not ucrt perl) \nimplements system() that is tickled by this, but why that should only \noccur when it's built using meson is completely beyond me. It should be \njust another executable. And pg_ctl is behaving properly as far as we \ncan see. I'm not quite sure where to go from here. I guess I can try to \nsee if we have IPC::Run and if so use it. That would probably get me \nover the hurdle for fairywren. This has already consumed far too much of \nmy time.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-26 We 11:30, Tom Lane wrote:\n\n\nAndrew Dunstan <[email protected]> writes:\n\n\nIf I redirect the output to a file (which is what the buildfarm client \nactually does), it seems like it completes successfully, but I still get \na non-zero exit:\n\n\n\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\"; \nsystem(\"bin/pg_ctl -D data-C -l logfile stop > stoplog 2>&1\") ; print \n\"BANG\\n\" if $?; '\nBANG\n\n\n\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n$ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog\nwaiting for server to shut down.... 
done\nserver stopped\n\n\n\nThats ... just wacko. do_stop() emits \"waiting for server to shut\ndown...\", \"done\", and \"server stopped\" in the same way (via print_msg).\nHow is it that all three messages show up in one context but not the\nother? Could wait_for_postmaster_stop or get_pgpid be bollixing the\nstdout channel somehow? Try redirecting stdout and stderr separately\nto see if that proves anything.\n\n\n\nDoesn't seem to prove much:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ /usr/bin/perl -e 'chdir\n \"root/HEAD/instkeep.2023-04-25_11-09-41\"; system(\"bin/pg_ctl -D\n data-C -l logfile stop > stoplog.out 2> stoplog.err\") ;\n print \"BANG\\n\" if $?; '\n BANG\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog.out\n waiting for server to shut down.... done\n server stopped\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ cat root/HEAD/instkeep.2023-04-25_11-09-41/stoplog.err\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n\n\n\n\n\n\n\n\nIt seems more than odd that we get to where the \"server stopped\" massage \nis printed but we get a failure.\n\n\n\nIndeed, that's even weirder. do_stop() returns directly to the\nexit(0) in main().\n\n\t\t\t\n\n\n\nAnd if I call it via IPC::Run it works:\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n $ /usr/bin/perl -e 'chdir\n \"root/HEAD/instkeep.2023-04-25_11-09-41\"; use IPC::Run; my ($out,\n $err) = (\"\",\"\"); IPC::Run::run [\"bin/pg_ctl\", \"-D\", \"data-C\",\n \"stop\"], \\\"\",\\$out,\\$err ; print \"BANG\\n\" if $?; print \"out:\n $out\\n\" if $out; print \"err: $err\\n\" if $err;'\n out: waiting for server to shut down.... done\n server stopped\n\n\n\nIt seems there is something odd in how msys perl (not ucrt perl)\n implements system() that is tickled by this, but why that should\n only occur when it's built using meson is completely beyond me. It\n should be just another executable. And pg_ctl is behaving properly\n as far as we can see. I'm not quite sure where to go from here. I\n guess I can try to see if we have IPC::Run and if so use it. That\n would probably get me over the hurdle for fairywren. This has\n already consumed far too much of my time.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 26 Apr 2023 15:10:09 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
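A minimal sketch of the fallback Andrew describes in the message above (prefer IPC::Run when it is installed, otherwise fall back to system()). The subroutine name, data directory, and log file names are illustrative assumptions, not the actual buildfarm client code.

  # Hedged sketch only: run_pg_ctl(), the data directory and the log file
  # names are made up for illustration; the real buildfarm client may
  # structure this differently.
  my $have_ipc_run = eval { require IPC::Run; 1 };

  sub run_pg_ctl
  {
      my @cmd = ('bin/pg_ctl', '-D', 'data-C', '-l', 'logfile', @_);
      if ($have_ipc_run)
      {
          my ($out, $err) = ('', '');
          # no shell involved, which is what made the manual IPC::Run test above succeed
          IPC::Run::run(\@cmd, \'', \$out, \$err);
          return ($? >> 8, $out, $err);
      }
      # fallback: the shell-mediated path that misbehaves under msys perl
      my $rc = system(join(' ', @cmd) . ' > cmdlog 2>&1');
      return ($rc >> 8, '', '');
  }

Called as run_pg_ctl('start') or run_pg_ctl('stop'), both paths report the exit status the same way, so callers need not care which mechanism was used.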
{
"msg_contents": "Hi,\n\nOn 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n> Still running into this, and I am rather stumped. This is a blocker for\n> buildfarm support for meson:\n> \n> Here's a simple illustration of the problem. If I do the identical test with\n> a non-meson build there is no problem:\n\nThis happens 100% reproducible?\n\n\n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n> $ export PGCTLTIMEOUT=300\n> \n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n> $ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\";\n> system(\"bin/pg_ctl -D data-C -l logfile start\") ; print \"fail\\n\" if $?; '\n> waiting for server to start.... done\n> server started\n\nDoes it happen as well if you use ucrt perl? Not because I think we should\nrequire it, just to narrow the space.\n\nAny chance that doing export MSYS=winjitdebug changes something? There's quite\na bit of similarity with the python issue you've also encountered - python\nwould just exit with the a failure indicating exit code.\n\n\n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf\n> $ /usr/bin/perl -e 'chdir \"root/HEAD/instkeep.2023-04-25_11-09-41\";\n> system(\"bin/pg_ctl -D data-C -l logfile stop\") ; print \"fail\\n\" if $?; '\n> waiting for server to shut down....fail\n\nHm. I don't remember the details, but in the python case I was able to get\nsome additional error code somehow, which then indicated that the\nchild-process failed with the NT status code indicating the equivalent of a\nsegfault.\n\nI guess system() in msys perl will invoke bash as a shell to execute the\nproblem. Perhaps the failing program isn't actually pg_ctl, but the shell? If\nit is indeed bash, what does the shell report as the exit code of pg_ctl?\nE.g. doing something like\n system('bin/pg_ctl -D data-C -l logfile stop; echo $?');\n\n\nCould you do ldd (with mingw's ldd, which understands PE binaries) of meson\nand autoconf built pg_ctl on your machine? I wonder if we end up with a\ndifferent windows runtime or such. In the python case I had some\ncircumstantial evidence that the problem was dependent on the windows runtime\nversion.\n\nDownthread you mention that the issue doesn't happen with IPC::Run - the\nbiggest difference I can see is that IPC::Run would IIRC not use a shell? Does\nthe problem \"re-appear\" if you make IPC::Run use a shell?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 27 Apr 2023 15:18:28 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
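One way to try the last experiment Andres asks about (whether the failure reappears once IPC::Run is made to go through a shell). This is a guess at what such a test could look like, reusing the paths from the earlier messages rather than anything verified:

  # Illustrative only: same pg_ctl invocation as the failing system() case,
  # but launched through an explicit shell from inside IPC::Run, so the only
  # remaining difference is IPC::Run's process and handle management.
  use IPC::Run;

  my ($out, $err) = ('', '');
  IPC::Run::run(['sh', '-c', 'bin/pg_ctl -D data-C -l logfile stop > stoplog 2>&1'],
                \'', \$out, \$err);
  print $? ? "BANG: $?\n" : "OK\n";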
{
"msg_contents": "On 2023-04-27 Th 18:18, Andres Freund wrote:\n> Hi,\n>\n> On 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n>> Still running into this, and I am rather stumped. This is a blocker for\n>> buildfarm support for meson:\n>>\n>> Here's a simple illustration of the problem. If I do the identical test with\n>> a non-meson build there is no problem:\n> This happens 100% reproducible?\n\n\nFor a sufficiently modern installation of msys2 (20230318 version) this \nis reproducible on autoconf builds as well.\n\nFor now it's off my list of meson blockers. I will pursue the issue when \nI have time, but for now the IPC::Run workaround is sufficient.\n\nThe main thing that's now an issue on Windows is support for various \noptions like libxml2. I installed the libxml2 distro from the package \nmanager scoop, generated .lib files for the libxml2 and libxslt DLLs, \nand was able to build with autoconf on msys2, and with our MSVC support, \nbut not with meson in either case. It looks like we need to expand the \nlogic in meson.build for a number of these, just as we have done for \nperl, python, openssl, ldap etc.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-27 Th 18:18, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n\n\nStill running into this, and I am rather stumped. This is a blocker for\nbuildfarm support for meson:\n\nHere's a simple illustration of the problem. If I do the identical test with\na non-meson build there is no problem:\n\n\n\nThis happens 100% reproducible?\n\n\n\nFor a sufficiently modern installation of msys2 (20230318\n version) this is reproducible on autoconf builds as well.\nFor now it's off my list of meson blockers. I will pursue the\n issue when I have time, but for now the IPC::Run workaround is\n sufficient.\nThe main thing that's now an issue on Windows is support for\n various options like libxml2. I installed the libxml2 distro from\n the package manager scoop, generated .lib files for the libxml2\n and libxslt DLLs, and was able to build with autoconf on msys2,\n and with our MSVC support, but not with meson in either case. It\n looks like we need to expand the logic in meson.build for a number\n of these, just as we have done for perl, python, openssl, ldap\n etc.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 3 May 2023 09:20:28 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-03 09:20:28 -0400, Andrew Dunstan wrote:\n> On 2023-04-27 Th 18:18, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n> > > Still running into this, and I am rather stumped. This is a blocker for\n> > > buildfarm support for meson:\n> > > \n> > > Here's a simple illustration of the problem. If I do the identical test with\n> > > a non-meson build there is no problem:\n> > This happens 100% reproducible?\n\n> For a sufficiently modern installation of msys2 (20230318 version) this is\n> reproducible on autoconf builds as well.\n\nOh. Seems like something we need to dig into independent of meson then :(\n\n\n> The main thing that's now an issue on Windows is support for various options\n> like libxml2. I installed the libxml2 distro from the package manager scoop,\n> generated .lib files for the libxml2 and libxslt DLLs, and was able to build\n> with autoconf on msys2, and with our MSVC support, but not with meson in\n> either case. It looks like we need to expand the logic in meson.build for a\n> number of these, just as we have done for perl, python, openssl, ldap etc.\n\nI seriously doubt that trying to support every possible packaging thing on\nwindows is a good idea. What's the point of building against libraries from a\npackaging solution that doesn't even come with .lib files? Windows already is\na massive pain to support for postgres, making it even more complicated / less\npredictable is a really bad idea.\n\nIMO, for windows, the path we should go down is to provide one documented way\nto build the dependencies (e.g. using vcpkg or conan, perhaps also supporting\nmsys distributed libs), and define using something else to be unsupported (in\nthe \"we don't help you\", not in the \"we explicitly try to break things\"\nsense). And it should be something that understands needing to build debug\nand non-debug libraries.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 May 2023 11:26:15 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-05-03 We 09:20, Andrew Dunstan wrote:\n>\n>\n> On 2023-04-27 Th 18:18, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n>>> Still running into this, and I am rather stumped. This is a blocker for\n>>> buildfarm support for meson:\n>>>\n>>> Here's a simple illustration of the problem. If I do the identical test with\n>>> a non-meson build there is no problem:\n>> This happens 100% reproducible?\n>\n>\n> For a sufficiently modern installation of msys2 (20230318 version) \n> this is reproducible on autoconf builds as well.\n>\n> For now it's off my list of meson blockers. I will pursue the issue \n> when I have time, but for now the IPC::Run workaround is sufficient.\n>\n> The main thing that's now an issue on Windows is support for various \n> options like libxml2. I installed the libxml2 distro from the package \n> manager scoop, generated .lib files for the libxml2 and libxslt DLLs, \n> and was able to build with autoconf on msys2, and with our MSVC \n> support, but not with meson in either case. It looks like we need to \n> expand the logic in meson.build for a number of these, just as we have \n> done for perl, python, openssl, ldap etc.\n>\n>\n>\n\nI've actually made some progress on this front. I grabbed and built \nhttps://github.com/pkgconf/pkgconf.git (with meson :-) )\n\nAfter that I set PKG_CONFIG_PATH to point to where the libxml .pc files \nare installed, and lo and behold the meson/msvc build worked with libxml \n/ libxslt. I did have to move libxml's openssl.pc file aside, as the \ndistro's version of openssl is extremely old, and we don't want to use \nit (I'm using 3.1.0).\n\nOf course, this imposes an extra build dependency for Windows, but it's \nnot too onerous.\n\nIt also means that if anyone wants to use some dependency without a .pc \nfile they would need to create one. I'll keep trying to expand the list \nof things I configure with.\n\nNext targets will include ldap, lz4 and zstd.\n\nI also need to test this with msys2, so fat I have only tested with MSVC.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-03 We 09:20, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-04-27 Th 18:18, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n\n\nStill running into this, and I am rather stumped. This is a blocker for\nbuildfarm support for meson:\n\nHere's a simple illustration of the problem. If I do the identical test with\na non-meson build there is no problem:\n\n\nThis happens 100% reproducible?\n\n\n\nFor a sufficiently modern installation of msys2 (20230318\n version) this is reproducible on autoconf builds as well.\nFor now it's off my list of meson blockers. I will pursue the\n issue when I have time, but for now the IPC::Run workaround is\n sufficient.\nThe main thing that's now an issue on Windows is support for\n various options like libxml2. I installed the libxml2 distro\n from the package manager scoop, generated .lib files for the\n libxml2 and libxslt DLLs, and was able to build with autoconf on\n msys2, and with our MSVC support, but not with meson in either\n case. It looks like we need to expand the logic in meson.build\n for a number of these, just as we have done for perl, python,\n openssl, ldap etc.\n\n\n\n\n\n\nI've actually made some progress on this front. 
I grabbed and\n built https://github.com/pkgconf/pkgconf.git (with meson :-) )\nAfter that I set PKG_CONFIG_PATH to point to where the libxml .pc\n files are installed, and lo and behold the meson/msvc build worked\n with libxml / libxslt. I did have to move libxml's openssl.pc file\n aside, as the distro's version of openssl is extremely old, and we\n don't want to use it (I'm using 3.1.0).\nOf course, this imposes an extra build dependency for Windows,\n but it's not too onerous.\n\nIt also means that if anyone wants to use some dependency without\n a .pc file they would need to create one. I'll keep trying to\n expand the list of things I configure with. \n\nNext targets will include ldap, lz4 and zstd.\nI also need to test this with msys2, so fat I have only tested\n with MSVC.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 3 May 2023 15:55:28 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
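A rough sketch of the setup Andrew describes above (pointing pkgconf at the libxml2/libxslt .pc files before running meson). The scoop install path is a placeholder, and -Dlibxml/-Dlibxslt are assumed to be the relevant feature options in PostgreSQL's meson_options.txt; treat both as unverified.

  # Assumptions flagged: the PKG_CONFIG_PATH value is invented, and the
  # -Dlibxml / -Dlibxslt option names are assumed rather than quoted.
  use strict;
  use warnings;

  $ENV{PKG_CONFIG_PATH} = 'C:/scoop/apps/libxml2/current/lib/pkgconfig';
  system('meson', 'setup', 'build',
         '-Dlibxml=enabled', '-Dlibxslt=enabled') == 0
      or die "meson setup failed: $?";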
{
"msg_contents": "On 2023-05-03 We 14:26, Andres Freund wrote:\n> Hi,\n>\n> On 2023-05-03 09:20:28 -0400, Andrew Dunstan wrote:\n>> On 2023-04-27 Th 18:18, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n>>>> Still running into this, and I am rather stumped. This is a blocker for\n>>>> buildfarm support for meson:\n>>>>\n>>>> Here's a simple illustration of the problem. If I do the identical test with\n>>>> a non-meson build there is no problem:\n>>> This happens 100% reproducible?\n>> For a sufficiently modern installation of msys2 (20230318 version) this is\n>> reproducible on autoconf builds as well.\n> Oh. Seems like something we need to dig into independent of meson then :(\n>\n>\n>> The main thing that's now an issue on Windows is support for various options\n>> like libxml2. I installed the libxml2 distro from the package manager scoop,\n>> generated .lib files for the libxml2 and libxslt DLLs, and was able to build\n>> with autoconf on msys2, and with our MSVC support, but not with meson in\n>> either case. It looks like we need to expand the logic in meson.build for a\n>> number of these, just as we have done for perl, python, openssl, ldap etc.\n> I seriously doubt that trying to support every possible packaging thing on\n> windows is a good idea. What's the point of building against libraries from a\n> packaging solution that doesn't even come with .lib files? Windows already is\n> a massive pain to support for postgres, making it even more complicated / less\n> predictable is a really bad idea.\n>\n> IMO, for windows, the path we should go down is to provide one documented way\n> to build the dependencies (e.g. using vcpkg or conan, perhaps also supporting\n> msys distributed libs), and define using something else to be unsupported (in\n> the \"we don't help you\", not in the \"we explicitly try to break things\"\n> sense). And it should be something that understands needing to build debug\n> and non-debug libraries.\n>\n\nI'm not familiar with conan. I have struggled considerably with vcpkg in \nthe past.\n\nI don't think there is any one perfect answer.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-03 We 14:26, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-05-03 09:20:28 -0400, Andrew Dunstan wrote:\n\n\nOn 2023-04-27 Th 18:18, Andres Freund wrote:\n\n\nHi,\n\nOn 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n\n\nStill running into this, and I am rather stumped. This is a blocker for\nbuildfarm support for meson:\n\nHere's a simple illustration of the problem. If I do the identical test with\na non-meson build there is no problem:\n\n\nThis happens 100% reproducible?\n\n\n\n\n\n\nFor a sufficiently modern installation of msys2 (20230318 version) this is\nreproducible on autoconf builds as well.\n\n\n\nOh. Seems like something we need to dig into independent of meson then :(\n\n\n\n\nThe main thing that's now an issue on Windows is support for various options\nlike libxml2. I installed the libxml2 distro from the package manager scoop,\ngenerated .lib files for the libxml2 and libxslt DLLs, and was able to build\nwith autoconf on msys2, and with our MSVC support, but not with meson in\neither case. It looks like we need to expand the logic in meson.build for a\nnumber of these, just as we have done for perl, python, openssl, ldap etc.\n\n\n\nI seriously doubt that trying to support every possible packaging thing on\nwindows is a good idea. 
What's the point of building against libraries from a\npackaging solution that doesn't even come with .lib files? Windows already is\na massive pain to support for postgres, making it even more complicated / less\npredictable is a really bad idea.\n\nIMO, for windows, the path we should go down is to provide one documented way\nto build the dependencies (e.g. using vcpkg or conan, perhaps also supporting\nmsys distributed libs), and define using something else to be unsupported (in\nthe \"we don't help you\", not in the \"we explicitly try to break things\"\nsense). And it should be something that understands needing to build debug\nand non-debug libraries.\n\n\n\n\n\nI'm not familiar with conan. I have struggled considerably with\n vcpkg in the past.\n\nI don't think there is any one perfect answer.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 3 May 2023 18:39:55 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-03 09:20:28 -0400, Andrew Dunstan wrote:\n> On 2023-04-27 Th 18:18, Andres Freund wrote:\n> > On 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n> > > Still running into this, and I am rather stumped. This is a blocker for\n> > > buildfarm support for meson:\n> > >\n> > > Here's a simple illustration of the problem. If I do the identical test with\n> > > a non-meson build there is no problem:\n> > This happens 100% reproducible?\n>\n> For a sufficiently modern installation of msys2 (20230318 version) this is\n> reproducible on autoconf builds as well.\n>\n> For now it's off my list of meson blockers. I will pursue the issue when I\n> have time, but for now the IPC::Run workaround is sufficient.\n\nHm. I can't reproduce this in my test win10 VM, unfortunately. What OS / OS\nversion is the host? Any chance to get systeminfo.exe output or something like\nthat?\n\nI think we ought to do something here. If newer environments cause failures\nlike this, it seems likely that this will spread to more and more applications\nover time...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 May 2023 16:54:13 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-05-04 Th 19:54, Andres Freund wrote:\n> Hi,\n>\n> On 2023-05-03 09:20:28 -0400, Andrew Dunstan wrote:\n>> On 2023-04-27 Th 18:18, Andres Freund wrote:\n>>> On 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n>>>> Still running into this, and I am rather stumped. This is a blocker for\n>>>> buildfarm support for meson:\n>>>>\n>>>> Here's a simple illustration of the problem. If I do the identical test with\n>>>> a non-meson build there is no problem:\n>>> This happens 100% reproducible?\n>> For a sufficiently modern installation of msys2 (20230318 version) this is\n>> reproducible on autoconf builds as well.\n>>\n>> For now it's off my list of meson blockers. I will pursue the issue when I\n>> have time, but for now the IPC::Run workaround is sufficient.\n> Hm. I can't reproduce this in my test win10 VM, unfortunately. What OS / OS\n> version is the host? Any chance to get systeminfo.exe output or something like\n> that?\n\n\nIts a Windows Server 2019 (v 1809) instance running on AWS.\n\n\nHere's an extract from systeminfo:\n\n\nOS Name: Microsoft Windows Server 2019 Datacenter\nOS Version: 10.0.17763 N/A Build 17763\nOS Manufacturer: Microsoft Corporation\nOS Configuration: Standalone Server\nOS Build Type: Multiprocessor Free\nRegistered Owner: EC2\nRegistered Organization: Amazon.com\nProduct ID: 00430-00000-00000-AA796\nOriginal Install Date: 4/24/2023, 10:28:31 AM\nSystem Boot Time: 4/24/2023, 1:49:59 PM\nSystem Manufacturer: Amazon EC2\nSystem Model: t3.large\nSystem Type: x64-based PC\nProcessor(s): 1 Processor(s) Installed.\n [01]: Intel64 Family 6 Model 85 Stepping 7 \nGenuineIntel ~2500 Mhz\nBIOS Version: Amazon EC2 1.0, 10/16/2017\nWindows Directory: C:\\Windows\nSystem Directory: C:\\Windows\\system32\nBoot Device: \\Device\\HarddiskVolume1\nSystem Locale: en-us;English (United States)\nInput Locale: en-us;English (United States)\nTime Zone: (UTC) Coordinated Universal Time\nTotal Physical Memory: 8,090 MB\nAvailable Physical Memory: 4,843 MB\nVirtual Memory: Max Size: 10,010 MB\nVirtual Memory: Available: 7,405 MB\nVirtual Memory: In Use: 2,605 MB\n\n\n>\n> I think we ought to do something here. If newer environments cause failures\n> like this, it seems likely that this will spread to more and more applications\n> over time...\n>\n\nJust to reassure myself I have not been hallucinating, I repeated the test.\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf/root/HEAD/inst\n$ /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start \n > startlog 2>&1}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\nOK\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf/root/HEAD/inst\n$ /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop \n > stoplog 2>&1}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\nBANG: 33280\n\n\nIf you want to play I can arrange access.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-04 Th 19:54, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-05-03 09:20:28 -0400, Andrew Dunstan wrote:\n\n\nOn 2023-04-27 Th 18:18, Andres Freund wrote:\n\n\nOn 2023-04-26 09:59:05 -0400, Andrew Dunstan wrote:\n\n\nStill running into this, and I am rather stumped. This is a blocker for\nbuildfarm support for meson:\n\nHere's a simple illustration of the problem. 
If I do the identical test with\na non-meson build there is no problem:\n\n\nThis happens 100% reproducible?\n\n\n\nFor a sufficiently modern installation of msys2 (20230318 version) this is\nreproducible on autoconf builds as well.\n\nFor now it's off my list of meson blockers. I will pursue the issue when I\nhave time, but for now the IPC::Run workaround is sufficient.\n\n\n\nHm. I can't reproduce this in my test win10 VM, unfortunately. What OS / OS\nversion is the host? Any chance to get systeminfo.exe output or something like\nthat?\n\n\n\nIts a Windows Server 2019 (v 1809) instance running on AWS. \n\n\n\nHere's an extract from systeminfo:\n\n\nOS Name: Microsoft Windows Server 2019\n Datacenter\n OS Version: 10.0.17763 N/A Build 17763\n OS Manufacturer: Microsoft Corporation\n OS Configuration: Standalone Server\n OS Build Type: Multiprocessor Free\n Registered Owner: EC2\n Registered Organization: Amazon.com\n Product ID: 00430-00000-00000-AA796\n Original Install Date: 4/24/2023, 10:28:31 AM\n System Boot Time: 4/24/2023, 1:49:59 PM\n System Manufacturer: Amazon EC2\n System Model: t3.large\n System Type: x64-based PC\n Processor(s): 1 Processor(s) Installed.\n [01]: Intel64 Family 6 Model 85\n Stepping 7 GenuineIntel ~2500 Mhz\n BIOS Version: Amazon EC2 1.0, 10/16/2017\n Windows Directory: C:\\Windows\n System Directory: C:\\Windows\\system32\n Boot Device: \\Device\\HarddiskVolume1\n System Locale: en-us;English (United States)\n Input Locale: en-us;English (United States)\n Time Zone: (UTC) Coordinated Universal Time\n Total Physical Memory: 8,090 MB\n Available Physical Memory: 4,843 MB\n Virtual Memory: Max Size: 10,010 MB\n Virtual Memory: Available: 7,405 MB\n Virtual Memory: In Use: 2,605 MB\n\n\n\n\n\nI think we ought to do something here. If newer environments cause failures\nlike this, it seems likely that this will spread to more and more applications\nover time...\n\n\n\n\n\nJust to reassure myself I have not been hallucinating, I repeated\n the test. \n\n\n\npgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf/root/HEAD/inst\n $ /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile\n start > startlog 2>&1}) ; print $? ? \"BANG: $?\\n\" :\n \"OK\\n\";'\n OK\n\n pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf/root/HEAD/inst\n $ /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile\n stop > stoplog 2>&1}) ; print $? ? \"BANG: $?\\n\" :\n \"OK\\n\";'\n BANG: 33280\n\n\n\nIf you want to play I can arrange access.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 5 May 2023 07:08:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-05 07:08:39 -0400, Andrew Dunstan wrote:\n> On 2023-05-04 Th 19:54, Andres Freund wrote:\n> > Hm. I can't reproduce this in my test win10 VM, unfortunately. What OS / OS\n> > version is the host? Any chance to get systeminfo.exe output or something like\n> > that?\n> \n> \n> Its a Windows Server 2019 (v 1809) instance running on AWS.\n\nHm. When I hit the python issue I also couldn't repro it on windows 10. Cirrus\nwas also using Windows Server 2019...\n\n\n> > I think we ought to do something here. If newer environments cause failures\n> > like this, it seems likely that this will spread to more and more applications\n> > over time...\n> > \n> \n> Just to reassure myself I have not been hallucinating, I repeated the test.\n> \n> \n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf/root/HEAD/inst\n> $ /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start >\n> startlog 2>&1}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n> OK\n> \n> pgrunner@EC2AMAZ-GCB871B UCRT64 ~/bf/root/HEAD/inst\n> $ /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop >\n> stoplog 2>&1}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n> BANG: 33280\n\nOh, so it only happens when stopping, never when starting? That's\ninteresting...\n\n\n> If you want to play I can arrange access.\n\nThat'd be very helpful.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 8 May 2023 12:58:22 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-05 07:08:39 -0400, Andrew Dunstan wrote:\n> If you want to play I can arrange access.\n\nAndrew did - thanks!\n\n\nA first observeration is that making the shell command slightly more\ncomplicated, by echoing $? after pg_ctl, prevents the error:\n\n/usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1;}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\nBANG: 33280\n\n/usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1; echo $?}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n0\nOK\n\nSo does manually or or via a subshell adding another layer of shell.\n\n\nAs Andrew observed earlier, the issue does not occur when not performing\nredirection of the output. One interesting bit there is that the perl docs for\nsystem include:\nhttps://perldoc.perl.org/functions/system\n\n> If there are no shell metacharacters in the argument, it is split into words\n> and passed directly to execvp, which is more efficient. On Windows, only the\n> system PROGRAM LIST syntax will reliably avoid using the shell; system LIST,\n> even with more than one element, will fall back to the shell if the first\n> spawn fails.\n\nMy guesss is that the issue somehow is triggered around the shell handling.\n\n\nOne relevant bit: If I use strace (from msys) within system, the subprograms\n(shell and pg_ctl) actually exit with 0, from what I can tell - but 33280\nstill is returned. Unfortunately, if I use strace for all of perl, the error\nvanishes.\n\n\nPerhaps are some odd interactions with the stuff that InheritstdHandles()\ndoes?\n\nAndrew, is it ok if modify pg_ctl.c and rebuild? I don't know how \"detached\"\nfrom the actual buildfarm animal the system you gave me access to is...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 May 2023 12:38:17 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-05-15 Mo 15:38, Andres Freund wrote:\n> Hi,\n>\n> On 2023-05-05 07:08:39 -0400, Andrew Dunstan wrote:\n>> If you want to play I can arrange access.\n> Andrew did - thanks!\n>\n>\n> A first observeration is that making the shell command slightly more\n> complicated, by echoing $? after pg_ctl, prevents the error:\n>\n> /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1;}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n> BANG: 33280\n>\n> /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1; echo $?}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n> 0\n> OK\n\n\nYou're now testing something else, namely the return of the echo rather \nthan the call to pg_ctl, so I don't think this is any kind of answer. It \nwould just be ignoring the result of pg_ctl.\n\n\n>\n> So does manually or or via a subshell adding another layer of shell.\n>\n>\n> As Andrew observed earlier, the issue does not occur when not performing\n> redirection of the output. One interesting bit there is that the perl docs for\n> system include:\n> https://perldoc.perl.org/functions/system\n>\n>> If there are no shell metacharacters in the argument, it is split into words\n>> and passed directly to execvp, which is more efficient. On Windows, only the\n>> system PROGRAM LIST syntax will reliably avoid using the shell; system LIST,\n>> even with more than one element, will fall back to the shell if the first\n>> spawn fails.\n> My guesss is that the issue somehow is triggered around the shell handling.\n>\n>\n> One relevant bit: If I use strace (from msys) within system, the subprograms\n> (shell and pg_ctl) actually exit with 0, from what I can tell - but 33280\n> still is returned. Unfortunately, if I use strace for all of perl, the error\n> vanishes.\n>\n>\n> Perhaps are some odd interactions with the stuff that InheritstdHandles()\n> does?\n\n\nI observed the same thing with strace. Kind of a Heisenbug.\n\n\n>\n> Andrew, is it ok if modify pg_ctl.c and rebuild? I don't know how \"detached\"\n> from the actual buildfarm animal the system you gave me access to is...\n>\n\nFeel free to do anything you want. This is a completely separate \ninstance from the buildfarm animals. When we're done with this issue the \nEC2 instance will go away.\n\nIf you use the script just run in test mode or from-source mode, so it \ndoesn't try to report results (that would fail anyway, as it doesn't \nhave a registered secret). You might have to force have_ipc_run to 0. Or \nyou can just build / install manually.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-15 Mo 15:38, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-05-05 07:08:39 -0400, Andrew Dunstan wrote:\n\n\nIf you want to play I can arrange access.\n\n\n\nAndrew did - thanks!\n\n\nA first observeration is that making the shell command slightly more\ncomplicated, by echoing $? after pg_ctl, prevents the error:\n\n/usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1;}) ; print $? ? 
\"BANG: $?\\n\" : \"OK\\n\";'\nBANG: 33280\n\n/usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1; echo $?}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n0\nOK\n\n\n\nYou're now testing something else, namely the return of the echo\n rather than the call to pg_ctl, so I don't think this is any kind\n of answer. It would just be ignoring the result of pg_ctl.\n\n\n\n\n\n\nSo does manually or or via a subshell adding another layer of shell.\n\n\nAs Andrew observed earlier, the issue does not occur when not performing\nredirection of the output. One interesting bit there is that the perl docs for\nsystem include:\nhttps://perldoc.perl.org/functions/system\n\n\n\nIf there are no shell metacharacters in the argument, it is split into words\nand passed directly to execvp, which is more efficient. On Windows, only the\nsystem PROGRAM LIST syntax will reliably avoid using the shell; system LIST,\neven with more than one element, will fall back to the shell if the first\nspawn fails.\n\n\n\nMy guesss is that the issue somehow is triggered around the shell handling.\n\n\nOne relevant bit: If I use strace (from msys) within system, the subprograms\n(shell and pg_ctl) actually exit with 0, from what I can tell - but 33280\nstill is returned. Unfortunately, if I use strace for all of perl, the error\nvanishes.\n\n\nPerhaps are some odd interactions with the stuff that InheritstdHandles()\ndoes?\n\n\n\nI observed the same thing with strace. Kind of a Heisenbug.\n\n\n\n\n\n\nAndrew, is it ok if modify pg_ctl.c and rebuild? I don't know how \"detached\"\nfrom the actual buildfarm animal the system you gave me access to is...\n\n\n\n\n\nFeel free to do anything you want. This is a completely separate\n instance from the buildfarm animals. When we're done with this\n issue the EC2 instance will go away.\nIf you use the script just run in test mode or from-source mode,\n so it doesn't try to report results (that would fail anyway, as it\n doesn't have a registered secret). You might have to force\n have_ipc_run to 0. Or you can just build / install manually.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 15 May 2023 16:01:39 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-15 16:01:39 -0400, Andrew Dunstan wrote:\n> On 2023-05-15 Mo 15:38, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-05-05 07:08:39 -0400, Andrew Dunstan wrote:\n> > > If you want to play I can arrange access.\n> > Andrew did - thanks!\n> > \n> > \n> > A first observeration is that making the shell command slightly more\n> > complicated, by echoing $? after pg_ctl, prevents the error:\n> > \n> > /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1;}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n> > BANG: 33280\n> > \n> > /usr/bin/perl -e 'system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile start > startlog 2>&1}) ;system(qq{\"bin/pg_ctl\" -D data-C -w -l logfile stop > stoplog 2>&1; echo $?}) ; print $? ? \"BANG: $?\\n\" : \"OK\\n\";'\n> > 0\n> > OK\n> \n> \n> You're now testing something else, namely the return of the echo rather than\n> the call to pg_ctl, so I don't think this is any kind of answer. It would\n> just be ignoring the result of pg_ctl.\n\nIt wouldn't really - the echo $? inside the system() would report the\nerror. Which it doesn't - note the \"0\" in the second output.\n\n\n> > Andrew, is it ok if modify pg_ctl.c and rebuild? I don't know how \"detached\"\n> > from the actual buildfarm animal the system you gave me access to is...\n> > \n> \n> Feel free to do anything you want. This is a completely separate instance\n> from the buildfarm animals. When we're done with this issue the EC2 instance\n> will go away.\n\nThanks!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 May 2023 13:13:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-15 13:13:26 -0700, Andres Freund wrote:\n> It wouldn't really - the echo $? inside the system() would report the\n> error. Which it doesn't - note the \"0\" in the second output.\n\nAh. Interesting. Part of the issue is perl (or msys?) swalling some error\ndetails.\n\nI could see more details in strace once I added another layer of shell\nevaluation inside the system() call.\n\n 190 478261 [main] bash 44432 frok::parent: CreateProcessW (C:\\tools\\nmsys64\\usr\\bin\\bash.exe, C:\\tools\\nmsys64\\usr\\bin\\bash.exe, 0, 0, 1, 0x420, 0, 0, 0x7FFFFBE10, 0x7FFFF\nBDB0)\n--- Process 7152 created\n[...]\n 1556 196093 [main] bash 44433 child_info_spawn::worker: pid 44433, prog_arg ./tmp_install/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/pg_ctl, cmd line C:\\tools\\nmsys6\n4\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build\\tmp_install\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\HEAD\\inst\\bin\\pg_ctl.exe -D t -w -l logfile stop)\n 128 196221 [main] bash 44433! child_info_spawn::worker: new process name \\\\?\\C:\\tools\\nmsys64\\home\\pgrunner\\bf\\root\\HEAD\\pgsql.build\\tmp_install\\tools\\nmsys64\\home\\pgrunne\nr\\bf\\root\\HEAD\\inst\\bin\\pg_ctl.exe\n[...]\n--- Process 6136 (pid: 44433) exited with status 0x0\n[...]\n--- Process 7152 exited with status 0xc000013a\n5292450 5816310 [waitproc] bash 44432 pinfo::maybe_set_exit_code_from_windows: pid 44433, exit value - old 0x0, windows 0xC000013A, MSYS 0x8000002\n\nSo indeed, pg_ctl exits with 0, but bash ends up with a different exit code.\n\nWhat's very interesting here is that the error is 0xC000013A, which is quite\ndifferent from the 33280 that perl then reports. From what I can see bash\nactually returns 0xC000013A - I don't know how perl ends up with 33280 /\n0x8200 from that.\n\nEither way, 0xC000013A is interesting - that's 0xC000013A,\nSTATUS_CONTROL_C_EXIT.\n\n\nVery interestingly the problem vanishes as soon as I add a redirection for\nstandard input into the mix. Notably it suffices to redirect stdin in the\npg_ctl *start*, even if not done for pg_ctl stop. There also is no issue if\nperl's stdin is redirected from /dev/null.\n\nMy guess is that msys has an issue with refcounting consoles across multiple\nprocesses.\n\n\nAfter that I was able to reproduce the issue without really involving perl:\n\nbash -c './tmp_install/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/pg_ctl -D t -w -l logfile start > startlog 2>&1; ./tmp_install/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/pg_ctl -D t -w -l logfile stop > stoplog 2>&1; echo inner: $?'; echo outer: $?\n\n+ bash -c './tmp_install/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/pg_ctl -D t -w -l logfile start > startlog 2>&1; ./tmp_install/tools/nmsys64/home/pgrunner/bf/root/HEAD/inst/bin/pg_ctl -D t -w -l logfile stop > stoplog 2>&1; echo inner: $?'\ninner: 130\n+ echo outer: 0\nouter: 0\n\nIf you add -e, the inner: is obviously \"transferred\" to the outer: output.\n\nAs soon as either the pg_ctl for the start, or the whole bash invocation, has\nstdin redirected, the problem vanishes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 May 2023 15:30:28 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
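One way to reconcile the three numbers in the exchange above: cygwin/msys translates STATUS_CONTROL_C_EXIT (0xC000013A) into "terminated by signal 2", bash reports that as the conventional 128 + 2 = 130, and perl's $? carries the shell's exit status in its high byte, so 130 << 8 = 33280 (0x8200), which is exactly the value the earlier perl tests printed as BANG: 33280. The snippet below only restates that arithmetic; the causal reading is an interpretation of the strace output, not something stated outright in the thread.

  # 128 + SIGINT(2) = 130; perl packs the child's exit status into the high byte of $?.
  printf "%d %d 0x%X\n", 128 + 2, 130 << 8, 130 << 8;   # prints: 130 33280 0x8200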
{
"msg_contents": "Hi,\n\nOn 2023-05-15 15:30:28 -0700, Andres Freund wrote:\n> As soon as either the pg_ctl for the start, or the whole bash invocation, has\n> stdin redirected, the problem vanishes.\n\nFor a moment I thought this could be related to InheritStdHandles() - but no,\nit doesn't make a difference.\n\nThere's loads of handles referencing cygwin alive in pg_ctl.\n\nBased on difference in strace output for bash -c \"pg_ctl stop\" for the case\nwhere start redirected stdin (#1) and where not (#2), it looks like some part\nof msys / cygwin sees that stdin is alive when preparing to execute \"pg_ctl\nstop\", and then runs into trouble.\n\nThe way we start the child process on windows makes the use of cmd.exe for\nredirection pretty odd.\n\n\nI couldn't trivially reproduce this with a much simpler case (just nohup\nsleep). Perhaps it's dependent on a wrapper cmd or such.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 May 2023 16:43:56 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-05-15 Mo 19:43, Andres Freund wrote:\n> Hi,\n>\n> On 2023-05-15 15:30:28 -0700, Andres Freund wrote:\n>> As soon as either the pg_ctl for the start, or the whole bash invocation, has\n>> stdin redirected, the problem vanishes.\n> For a moment I thought this could be related to InheritStdHandles() - but no,\n> it doesn't make a difference.\n>\n> There's loads of handles referencing cygwin alive in pg_ctl.\n>\n> Based on difference in strace output for bash -c \"pg_ctl stop\" for the case\n> where start redirected stdin (#1) and where not (#2), it looks like some part\n> of msys / cygwin sees that stdin is alive when preparing to execute \"pg_ctl\n> stop\", and then runs into trouble.\n>\n> The way we start the child process on windows makes the use of cmd.exe for\n> redirection pretty odd.\n>\n>\n> I couldn't trivially reproduce this with a much simpler case (just nohup\n> sleep). Perhaps it's dependent on a wrapper cmd or such.\n>\n>\n\nI don't know where this all leaves us. It's still more than odd that the \nstart works fine and the stop doesn't.\n\nThis piece of code has worked happily for years. It's only a recent \ninstallation or update of msys2 that's made the problem appear.\n\nI have implemented a workaround where IPC::Run is available - that means \na little extra one-off work for people using msys2, but it's not a huge \nburden. Beyond that I don't really want to spend a lot more energy on it.\n\nI suppose the alternative would be to change the way the buildfarm calls \npg_ctl stop. Do you have a concrete suggestion for that?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-15 Mo 19:43, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-05-15 15:30:28 -0700, Andres Freund wrote:\n\n\nAs soon as either the pg_ctl for the start, or the whole bash invocation, has\nstdin redirected, the problem vanishes.\n\n\n\nFor a moment I thought this could be related to InheritStdHandles() - but no,\nit doesn't make a difference.\n\nThere's loads of handles referencing cygwin alive in pg_ctl.\n\nBased on difference in strace output for bash -c \"pg_ctl stop\" for the case\nwhere start redirected stdin (#1) and where not (#2), it looks like some part\nof msys / cygwin sees that stdin is alive when preparing to execute \"pg_ctl\nstop\", and then runs into trouble.\n\nThe way we start the child process on windows makes the use of cmd.exe for\nredirection pretty odd.\n\n\nI couldn't trivially reproduce this with a much simpler case (just nohup\nsleep). Perhaps it's dependent on a wrapper cmd or such.\n\n\n\n\n\n\nI don't know where this all leaves us. It's still more than odd\n that the start works fine and the stop doesn't.\nThis piece of code has worked happily for years. It's only a\n recent installation or update of msys2 that's made the problem\n appear.\nI have implemented a workaround where IPC::Run is available -\n that means a little extra one-off work for people using msys2, but\n it's not a huge burden. Beyond that I don't really want to spend a\n lot more energy on it.\nI suppose the alternative would be to change the way the\n buildfarm calls pg_ctl stop. Do you have a concrete suggestion for\n that?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 16 May 2023 08:55:20 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-16 08:55:20 -0400, Andrew Dunstan wrote:\n> I don't know where this all leaves us. It's still more than odd that the\n> start works fine and the stop doesn't.\n\n From what I understand it's just a question of starting another shell, with\nsome redirection, after having previously started a shell, which left a\nprogram running (thus still referencing the same console device).\n\n\n> This piece of code has worked happily for years. It's only a recent\n> installation or update of msys2 that's made the problem appear.\n\nYea, it does look like a bug somewhere. I just don't know how to make it a\nsmall enough reproducer right now.\n\n\n> I have implemented a workaround where IPC::Run is available - that means a\n> little extra one-off work for people using msys2, but it's not a huge\n> burden. Beyond that I don't really want to spend a lot more energy on it.\n\n> I suppose the alternative would be to change the way the buildfarm calls\n> pg_ctl stop. Do you have a concrete suggestion for that?\n\nThe easiest fix is to redirect stdin to /dev/null (or some file, if that's\neasier to do portably) - that should fix the problem entirely, without needing\nIPC::Run.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 May 2023 14:52:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-05-16 Tu 17:52, Andres Freund wrote:\n>\n>> I suppose the alternative would be to change the way the buildfarm calls\n>> pg_ctl stop. Do you have a concrete suggestion for that?\n> The easiest fix is to redirect stdin to /dev/null (or some file, if that's\n> easier to do portably) - that should fix the problem entirely, without needing\n> IPC::Run.\n>\n\nShould only be needed for the start command, right? I can probably just \nadd \"< $devnull\" to the command. I'll test it out.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-16 Tu 17:52, Andres Freund\n wrote:\n\n\n\nI suppose the alternative would be to change the way the buildfarm calls\npg_ctl stop. Do you have a concrete suggestion for that?\n\n\n\nThe easiest fix is to redirect stdin to /dev/null (or some file, if that's\neasier to do portably) - that should fix the problem entirely, without needing\nIPC::Run.\n\n\n\n\n\nShould only be needed for the start command, right? I can\n probably just add \"< $devnull\" to the command. I'll test it\n out.\n\n\ncheers\n\n\nandrew\n \n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 17 May 2023 17:51:41 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
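A minimal sketch of the "< $devnull" change Andrew proposes above, grafted onto the same test commands used earlier in the thread; obtaining the null device via File::Spec is an assumption about how the buildfarm client might do it, not a quote from it.

  # Only the start command gets the stdin redirection, per the discussion above.
  use File::Spec;

  my $devnull = File::Spec->devnull();   # typically '/dev/null' under msys perl, 'nul' on native Windows
  system(qq{"bin/pg_ctl" -D data-C -w -l logfile start < $devnull > startlog 2>&1});
  print $? ? "BANG: $?\n" : "OK\n";
  system(qq{"bin/pg_ctl" -D data-C -w -l logfile stop > stoplog 2>&1});
  print $? ? "BANG: $?\n" : "OK\n";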
{
"msg_contents": "Hi, \n\nOn May 17, 2023 2:51:41 PM PDT, Andrew Dunstan <[email protected]> wrote:\n>\n>On 2023-05-16 Tu 17:52, Andres Freund wrote:\n>> \n>>> I suppose the alternative would be to change the way the buildfarm calls\n>>> pg_ctl stop. Do you have a concrete suggestion for that?\n>> The easiest fix is to redirect stdin to /dev/null (or some file, if that's\n>> easier to do portably) - that should fix the problem entirely, without needing\n>> IPC::Run.\n>> \n>\n>Should only be needed for the start command, right? \n\nI think so. \n\n> I can probably just add \"< $devnull\" to the command. I'll test it out.\n\nCool.\n\nAndres \n\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 17 May 2023 14:55:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
},
{
"msg_contents": "On 2023-05-17 We 17:55, Andres Freund wrote:\n> Hi,\n>\n> On May 17, 2023 2:51:41 PM PDT, Andrew Dunstan<[email protected]> wrote:\n>> On 2023-05-16 Tu 17:52, Andres Freund wrote:\n>>>> I suppose the alternative would be to change the way the buildfarm calls\n>>>> pg_ctl stop. Do you have a concrete suggestion for that?\n>>> The easiest fix is to redirect stdin to /dev/null (or some file, if that's\n>>> easier to do portably) - that should fix the problem entirely, without needing\n>>> IPC::Run.\n>>>\n>> Should only be needed for the start command, right?\n> I think so.\n>\n>> I can probably just add \"< $devnull\" to the command. I'll test it out.\n> Cool.\n>\n\nOK, that seems to work. *whew*. Thanks for your help.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-05-17 We 17:55, Andres Freund\n wrote:\n\n\nHi, \n\nOn May 17, 2023 2:51:41 PM PDT, Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2023-05-16 Tu 17:52, Andres Freund wrote:\n\n\n\n\n\nI suppose the alternative would be to change the way the buildfarm calls\npg_ctl stop. Do you have a concrete suggestion for that?\n\n\nThe easiest fix is to redirect stdin to /dev/null (or some file, if that's\neasier to do portably) - that should fix the problem entirely, without needing\nIPC::Run.\n\n\n\n\nShould only be needed for the start command, right? \n\n\n\nI think so. \n\n\n\nI can probably just add \"< $devnull\" to the command. I'll test it out.\n\n\n\nCool.\n\n\n\n\n\nOK, that seems to work. *whew*. Thanks for your help.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 17 May 2023 21:54:59 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue with meson builds on msys2"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPlease consider fixing the following unique words/identifiers introduced in v16:\n1. addresess -> addresses\n2. adminstrator -> administrator // the same typo found in src/backend/po/id.po, but perhaps it should be fixed via \npgsql-translators\n3. appeneded -> appended\n4. appliciable -> applicable\n5. BackroundPsql -> BackgroundPsql\n6. binaies -> binaries\n7. compresion -> compression\n8. containsthe -> contains the\n9. contextes -> contexts\n10. deparseAnalyzeTuplesSql -> deparseAnalyzeInfoSql // that function was renamed in 57d11ef0\n\n11. DO_LARGE_OJECT_DATA -> DO_LARGE_OBJECT_DATA\n12. doesnt't -> doesn't\n13. dst_perminfo -> dst_perminfos\n14. eror -> error\n15. execpt -> except\n16. forech -> foreach\n17. GetResultRelCheckAsUser -> ExecGetResultRelCheckAsUser\n18. GUCS -> GUCs\n19. happend -> happened\n20. immitated -> imitated\n\n21. insert_xid -> tuple_xid // see bfcf1b348\n22. ldap_add -> ldapadd_file\n23. ldapbindpassw -> ldapbindpasswd\n24. MemoryChunkSetExternal -> MemoryChunkSetHdrMaskExternal\n25. non-encyrpted -> non-encrypted\n26. --no-process_main -> --no-process-main\n27. optionn -> option\n28. Othewise -> Otherwise\n29. parellel -> parallel\n30. permissons -> permissions\n\n31. pg_pwrite_zeroes -> pg_pwrite_zeros\n32. pg_writev -> pg_pwritev\n33. possbile -> possible\n34. pqsymlink -> pgsymlink\n35. PG_GET_WAL_FPI_BLOCK_COLS -> PG_GET_WAL_BLOCK_INFO_COLS\n36. RangeVarCallbackOwnsTable -> RangeVarCallbackMaintainsTable // see 60684dd83\n37. remaing -> remaining\n38. ResourceOwnerForgetBufferIOs -> ResourceOwnerForgetBufferIO\n39. RMGRDESC_UTILS_H -> RMGRDESC_UTILS_H_ // or may be the other way\n40. rolenamehash -> rolename_hash\n\n41. ROLERECURSE_SETROLe -> ROLERECURSE_SETROLE\n42. sentinal -> sentinel\n43. smgzerorextend -> smgrzeroextend\n44. stacktoodeep -> rstacktoodeep // an excessive character was deleted with db4f21e4a?\n45. tar_set_error -- remove (obsolete since ebfb814f7)\n46. test_tranche_name -- remove (not used, see 006b69fd9)\n47. varilables -> variables\n48. xid_commit_status -> xmin_commit_status\n\nAlso, maybe OID_MAX should be removed from src/include/postgres_ext.h as it's unused since eb8312a22.\n\nBeside that, this simple script:\nfor w in $(cat src/tools/pgindent/typedefs.list); do grep -q -P \"\\b$w\\b\" -r * --exclude typedefs.list || echo \"$w\"; done\ndetects 58 identifiers that don't exist in the source tree anymore (see typedefs.lost attached).\nMaybe they should be removed from typedefs.list too.\n\nBest regards,\nAlexander",
"msg_date": "Mon, 17 Apr 2023 21:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fix typos and inconsistencies for v16"
},
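For what it's worth, a Perl restatement of the shell one-liner quoted in the message above, only to spell out what it checks; it is not proposing a new tool, and the --exclude-dir for .git is an addition that is not in the original command.

  # Reports typedefs.list entries that no longer match anywhere in the tree,
  # one recursive grep per entry, like the original one-liner.
  use strict;
  use warnings;

  open my $fh, '<', 'src/tools/pgindent/typedefs.list' or die $!;
  while (my $w = <$fh>)
  {
      chomp $w;
      next if $w eq '';
      system('grep', '-q', '-P', '-r',
             '--exclude', 'typedefs.list', '--exclude-dir', '.git',
             "\\b\Q$w\E\\b", '.') == 0
          or print "$w\n";
  }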
{
"msg_contents": "On Mon, Apr 17, 2023 at 09:00:00PM +0300, Alexander Lakhin wrote:\n> Hello hackers,\n> \n> Please consider fixing the following unique words/identifiers introduced in v16:\n\nWell done.\n\nNote that your patches are overlapping:\n\n 3 --- a/src/backend/utils/misc/guc.c\n 2 --- a/src/test/perl/PostgreSQL/Test/BackgroundPsql.pm\n 2 --- a/src/test/ldap/LdapServer.pm\n 2 --- a/src/interfaces/libpq/t/004_load_balance_dns.pl\n 2 --- a/src/backend/utils/adt/acl.c\n\nIt'd make sense if the changes to each file were isolated to a single\npatch (especially 004_load and acl.c).\n\n> -\t\t * USER SET values are appliciable only for PGC_USERSET parameters. We\n> +\t\t * USER SET values are applicable only for PGC_USERSET parameters. We\n> \t\t * use InvalidOid as role in order to evade possible privileges of the\n\nand s/evade/avoid/\n\n> +++ b/src/bin/pg_dump/pg_dumpall.c\n\nYou missed \"boostrap\" :)\n\nI independently found 11 of the same typos you did:\n\n> 1. addresess -> addresses\n> 3. appeneded -> appended\n> 4. appliciable -> applicable\n> 8. containsthe -> �contains the\n> 15. execpt -> except\n> 19. happend -> happened\n> 27. optionn -> option\n> 30. permissons -> permissions\n> 37. remaing -> remaining\n> 42. sentinal -> sentinel\n> 47. varilables -> variables\n\nBut hadn't yet convinced myself to start the process of defending each\none of the fixes. Attached some others that I found.\n\n-- \nJustin",
"msg_date": "Mon, 17 Apr 2023 17:10:29 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 06:00, Alexander Lakhin <[email protected]> wrote:\n> Please consider fixing the following unique words/identifiers introduced in v16:\n\nThanks, I've pushed all of these apart from the following 2.\n\n> 45. tar_set_error -- remove (obsolete since ebfb814f7)\n> 46. test_tranche_name -- remove (not used, see 006b69fd9)\n\nThese didn't quite fit in with the \"typo fixes\" category of the\ncommit, so I left them off the commit I just pushed.\n\n> Also, maybe OID_MAX should be removed from src/include/postgres_ext.h as it's unused since eb8312a22.\n\nI didn't touch this. It seems like it could be useful for extensions\nand client apps even if it's not used in core.\n\n> Beside that, this simple script:\n> for w in $(cat src/tools/pgindent/typedefs.list); do grep -q -P \"\\b$w\\b\" -r * --exclude typedefs.list || echo \"$w\"; done\n> detects 58 identifiers that don't exist in the source tree anymore (see typedefs.lost attached).\n> Maybe they should be removed from typedefs.list too.\n\nI didn't touch this either. typedefs.list normally gets some work\ndone during the pgindent run, which is likely going to happen around\nMay or June. Maybe you can check back after that's done and make sure\nall these unused ones were removed. I'm not sure if the process that's\ndone for that only finds new ones that are now required or if it\ncompletely generates a new list.\n\nDavid\n\n\n",
"msg_date": "Tue, 18 Apr 2023 13:35:00 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 10:10, Justin Pryzby <[email protected]> wrote:\n> > - * USER SET values are appliciable only for PGC_USERSET parameters. We\n> > + * USER SET values are applicable only for PGC_USERSET parameters. We\n> > * use InvalidOid as role in order to evade possible privileges of the\n>\n> and s/evade/avoid/\n\nI didn't touch this. You'll need to provide more justification for why\nyou think it's more correct than what's there. It might not be worth\ntoo much discussion, however.\n\n> Attached some others that I found.\n\nPushed the rest. Thanks\n\nDavid\n\n\n",
"msg_date": "Tue, 18 Apr 2023 14:06:43 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "David Rowley <[email protected]> writes:\n> On Tue, 18 Apr 2023 at 06:00, Alexander Lakhin <[email protected]> wrote:\n>> Also, maybe OID_MAX should be removed from src/include/postgres_ext.h as it's unused since eb8312a22.\n\n> I didn't touch this. It seems like it could be useful for extensions\n> and client apps even if it's not used in core.\n\nAgreed, bad idea. For better or worse, that's part of our client API now.\n\n>> Beside that, this simple script:\n>> for w in $(cat src/tools/pgindent/typedefs.list); do grep -q -P \"\\b$w\\b\" -r * --exclude typedefs.list || echo \"$w\"; done\n>> detects 58 identifiers that don't exist in the source tree anymore (see typedefs.lost attached).\n>> Maybe they should be removed from typedefs.list too.\n\n> I didn't touch this either. typedefs.list normally gets some work\n> done during the pgindent run, which is likely going to happen around\n> May or June.\n\nYeah, it will get refreshed from the buildfarm output [1] pretty soon.\nA quick check says that as of today, that refresh would add 81 names\nand remove 94. (Seems like a remarkably high number of removals,\nbut I didn't dig further than counting the diff output.)\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/typedefs.pl?show_list\n\n\n",
"msg_date": "Mon, 17 Apr 2023 22:11:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 02:06:43PM +1200, David Rowley wrote:\n> On Tue, 18 Apr 2023 at 10:10, Justin Pryzby <[email protected]> wrote:\n> > > - * USER SET values are appliciable only for PGC_USERSET parameters. We\n> > > + * USER SET values are applicable only for PGC_USERSET parameters. We\n> > > * use InvalidOid as role in order to evade possible privileges of the\n> >\n> > and s/evade/avoid/\n> \n> I didn't touch this. You'll need to provide more justification for why\n> you think it's more correct than what's there. \n\nI'd noticed that it's a substitution/mistake that's been made in the\npast. I dug up:\n\n9436041ed848debb3d64fb5fbff6cdb35bc46d04\n8e12f4a250d250a89153da2eb9b91c31bb80c483\ncd9479af2af25d7fa9bfd24dd4dcf976b360f077\n6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n911e70207703799605f5a0e8aad9f06cff067c63\n\n> It might not be worth too much discussion, however.\n\n+many. I may resend the patch at some later date.\n\n> > Attached some others that I found.\n> \n> Pushed the rest. Thanks\n\nThanks!\n\n-- \nJustin \n\n\n",
"msg_date": "Tue, 18 Apr 2023 13:47:21 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "Hi Justin and David,\n\n18.04.2023 01:10, Justin Pryzby wrote:\n> Well done.\n\nThank you for reviewing!\n\n> On Mon, Apr 17, 2023 at 09:00:00PM +0300, Alexander Lakhin wrote:\n>> Hello hackers,\n>>\n>> Please consider fixing the following unique words/identifiers introduced in v16:\n> Note that your patches are overlapping:\n> ...\n> It'd make sense if the changes to each file were isolated to a single\n> patch (especially 004_load and acl.c).\n\nI'd hoped that most of the proposed fixes will be accepted, so conflicts due\nto skipping of some changes seemed unlikely to me. So if you are not\nstrongly disagree, I would continue presenting my findings the same way.\n\n> ...\n> You missed \"boostrap\" :)\n\nYes, that's because \"boostrap\" was not unique, but my semi-automatic approach\nis based on `uniq -u`, so I'm sure that there are typos that can't be found\nthis way.\n\n> But hadn't yet convinced myself to start the process of defending each\n> one of the fixes. Attached some others that I found.\n\nYeah, those are good catches too, but e. g. \"privilges\" is not new in v16,\nso it's fallen out of my \"hot errors\" category. If we're going to fix not so\nhot ones too now, please look at the similar list for v15+ (596b5af1d..HEAD).\n\n1. abbrevated -> abbreviated\n2. ArchiveModeRequested -> ArchiveRecoveryRequested\n3. BufFileOpenShared -> BufFileOpenFileSet // see dcac5e7ac\n4. check_publication_columns -> pub_collist_contains_invalid_column // see note 1\n5. configuation -> configuration\n6. copyAclUserName -> dequoteAclUserName // see 0c9d84427\n7. EndWalRecovery -> FinishWalRecovery\n8. HaveRegisteredOrActiveSnapshots -> HaveRegisteredOrActiveSnapshot\n9. idiosyncracies -> idiosyncrasies\n10. iif -> iff\n\n11. initpriv -> initprivs\n12. inserted_destrel -> insert_destrel\n13. Intialize -> Initialize\n14. invtrans -> invtransfn\n15. isolation-level -> isolation level\n16. lefthasheqoperator -> left_hasheqoperator + righthasheqoperator -> right_hasheqoperator\n17. LRQ_NO_IO -> LRQ_NEXT_NO_IO\n18. minRecovery point -> minRecoveryPoint\n19. multidimensional-aware -> multidimension-aware // sync with gistbuild.c\n20. ParalleVacuumState -> ParallelVacuumState\n\n21. PgStatShm_Stat*Entry -> PgStatShared_* // see note 2\n22. plpython_call_handler -> plpython3_call_handler // see 9b7e24a2c\n23. pulications -> publications\n24. ReadCheckPointRecord -> ReadCheckpointRecord\n25. relkkinds -> relkinds\n26. separare -> separate // though perhaps it's not the most suitable word here\n27. setup_formatted_log_time -> get_formatted_log_time // see ac7c80758\n28. SPI_abort -> SPI_rollback\n29. ssup_datum_int32_compare -> ssup_datum_int32_cmp\n30. ssup_datum_signed_compare -> ssup_datum_signed_cmp\n\n31. ssup_datum_unsigned_compare -> ssup_datum_unsigned_cmp\n32. SUBSCRITPION -> SUBSCRIPTION\n33. tabelspaces -> tablespaces\n34. table_state_not_ready -> table_states_not_ready\n35. underling -> underlying\n36. WalRecoveryResult -> EndOfWalRecoveryInfo\n\nAlso, I'd like to note that the following entities/references are orphaned now,\nso maybe some of them could be removed too:\n1. gen-rtab (in pgcrypto/Makefile) // orphaned since db7d1a7b0\n2. pgstat_temp_directory // was left by b3abca681 for v15, but maybe it's time to remove it for v16\n3. pgstat_write_statsfiles (in valgrind.supp)\n4. quote_system_arg (in vcregress.pl) // unused since d2a2ce418\n5. standard_initdb (in vcregress.pl) // unused since 322becb60\n/* though maybe vcregress.pl will be removed completely soon */\n6. 
int pstat; /* mcrypt uses it */ (in contrib/pgcrypto/px.h)\n/* \"mcrypt\" became unique after abe81ee08, support for libmcrypt was removed at 2005\n(3cc866123) */\n\nNote 1. A check that was located in check_publication_columns() in\nv13-0003-Add-column-filtering-to-logical-replication.patch [1],\ncan be found in pub_collist_contains_invalid_column() now (see also [2]).\n\nNote 2. The inconsistency appeared in [3],\nv67-0007-pgstat-store-statistics-in-shared-memory.patch was correct in\nthis aspect.\n\n\n18.04.2023 04:35, David Rowley wrote:\n>> Please consider fixing the following unique words/identifiers introduced in v16:\n> Thanks, I've pushed all of these apart from the following 2.\nThank you!\n\n[1] https://www.postgresql.org/message-id/202112302021.ca7ihogysgh3%40alvherre.pgsql\n[2] https://www.postgresql.org/message-id/CAA4eK1K5pkrPT9z5TByUPptExian5c18g6GnfNf9Cr97QdPbjw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/20220404041516.cctrvpadhuriawlq%40alap3.anarazel.de\n\nBest regards,\nAlexander",
"msg_date": "Tue, 18 Apr 2023 22:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Tue, Apr 18, 2023 at 02:06:43PM +1200, David Rowley wrote:\n>> On Tue, 18 Apr 2023 at 10:10, Justin Pryzby <[email protected]> wrote:\n>>> and s/evade/avoid/\n\n>> I didn't touch this. You'll need to provide more justification for why\n>> you think it's more correct than what's there. \n\n> I'd noticed that it's a substitution/mistake that's been made in the\n> past.\n\n\"Evade\" doesn't seem like le mot juste there; it's got negative\nconnotations. But the code around it is just horrible. Some offenses:\n\n* No documentation in the function header comment of what the\nusersetArray parameter is or does. Which is bad enough in itself,\nbut what the parameter actually does is falsify the header comment's\nprincipal claim that the passed context is what is used. So I don't\nfind that omission acceptable.\n\n* Non-obvious, and quite unnecessary, dependency on the isnull variable\nhaving been left in a particular state by previous code.\n\n* For me, at least, it'd read better if the if/else arms were swapped,\nallowing removal of the negation in the if-condition and bringing\nthe code this comment comments on closer to said comment.\n\nAs for the comment text, maybe say\n\n * If the value was USER SET, then apply it at PGC_USERSET context\n * rather than the caller-supplied context, to prevent any more-restricted\n * GUCs being set. Also pass InvalidOid for the role, to ensure any\n * special privileges of the current user aren't applied.\n\nI hesitate to go look at the rest of this commit, but I guess somebody\nhad better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 15:10:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "On Wed, 19 Apr 2023 at 07:00, Alexander Lakhin <[email protected]> wrote:\n> please look at the similar list for v15+ (596b5af1d..HEAD).\n\nI've now pushed most of these but didn't include the following ones:\n\n> 3. BufFileOpenShared -> BufFileOpenFileSet // see dcac5e7ac\n\nMaybe I need to spend longer, but I just didn't believe the command\nthat claimed that \"BufFiles opened using BufFileOpenFileSet() are\nread-only by definition\". Looking at the code for that, it seems to\ndepend on if O_RDONLY is included in the mode flags.\n\n> 19. multidimensional-aware -> multidimension-aware // sync with gistbuild.c\n\nI didn't change this as I didn't think it was an improvement. I'd\nprobably have written \"multidimensionally aware\", but I didn't feel\nstrongly enough to want to change it.\n\nDavid\n\n\n",
"msg_date": "Fri, 21 Apr 2023 10:49:42 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "Hi David,\n\n21.04.2023 01:49, David Rowley wrote:\n> On Wed, 19 Apr 2023 at 07:00, Alexander Lakhin <[email protected]> wrote:\n>> please look at the similar list for v15+ (596b5af1d..HEAD).\n> I've now pushed most of these but didn't include the following ones:\n\nThank you!\n\n>> 3. BufFileOpenShared -> BufFileOpenFileSet // see dcac5e7ac\n> Maybe I need to spend longer, but I just didn't believe the command\n> that claimed that \"BufFiles opened using BufFileOpenFileSet() are\n> read-only by definition\". Looking at the code for that, it seems to\n> depend on if O_RDONLY is included in the mode flags.\n\nI've found the following explanation for that:\n1) As of dcac5e7ac~1 function ltsConcatWorkerTapes() contained:\n...\n file = BufFileOpenShared(fileset, filename, O_RDONLY);\n...\n * The only thing that currently prevents writing to the leader tape from\n * working is the fact that BufFiles opened using BufFileOpenShared() are\n * read-only by definition, but that could be changed if it seemed\n...\n\n2) A patch [1], which eventually resulted in c4649cce3, initially started\nwith the change:\n\n-ltsConcatWorkerTapes(LogicalTapeSet *lts, TapeShare *shared,\n...\n- * working is the fact that BufFiles opened using BufFileOpenShared() are\n...\n+LogicalTapeImport(LogicalTapeSet *lts, int worker, TapeShare *shared)\n...\n+ file = BufFileOpenShared(lts->fileset, filename, O_RDONLY);\n...\n+ * the fact that BufFiles opened using BufFileOpenShared() are read-only\n\n3) The commit dcac5e7ac (pushed 2021-08-30) renamed the function\nBufFileOpenShared() to BufFileOpenFileSet() and changed the comment:\n...\n * The only thing that currently prevents writing to the leader tape from\n- * working is the fact that BufFiles opened using BufFileOpenShared() are\n+ * working is the fact that BufFiles opened using BufFileOpenFileSet() are\n * read-only by definition, but that could be changed if it seemed\n...\n\n4) The commit c4649cce3 (pushed 2021-10-18) removed the comment referencing\nBufFileOpenFileSet() and added that somewhat distant comment\nreferencing BufFileOpenShared():\n$ git show c4649cce3 src/backend/utils/sort/logtape.c | grep 'BufFiles opened'\n- * working is the fact that BufFiles opened using BufFileOpenFileSet() are\n+ * the fact that BufFiles opened using BufFileOpenShared() are read-only\n\nSo I still believe that the \"BufFileOpenShared -> BufFileOpenFileSet\" change\nis correct and that comment can be read now as referencing to the line:\n file = BufFileOpenFileSet(<s->fileset->fs, filename, O_RDONLY, false);\nin LogicalTapeImport(). Although it could be improved, for sure.\n\nPlease look at the following two bunches for v14+ and v13+ (split to ease\nback-patching if needed). Having processed them, I've reached the state that\ncould be considered \"clean\" ([2], [3]); at least I don't see how to detect\nyet more errors of this class in dozens, so it's my last run for now (though I\nhave several entities left, which I couldn't find replacements for).\n\nv14+:\n1. AsyncCtl -> NotifyCtl // renamed in 5da14938f\n2. ATExecConstrRecurse -> ATExecAlterConstrRecurse\n3. attlocal -> attislocal\n4. before_shmem_access -> before_shmem_exit\n5. bodys -> bodies\n6. can_getnextslot_tidrange -> scan_getnextslot_tidrange\n7. DISABLE_ATOMICS -> HAVE_ATOMICS\n8. FETCH_H -> REWIND_SOURCE_H\n9. filed -> field\n10. find_minmax_aggs_walker -> can_minmax_aggs // renamed in 0a2bc5d61e\n\n11. GroupExprInfo -> GroupVarInfo //// a4d75c86b\n12. LD_DEAD -> LP_DEAD\n13. 
libpq-trace.c -> fe-trace.c\n14. lowerItem -> lowestItem //// bb437f995\n15. has_privs -> has_privs_of_role\n16. heap_hot_prune_opt -> heap_page_prune_opt\n17. MAX_CONVERSION_LENGTH -> MAX_CONVERSION_INPUT_LENGTH //// ea1b99a66\n18. MAX_FLUSH_BUFFERS -> MAX_WRITEALL_BUFFERS // renamed in dee663f78\n19. myscheam -> myschema // doc/ -- maybe should be backpatched\n20. pgbestat_beinit -> pgstat_beinit\n\n21. pgWALUsage -> pgWalUsage\n22. point-in-time-recovery -> point-in-time recovery\n23. PQnotify -> PGnotify\n24. QUERYJUBLE_H -> QUERYJUMBLE_H\n25. rd_partdesc_nodetach -> rd_partdesc_nodetached\n26. ReadNewTransactionid -> GetNewTransactionId\n27. RelationBuildDescr -> RelationBuildDesc\n28. SnapBuildCommittedTxn -> SnapBuildCommitTxn // see DecodeCommit()\n29. subscription_rel -> pg_subscription_rel\n30. tap_rep -> tab_rep\n\n31. total_heap_blks -> heap_blks_total\n32. tuple_cids -> tuplecids\n33. WatchLatch -> WaitLatch\n34. WriteAll -> SimpleLruWriteAll\n35. PageIsPrunable -- remove // that define and the PageIsPrunable() check above were removed in dc7420c2c\n\nCandidates for removal:\nBARRIER_SHOULD_CHECK //unused since a3ed4d1ef\nEXE_EXT // unused since f06b1c598\nget_toast_for // unused since 860593ec3\nSizeOfCommitTsSet // unused since 08aa89b32\n\nv13+:\n1. agg_init_trans_check -> agg_trans\n2. agg_strict_trans_check -> agg_trans\n3. amopclassopts -> amoptsprocnum //// since 911e70207\n4. CommitTSBuffer -> CommitTsBuffer // the inconsistency exists since 5da14938f; maybe this change should be backpatched\n5. gist_intbig_ops -> gist__intbig_ops\n6. gist_int_ops -> gist__int_ops\n7. laftleft -> lastleft\n8. lc_message -> locale_name // in accordance with the search_locale_enum() description\n9. leftype -> lefttype\n10. mksafefunc -> mkfunc // see 1f474d299\n\n11. openSegment -> segment_open // in accordance with the WALRead() description\n12. parse_util.c -> parse_utilcmd.c\n13. process_innerer_partition -> process_inner_partition\n14. read_spilled_tuple -> hashagg_batch_read\n15. SortGroupNode -> SortGroupClause\n16. SWITCH_WAL -> XLOG_SWITCH\n17. tts_attr -> ttc_attr\n18. tts_oldvalues -> ttc_oldvalues\n19. tts_oldisnull -> ttc_oldisnull\n20. tts_rel -> ttc_rel\n\n21. taget -> target\n22. WALSnd -> WalSnd\n23. XLogRoutine -> XLogReaderRoutine\n\nCandidates for removal:\nendterm // see 60c90c16c -- Use xreflabel attributes instead of endterm attributes ...\npackage_tarname // not used since introduction in 1933ae629\nmy $clearpass = \"FooBaR1\"; // unreferenced since b846091fd\n\n[1] https://www.postgresql.org/message-id/91284957-3cb2-944e-5f14-5c2ff86b49fa%40iki.fi\n(0001-Refactor-LogicalTapeSet-LogicalTape-interface.patch)\n\n[2] https://www.postgresql.org/message-id/flat/5da8e325-c665-da95-21e0-c8a99ea61fbf%40gmail.com\n[3] https://www.postgresql.org/message-id/flat/CALDaNm0ni%2BGAOe4%2BfbXiOxNrVudajMYmhJFtXGX-zBPoN8ixhw%40mail.gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Fri, 21 Apr 2023 12:00:00 +0300",
"msg_from": "Alexander Lakhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 12:00:00PM +0300, Alexander Lakhin wrote:\n> Please look at the following two bunches for v14+ and v13+ (split to ease\n> back-patching if needed). Having processed them, I've reached the state that\n> could be considered \"clean\" ([2], [3]); at least I don't see how to detect\n> yet more errors of this class in dozens, so it's my last run for now (though I\n> have several entities left, which I couldn't find replacements for).\n\nThis was hanging around, and I had some time, so I have looked at the\nwhole. One of the only two user-visible change was in the docs for\npg_amcheck, so I have applied that first as of 6fd8ae6 and backpatched\nit down to 14.\n\nNow, for the remaining 59..\n\n> 1. agg_init_trans_check -> agg_trans\n> 2. agg_strict_trans_check -> agg_trans\n\n /*\n * pergroup = &aggstate->all_pergroups\n- * [op->d.agg_strict_trans_check.setoff]\n- * [op->d.agg_init_trans_check.transno];\n+ * [op->d.agg_trans.setoff]\n+ * [op->d.agg_trans.transno];\n */\nHonestly, while incorrect, I have no idea what this comment means ;)\n\n> 4. CommitTSBuffer -> CommitTsBuffer // the inconsistency exists since 5da14938f; maybe this change should be backpatched\n\nYes, we'd better backpatch that. I agree that it seems more sensible\nhere to switch the compiled value rather than what the docs have been\nusing for years. Perhaps somebody has a different opinion?\n\nThe others were OK and in line with the discussion of upthread, so\napplied.\n--\nMichael",
"msg_date": "Tue, 2 May 2023 12:26:31 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
},
{
"msg_contents": "On Tue, May 02, 2023 at 12:26:31PM +0900, Michael Paquier wrote:\n> On Fri, Apr 21, 2023 at 12:00:00PM +0300, Alexander Lakhin wrote:\n>> 4. CommitTSBuffer -> CommitTsBuffer // the inconsistency exists since 5da14938f; maybe this change should be backpatched\n> \n> Yes, we'd better backpatch that. I agree that it seems more sensible\n> here to switch the compiled value rather than what the docs have been\n> using for years. Perhaps somebody has a different opinion?\n\nHearing nothing, I have now applied this part down to 13, on time for\nthe next minor release.\n--\nMichael",
"msg_date": "Fri, 5 May 2023 21:30:11 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos and inconsistencies for v16"
}
] |
[
{
"msg_contents": "Recently, I ran into a problem, InvokeObjectPostAlterHook was implemented for sepgsql,\r\nsepgsql use it to determine whether to check permissions during certain operations.\r\nBut InvokeObjectPostAlterHook doesn't handle all of the alter's behavior, at least the table is not controlled. e.g., ALTER TABLE... ENABLE/DISABLE ROW LEVEL SECURITY,ALTER TABLE ... DISABLE TRIGGER, GRANT and REVOKE and so on.\r\nWhether InvokeObjectPostAlterHook is not fully controlled? it's a bug?\r\n\r\n\r\n\r\n\r\n发自我的iPhone\nRecently, I ran into a problem, InvokeObjectPostAlterHook was implemented for sepgsql,sepgsql use it to determine whether to check permissions during certain operations.But InvokeObjectPostAlterHook doesn't handle all of the alter's behavior, at least the table is not controlled. e.g., ALTER TABLE... ENABLE/DISABLE ROW LEVEL SECURITY,ALTER TABLE ... DISABLE TRIGGER, GRANT and REVOKE and so on.Whether InvokeObjectPostAlterHook is not fully controlled? it's a bug?发自我的iPhone",
"msg_date": "Tue, 18 Apr 2023 09:51:30 +0800",
"msg_from": "\"=?utf-8?B?IExlZ3MgTWFuc2lvbg==?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "A Question about InvokeObjectPostAlterHook"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 09:51:30AM +0800, Legs Mansion wrote:\n> Recently, I ran into a problem, InvokeObjectPostAlterHook was\n> implemented for sepgsql, sepgsql use it to determine whether to\n> check permissions during certain operations. But\n> InvokeObjectPostAlterHook doesn't handle all of the alter's\n> behavior, at least the table is not controlled. e.g., ALTER \n> TABLE... ENABLE/DISABLE ROW LEVEL SECURITY,ALTER TABLE ... DISABLE\n> TRIGGER, GRANT and REVOKE and so on. \n> Whether InvokeObjectPostAlterHook is not fully controlled? it's\n> a bug? \n\nYes, tablecmds.c has some holes and these are added when there is a\nask for it, as far as I recall. In some cases, these locations can be\ntricky to add, so usually they require an independent analysis. For\nexample, EnableDisableTrigger() has one AOT for the trigger itself,\nbut not for the relation changed in tablecmds.c, as you say, anyway we\nshould be careful with cross-dependencies.\n\nNote that 90efa2f has made the tests for OATs much easier, and there\nis no need to rely only on sepgsql for that. (Even if test_oat_hooks\nhas been having some stability issues with namespace lookups because\nof the position on the namespace search hook.)\n\nAlso, the additions of InvokeObjectPostAlterHook() are historically\nconservative because they create behavior changes in stable branches,\nmeaning no backpatch. See a995b37 or 7b56584 as past examples, for\nexample.\n\nNote that the development of PostgreSQL 16 has just finished, so now\nmay not be the best moment to add these extra AOT calls, but these\ncould be added in 17~ for sure at the beginning of July once the next\ndevelopment cycle begins.\n\nAttached would be what I think would be required to add OATs for RLS,\ntriggers and rules, for example. There are much more of these at\nquick glance, still that's one step in providing more checks. Perhaps\nyou'd like to expand this patch with more ALTER TABLE subcommands\ncovered?\n--\nMichael",
"msg_date": "Tue, 18 Apr 2023 13:34:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A Question about InvokeObjectPostAlterHook"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 01:34:00PM +0900, Michael Paquier wrote:\n> Note that the development of PostgreSQL 16 has just finished, so now\n> may not be the best moment to add these extra AOT calls, but these\n> could be added in 17~ for sure at the beginning of July once the next\n> development cycle begins.\n\nThe OAT hooks are added in ALTER TABLE for the following subcommands:\n- { ENABLE | DISABLE | [NO] FORCE } ROW LEVEL SECURITY\n- { ENABLE | DISABLE } TRIGGER\n- { ENABLE | DISABLE } RULE\n\n> Attached would be what I think would be required to add OATs for RLS,\n> triggers and rules, for example. There are much more of these at\n> quick glance, still that's one step in providing more checks. Perhaps\n> you'd like to expand this patch with more ALTER TABLE subcommands\n> covered?\n\nNow that we are at the middle of the development cycle of 17~, it is\ntime to come back to this one (it was registered in the CF, but I did\nnot come back to it). Would there be any objections if I apply this\npatch with its tests? This would cover most of the ground requested\nby Legs at the start of this thread.\n\n(The patch had one diff because of a namespace lookup not happening\nanymore, so rebased.)\n--\nMichael",
"msg_date": "Tue, 15 Aug 2023 15:48:23 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A Question about InvokeObjectPostAlterHook"
}
] |
[
{
"msg_contents": "Attached patch removes xl_heap_lock_updated/XLOG_HEAP2_LOCK_UPDATED,\nwhich allows us to throw out about a hundred lines of duplicative\n(though hairy) code, net total. The patch also reclaims some Heap2\ninfo bit space, which seems quite useful.\n\nThe patch teaches heap_lock_updated_tuple_rec() to make use of the\ngeneric xl_heap_lock/XLOG_HEAP_LOCK record type, rather than using the\nnow-removed custom record type. My guess is that commit 0ac5ad5134\n(where xl_heap_lock_updated originated) simply missed the opportunity\nto consolidate the two records into one, perhaps because the patch\nevolved a lot during development. Note that xl_heap_lock is already\nused by several heapam routines besides heap_lock_tuple (e.g.\nheap_update uses it).\n\nTesting with wal_consistency_checking did not demonstrate any problems\nwith the patch. It's not completely trivial, though. It seems that\nthere are steps in the REDO routines that shouldn't be performed in\nthe locked-updated-tuple case -- I had to invent XLH_LOCK_UPDATED to\ndeal with the issue. There may be a better approach there, but I\nhaven't thought about it in enough detail to feel confident either\nway.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 17 Apr 2023 20:19:30 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Do we really need two xl_heap_lock records?"
}
] |
[
{
"msg_contents": "Over on [1], Peter mentions that we might want to consider putting the\nVACUUM options into some order that's better than the apparent random\norder that they're currently in.\n\nVACUUM is certainly one command that's grown a fairly good number of\noptions over the years and it appears we've not given much\nconsideration to what order to put those in in the documentation.\n\nIt's not just VACUUM that has this issue. I see 6 commands using the\nfollowing text:\n\n$ git grep \"option</replaceable> can be one of\"\nsrc/sgml/ref/analyze.sgml: ...\nsrc/sgml/ref/cluster.sgml: ...\nsrc/sgml/ref/copy.sgml: ...\nsrc/sgml/ref/explain.sgml: ...\nsrc/sgml/ref/reindex.sgml: ...\nsrc/sgml/ref/vacuum.sgml: ...\n\n(maybe there's more we should consider adjusting?)\n\nLikely if we do opt to put these options in a more well-defined order,\nwe should apply that to at least the 6 commands listed above.\n\nFor the case of reindex.sgml, I do see that the existing parameter\norder lists INDEX | TABLE | SCHEMA | DATABASE | SYSTEM first which is\nthe target of the reindex. I wondered if that was worth keeping. I'm\njust thinking that since all of these are under the \"Parameters\"\nheading that we should class them all as equals and just make the\norder alphabetical. I feel that if we don't do that then the order to\nadd any new parameters is just not going to be obvious and we'll end\nup with things getting out of order again quite quickly.\n\nI've attached a patch which makes the changes as I propose them.\n\nDavid\n\n[1] https://postgr.es/m/[email protected]",
"msg_date": "Tue, 18 Apr 2023 17:44:39 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 10:45 PM David Rowley <[email protected]> wrote:\n> For the case of reindex.sgml, I do see that the existing parameter\n> order lists INDEX | TABLE | SCHEMA | DATABASE | SYSTEM first which is\n> the target of the reindex. I wondered if that was worth keeping. I'm\n> just thinking that since all of these are under the \"Parameters\"\n> heading that we should class them all as equals and just make the\n> order alphabetical. I feel that if we don't do that then the order to\n> add any new parameters is just not going to be obvious and we'll end\n> up with things getting out of order again quite quickly.\n\nI don't think that alphabetical order makes much sense. Surely some\nparameters are more important than others. Surely there is some kind\nof natural grouping that makes somewhat more sense than alphabetical\norder.\n\nTake the VACUUM command. Right now FULL, FREEZE, and VERBOSE all come\nfirst. Those options are approximately the most important options --\nespecially VERBOSE. But your patch places VERBOSE dead last.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 17 Apr 2023 23:53:17 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 18:53, Peter Geoghegan <[email protected]> wrote:\n> Take the VACUUM command. Right now FULL, FREEZE, and VERBOSE all come\n> first. Those options are approximately the most important options --\n> especially VERBOSE. But your patch places VERBOSE dead last.\n\nhmm, how can we verify that the options are kept in order of\nimportance? What guidance can we provide to developers adding options\nabout where they should slot in the new option to the docs?\n\n\"Importance order\" just seems horribly subjective to me. I'd be\ninterested to know if you could tell me if SKIP_LOCKED has more\nimportance than INDEX_CLEANUP, for example. If you can, it would seem\nlike trying to say apples are more important than oranges, or\nvice-versa.\n\nDavid\n\n\n",
"msg_date": "Wed, 19 Apr 2023 11:17:52 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 4:18 PM David Rowley <[email protected]> wrote:\n> \"Importance order\" just seems horribly subjective to me.\n\nAlphabetical order seems objectively bad. At least to me.\n\n> I'd be interested to know if you could tell me if SKIP_LOCKED has more\n> importance than INDEX_CLEANUP, for example. If you can, it would seem\n> like trying to say apples are more important than oranges, or\n> vice-versa.\n\nI don't accept your premise that the only thing that matters (or the\nmost important thing) is adherence to some unambiguous and consistent\norder.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 Apr 2023 16:30:06 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 4:30 PM Peter Geoghegan <[email protected]> wrote:\n> > I'd be interested to know if you could tell me if SKIP_LOCKED has more\n> > importance than INDEX_CLEANUP, for example. If you can, it would seem\n> > like trying to say apples are more important than oranges, or\n> > vice-versa.\n>\n> I don't accept your premise that the only thing that matters (or the\n> most important thing) is adherence to some unambiguous and consistent\n> order.\n\nIn the case of VACUUM, the current devel order is:\n\nFULL, FREEZE, VERBOSE, ANALYZE, DISABLE_PAGE_SKIPPING, SKIP_LOCKED,\nINDEX_CLEANUP, PROCESS_MAIN, PROCESS_TOAST,\nTRUNCATE, PARALLEL, SKIP_DATABASE_STATS, ONLY_DATABASE_STATS, BUFFER_USAGE_LIMIT\n\nI think that this order is far superior to alphabetical order, which\nis tantamount to random order. The first 4 items are indeed the really\nimportant ones to users, in my experience.\n\nI do have some minor quibbles beyond that, though. These are:\n\n* PARALLEL deserves to be at the start, maybe 4th or 5th overall.\n\n* DISABLE_PAGE_SKIPPING should be later, since it's really only a\ntesting option that probably never proved useful in production. In\nparticular, it has little business being before SKIP_LOCKED, which is\nmuch more important and relevant.\n\n* TRUNCATE and INDEX_CLEANUP are similar options, and ought to be side\nby side. I would put PROCESS_MAIN and PROCESS_TOAST after those two\nfor the same reason.\n\nWhile I'm certain that nobody will agree with me on every little\ndetail, I have to imagine that most would find my preferred ordering\nquite understandable and unsurprising, at a high level -- this is not\na hopelessly idiosyncratic ranking, that could just as easily have\nbeen generated by a PRNG. People may not easily agree that \"apples are\nmore important than oranges, or vice-versa\", but what does it matter?\nI've really only put each option into buckets of items with *roughly*\nthe same importance. All of the details beyond that don't matter to\nme, at all.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 Apr 2023 18:05:15 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On 2023-Apr-18, Peter Geoghegan wrote:\n\n> While I'm certain that nobody will agree with me on every little\n> detail, I have to imagine that most would find my preferred ordering\n> quite understandable and unsurprising, at a high level -- this is not\n> a hopelessly idiosyncratic ranking, that could just as easily have\n> been generated by a PRNG. People may not easily agree that \"apples are\n> more important than oranges, or vice-versa\", but what does it matter?\n> I've really only put each option into buckets of items with *roughly*\n> the same importance. All of the details beyond that don't matter to\n> me, at all.\n\nI agree with you that roughly bucketing items is a good approach.\nWithin each bucket we can then sort alphabetically.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:47:47 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On 19.04.23 01:30, Peter Geoghegan wrote:\n>> I'd be interested to know if you could tell me if SKIP_LOCKED has more\n>> importance than INDEX_CLEANUP, for example. If you can, it would seem\n>> like trying to say apples are more important than oranges, or\n>> vice-versa.\n> \n> I don't accept your premise that the only thing that matters (or the\n> most important thing) is adherence to some unambiguous and consistent\n> order.\n\nMy thinking is, if I want to look up FREEZE on the VACUUM man page, I \nwould welcome some easily identifiable way of locating it. At that \npoint, I don't know whether FREEZE is important or what kind of option \nit is. For reference material, easy lookup should be a priority. For a \nnarrative chapter on VACUUM, you can introduce the options in any other \nsuitable order.\n\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:52:06 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "> On 19 Apr 2023, at 10:52, Peter Eisentraut <[email protected]> wrote:\n\n> For reference material, easy lookup should be a priority.\n\n+1. Alphabetical ordering is consistent with POLA.\n\n> For a narrative chapter on VACUUM, you can introduce the options in any other\n> suitable order.\n\n\nI would even phrase it such that in this case one *should* present the options\nin the order most suitable to educate the reader.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 19 Apr 2023 11:07:04 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 3:04 AM Alvaro Herrera <[email protected]> wrote:\n> > While I'm certain that nobody will agree with me on every little\n> > detail, I have to imagine that most would find my preferred ordering\n> > quite understandable and unsurprising, at a high level -- this is not\n> > a hopelessly idiosyncratic ranking, that could just as easily have\n> > been generated by a PRNG. People may not easily agree that \"apples are\n> > more important than oranges, or vice-versa\", but what does it matter?\n> > I've really only put each option into buckets of items with *roughly*\n> > the same importance. All of the details beyond that don't matter to\n> > me, at all.\n>\n> I agree with you that roughly bucketing items is a good approach.\n> Within each bucket we can then sort alphabetically.\n\nI think of these buckets as working at a logarithmic scale. The FULL,\nFREEZE, VERBOSE, and ANALYZE options are multiple orders of magnitude\nmore important than most of the other options, and maybe one order of\nmagnitude more important than the PARALLEL, TRUNCATE, and\nINDEX_CLEANUP options. With differences that big, you have a structure\nthat generalizes across all users quite well. This doesn't seem\nparticularly subjective.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Apr 2023 11:38:48 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 2:39 PM Peter Geoghegan <[email protected]> wrote:\n>\n> On Wed, Apr 19, 2023 at 3:04 AM Alvaro Herrera <[email protected]> wrote:\n> > > While I'm certain that nobody will agree with me on every little\n> > > detail, I have to imagine that most would find my preferred ordering\n> > > quite understandable and unsurprising, at a high level -- this is not\n> > > a hopelessly idiosyncratic ranking, that could just as easily have\n> > > been generated by a PRNG. People may not easily agree that \"apples are\n> > > more important than oranges, or vice-versa\", but what does it matter?\n> > > I've really only put each option into buckets of items with *roughly*\n> > > the same importance. All of the details beyond that don't matter to\n> > > me, at all.\n> >\n> > I agree with you that roughly bucketing items is a good approach.\n> > Within each bucket we can then sort alphabetically.\n>\n> I think of these buckets as working at a logarithmic scale. The FULL,\n> FREEZE, VERBOSE, and ANALYZE options are multiple orders of magnitude\n> more important than most of the other options, and maybe one order of\n> magnitude more important than the PARALLEL, TRUNCATE, and\n> INDEX_CLEANUP options. With differences that big, you have a structure\n> that generalizes across all users quite well. This doesn't seem\n> particularly subjective.\n\nI actually favor query/command order followed by alphabetical order for\nmost of the commands David included in his patch.\n\nOf course the parameter argument types, like boolean and integer, should\nbe grouped together separate from the main parameters. David fit this\ninto the alphabetical paradigm by doing uppercase alphabetical followed\nby lowercase alphabetical. There are some specific cases where I think\nthis isn't working quite as intended in his patch. I've called those out\nin my command-by-command code review below.\n\nI actually think we should consider having a single location which\ndefines all argument types for all SQL command parameters. Then we\nwouldn't need to define them for each command. We could simply link to\nthe definition from the synopsis. That would clean up these lists quite\na bit. Perhaps there is some variation from command to command in the\nactual definitions, though (I haven't checked). I would be happy to try\nand write this patch if folks are interested in the idea.\n\nAs for alphabetical ordering vs importance ordering: while I do think\nthat if a user does not know what parameter they are looking for, an\nalphabetical ordering is unhelpful, I also think the primary issue with\ngrouping them by \"importance\" is that it is difficult to maintain. Doing\nso requires a discussion of importance for every new option added. That\nseems like an annoying bit of overhead to give ourselves. Having a\nsubjective ordering seems worse than having a rule-based ordering. I\nthink command/query order followed by alphabetical order is a reasonable\nrule-based ordering.\n\nI went and took a look at some of the other SQL commands' documentation\nand noticed that they are all pretty different (for good reason).\n\nALTER ROLE parameters [1], for example, have a seemingly meaningless\norder except for the fact that there are pairs of parameters. SUPERUSER\nand NOSUPERUSER, INHERIT and NOINHERIT, etc. 
It might be a bit odd for\nthese to follow an absolute alphabetical ordering rule.\n\nMany of the CREATE type SQL commands don't really have this problem\nbecause there are only one or two options within each section of the\ncommand and otherwise the order the parameters must appear in the query\ndictates their order [2].\n\nOthers, like EXPLAIN [3], for example, obviously benefit from an\nalphabetical ordering of parameters -- which David has done in the\npatch. I think most of the commands that David has patched here are\ngood candidates for alphabetical ordering.\n\nBelow I've reviewed each command in the patch specifically:\n\nFor ANALYZE, I think this looks good in its new alphabetized form.\nThough table_name is alphabetically last for the lower case parameters\nand thus doesn't pose an issue, if it were alphabetically earlier, I\nwould still favor putting it at the end to maintain a query order then\nalphabetical order ordering.\n\nFor CLUSTER, I think alphabetical order isn't working well. I think we\nshould maintain query order followed by alphabetical order. Even though\ntable_name is optional, in the event that it is included, it would\nprecede index_name. So, perhaps the order should be VERBOSE, boolean,\ntable_name, index_name -- which pretty much cancels out alphabetizing.\n\nFor COPY, I think the new ordering of COPY has some issues. table_name\nis no longer first even though for COPY FROM it is required before the\nother parameters. I think this is confusing. Perhaps the options should\nbe after the other parameters are defined. I think having the options\nalphabetized at the end of the others would be nice. So, my suggested\nordering is table_name, column_name, filename, PROGRAM, STDIN, STDOUT,\nthen the WITH options alphabetically, WHERE, and then the parameter\nargument types alphabetically. The last one (where to put the parameter\nargument types) I'm not so sure about.\n\nEXPLAIN looks good to me as is.\n\nFor REINDEX, I would again suggest a query ordering followed by\nalphabetical ordering. CONCURRENTLY, TABLESPACE, VERBOSE, DATABASE,\nINDEX, SCHEMA, SYSTEM, TABLE, name, then all of the parameter argument\ntypes alphabetically. (Also, you can put CONCURRENTLY in two different\nplaces in the REINDEX command?)\n\nFor VACUUM, I'd perhaps suggest the options in alphabetical order\nfollowed by table_name and then column_name and then putting the\nparameter argument types at the end alphabetically.\n\nOf course, we could decide VACUUM is special and group its options by\nimportance because this is especially helpful for users. I think that\nthere are other SQL commands whose options' importance is not\nparticularly worth debating.\n\nI do think we should consider deprecating and dropping documentation of\nthe options that are supported without parentheses (relevant to commands\nlike ANALYZE, CLUSTER, VACUUM, and others). It is fine if we keep the\ncode to make ANALYZE VERBOSE work, but I don't think it is useful to\nkeep that documented. That is not a concern of this patch, however.\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/devel/sql-alterrole.html\n[2] https://www.postgresql.org/docs/devel/sql-createindex.html\n[3] https://www.postgresql.org/docs/devel/sql-explain.html\n\n\n",
"msg_date": "Wed, 19 Apr 2023 17:33:47 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "Melanie Plageman <[email protected]> writes:\n> I do think we should consider deprecating and dropping documentation of\n> the options that are supported without parentheses (relevant to commands\n> like ANALYZE, CLUSTER, VACUUM, and others). It is fine if we keep the\n> code to make ANALYZE VERBOSE work, but I don't think it is useful to\n> keep that documented. That is not a concern of this patch, however.\n\nI doubt it's a great idea to de-document syntax that's still allowed\nand will still be widely used for years to come; that just promotes\nconfusion. However, we could do something similar to what we did\nfor COPY years ago, and move the un-parenthesized syntax to the\n\"Compatibility\" section.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Apr 2023 17:45:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 2:33 PM Melanie Plageman\n<[email protected]> wrote:\n> As for alphabetical ordering vs importance ordering: while I do think\n> that if a user does not know what parameter they are looking for, an\n> alphabetical ordering is unhelpful, I also think the primary issue with\n> grouping them by \"importance\" is that it is difficult to maintain. Doing\n> so requires a discussion of importance for every new option added.\n\nNot really. It's a matter that requires some amount of individual\njudgement, in some cases. It may require effort, but I think that\nthat's likely to be worth it.\n\nI won't be the one that quibbles over every little thing.\n\n> For VACUUM, I'd perhaps suggest the options in alphabetical order\n> followed by table_name and then column_name and then putting the\n> parameter argument types at the end alphabetically.\n>\n> Of course, we could decide VACUUM is special and group its options by\n> importance because this is especially helpful for users. I think that\n> there are other SQL commands whose options' importance is not\n> particularly worth debating.\n\nThat's very likely true -- it may be that most individual commands\nreally wouldn't be any worse off if they just used a standard\nalphabetical order. I agree that consistency can be a virtue. But it's\nnot the highest virtue. There will be a number of important\nexceptions, which will have outsized impact. VACUUM, ANALYZE, maybe\nCREATE INDEX. So if there is going to be a new standard, there should\nalso be significant wiggle-room. Kind of like with the guidelines for\nrmgr desc authors discussion.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Apr 2023 16:46:11 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 05:33:47PM -0400, Melanie Plageman wrote:\n> On Wed, Apr 19, 2023 at 2:39 PM Peter Geoghegan <[email protected]> wrote:\n> >\n> > On Wed, Apr 19, 2023 at 3:04 AM Alvaro Herrera <[email protected]> wrote:\n> > > > While I'm certain that nobody will agree with me on every little\n> > > > detail, I have to imagine that most would find my preferred ordering\n> > > > quite understandable and unsurprising, at a high level -- this is not\n> > > > a hopelessly idiosyncratic ranking, that could just as easily have\n> > > > been generated by a PRNG. People may not easily agree that \"apples are\n> > > > more important than oranges, or vice-versa\", but what does it matter?\n> > > > I've really only put each option into buckets of items with *roughly*\n> > > > the same importance. All of the details beyond that don't matter to\n> > > > me, at all.\n> > >\n> > > I agree with you that roughly bucketing items is a good approach.\n> > > Within each bucket we can then sort alphabetically.\n> >\n> > I think of these buckets as working at a logarithmic scale. The FULL,\n> > FREEZE, VERBOSE, and ANALYZE options are multiple orders of magnitude\n> > more important than most of the other options, and maybe one order of\n> > magnitude more important than the PARALLEL, TRUNCATE, and\n> > INDEX_CLEANUP options. With differences that big, you have a structure\n> > that generalizes across all users quite well. This doesn't seem\n> > particularly subjective.\n> \n> I actually favor query/command order followed by alphabetical order for\n> most of the commands David included in his patch.\n> \n> Of course the parameter argument types, like boolean and integer, should\n> be grouped together separate from the main parameters. David fit this\n> into the alphabetical paradigm by doing uppercase alphabetical followed\n> by lowercase alphabetical. There are some specific cases where I think\n> this isn't working quite as intended in his patch. I've called those out\n> in my command-by-command code review below.\n> \n> I actually think we should consider having a single location which\n> defines all argument types for all SQL command parameters. Then we\n> wouldn't need to define them for each command. We could simply link to\n> the definition from the synopsis. That would clean up these lists quite\n> a bit. Perhaps there is some variation from command to command in the\n> actual definitions, though (I haven't checked). I would be happy to try\n> and write this patch if folks are interested in the idea.\n\nI looked into this and it isn't a good idea. Out of the 183 SQL\ncommands, really only ANALYZE, VACUUM, COPY, CLUSTER, EXPLAIN, and\nREINDEX have parameter argument types that are context-independent. And\nout of those, boolean is the only type shared by all. VACUUM is the only\none with more than one parameter argument \"type\". So, it is basically\njust a bad idea. Oh well...\n\n- Melanie\n\n\n",
"msg_date": "Thu, 20 Apr 2023 08:37:52 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "On Wed, 19 Apr 2023 at 22:04, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Apr-18, Peter Geoghegan wrote:\n>\n> > While I'm certain that nobody will agree with me on every little\n> > detail, I have to imagine that most would find my preferred ordering\n> > quite understandable and unsurprising, at a high level -- this is not\n> > a hopelessly idiosyncratic ranking, that could just as easily have\n> > been generated by a PRNG. People may not easily agree that \"apples are\n> > more important than oranges, or vice-versa\", but what does it matter?\n> > I've really only put each option into buckets of items with *roughly*\n> > the same importance. All of the details beyond that don't matter to\n> > me, at all.\n>\n> I agree with you that roughly bucketing items is a good approach.\n> Within each bucket we can then sort alphabetically.\n\nIf these \"buckets\" were subcategories, then it might be ok. I see \"man\ngrep\" categorises the command line options and then sorts\nalphabetically within the category. If we could come up with a way of\ncategorising the options then this would satisfy what Melanie\nmentioned about having the argument types listed separately. However,\nI'm really not sure which categories we could have. I really don't\nhave any concrete ideas here, but I'll attempt to at least start\nsomething:\n\nBehavioral:\nANALYZE\nDISABLE_PAGE_SKIPPING\nFREEZE\nFULL\nINDEX_CLEANUP\nONLY_DATABASE_STATS\nPROCESS_MAIN\nPROCESS_TOAST\nSKIP_DATABASE_STATS\nSKIP_LOCKED\nTRUNCATE\n\nResource Usage:\nBUFFER_USAGE_LIMIT\nPARALLEL\n\nInformational:\nVERBOSE\n\nOption Parameters:\nboolean\ncolumn_name\ninteger\nsize\ntable_name\n\nI'm just not sure if we have enough options to have a need to\ncategorise them. Also, going by the categories I attempted to come up\nwith, it just feels like \"Behavioral\" contains too many and\n\"Informational\" is likely only ever going to contain VERBOSE. So I'm\nnot very happy with them.\n\nI'm not really feeling excited enough about this to even come up with\na draft patch. I thought I'd send out this anyway to see if anyone can\nthink of anything better.\n\nFWIW, vacuumdb --help has its options in alphabetical order using the\nabbreviated form of the option.\n\nDavid\n\n\n",
"msg_date": "Fri, 21 Apr 2023 00:40:48 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
},
{
"msg_contents": "> On 20 Apr 2023, at 14:40, David Rowley <[email protected]> wrote:\n\n> I see \"man grep\" categorises the command line options and then sorts\n> alphabetically within the category.\n\n\nOn FreeBSD and macOS \"man grep\" lists all options alphabetically.\n\n> FWIW, vacuumdb --help has its options in alphabetical order using the\n> abbreviated form of the option.\n\nIt does (as most of our binaries do) group \"Connection options\" separately\nthough, and in initdb --help and pg_dump --help we have other groupings as\nwell.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 20 Apr 2023 14:57:46 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should we put command options in alphabetical order in the doc?"
}
] |
[
{
"msg_contents": "In thread [1] which discussed 'Right Anti Join', Tom once mentioned\n'Right Semi Join'. After a preliminary investigation I think it is\nbeneficial and can be implemented with very short change. With 'Right\nSemi Join', what we want to do is to just have the first match for each\ninner tuple. For HashJoin, after scanning the hash bucket for matches\nto current outer, we just need to check whether the inner tuple has been\nset match and skip it if so. For MergeJoin, we can do it by avoiding\nrestoring inner scan to the marked tuple in EXEC_MJ_TESTOUTER, in the\ncase when new outer tuple == marked tuple.\n\nAs that thread is already too long, fork a new thread and attach a patch\nused for discussion. The patch implements 'Right Semi Join' for\nHashJoin.\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4_eChX1bN%3Dvj0Uzg_7iz9Uivan%2BWjjor-X87L-V27A%2Brw%40mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Tue, 18 Apr 2023 17:07:34 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 5:07 PM Richard Guo <[email protected]> wrote:\n\n> In thread [1] which discussed 'Right Anti Join', Tom once mentioned\n> 'Right Semi Join'. After a preliminary investigation I think it is\n> beneficial and can be implemented with very short change. With 'Right\n> Semi Join', what we want to do is to just have the first match for each\n> inner tuple. For HashJoin, after scanning the hash bucket for matches\n> to current outer, we just need to check whether the inner tuple has been\n> set match and skip it if so. For MergeJoin, we can do it by avoiding\n> restoring inner scan to the marked tuple in EXEC_MJ_TESTOUTER, in the\n> case when new outer tuple == marked tuple.\n>\n> As that thread is already too long, fork a new thread and attach a patch\n> used for discussion. The patch implements 'Right Semi Join' for\n> HashJoin.\n>\n\nThe cfbot reminds that this patch does not apply any more, so rebase it\nto v2.\n\nThanks\nRichard",
"msg_date": "Thu, 10 Aug 2023 15:24:28 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Thu, Aug 10, 2023 at 3:24 PM Richard Guo <[email protected]> wrote:\n\n> The cfbot reminds that this patch does not apply any more, so rebase it\n> to v2.\n>\n\nAttached is another rebase over the latest master. Any feedback is\nappreciated.\n\nThanks\nRichard",
"msg_date": "Wed, 1 Nov 2023 13:55:17 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Wed, 1 Nov 2023 at 11:25, Richard Guo <[email protected]> wrote:\n>\n>\n> On Thu, Aug 10, 2023 at 3:24 PM Richard Guo <[email protected]> wrote:\n>>\n>> The cfbot reminds that this patch does not apply any more, so rebase it\n>> to v2.\n>\n>\n> Attached is another rebase over the latest master. Any feedback is\n> appreciated.\n\nOne of the tests in CFBot has failed at [1] with:\n- Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n- Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\nr1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND EXISTS\n(SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3))) ORDER BY\nr1.\"C 1\" ASC NULLS LAST\n-(4 rows)\n+ Sort Key: t1.c1\n+ -> Foreign Scan\n+ Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n+ Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n+ Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5,\nr1.c6, r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND\nEXISTS (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3)))\n+(7 rows)\n\nMore details are available at [2].\n\n[1] - https://cirrus-ci.com/task/4868751326183424\n[2] - https://api.cirrus-ci.com/v1/artifact/task/4868751326183424/testrun/build/testrun/postgres_fdw/regress/regression.diffs\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 7 Jan 2024 12:33:00 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Sun, Jan 7, 2024 at 3:03 PM vignesh C <[email protected]> wrote:\n\n> One of the tests in CFBot has failed at [1] with:\n> - Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n> - Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\n> r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND EXISTS\n> (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n> ((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3))) ORDER BY\n> r1.\"C 1\" ASC NULLS LAST\n> -(4 rows)\n> + Sort Key: t1.c1\n> + -> Foreign Scan\n> + Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> + Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n> + Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5,\n> r1.c6, r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND\n> EXISTS (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n> ((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3)))\n> +(7 rows)\n\n\nThanks. I looked into it and have figured out why the plan differs.\nWith this patch the SEMI JOIN that is pushed down to the remote server\nis now implemented using JOIN_RIGHT_SEMI, whereas previously it was\nimplemented using JOIN_SEMI. Consequently, this leads to changes in the\ncosts of the paths: path with the sort pushed down to remote server, and\npath with the sort added atop the foreign join. And at last the latter\none wins by a slim margin.\n\nI think we can simply update the expected file to fix this plan diff, as\nattached.\n\nThanks\nRichard",
"msg_date": "Tue, 9 Jan 2024 18:48:59 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi vignesh C I saw this path has been passed (\nhttps://cirrus-ci.com/build/6109321080078336),can we push it?\n\nBest wish\n\nRichard Guo <[email protected]> 于2024年1月9日周二 18:49写道:\n\n>\n> On Sun, Jan 7, 2024 at 3:03 PM vignesh C <[email protected]> wrote:\n>\n>> One of the tests in CFBot has failed at [1] with:\n>> - Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n>> - Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\n>> r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND EXISTS\n>> (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n>> ((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3))) ORDER BY\n>> r1.\"C 1\" ASC NULLS LAST\n>> -(4 rows)\n>> + Sort Key: t1.c1\n>> + -> Foreign Scan\n>> + Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n>> + Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n>> + Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5,\n>> r1.c6, r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND\n>> EXISTS (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n>> ((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3)))\n>> +(7 rows)\n>\n>\n> Thanks. I looked into it and have figured out why the plan differs.\n> With this patch the SEMI JOIN that is pushed down to the remote server\n> is now implemented using JOIN_RIGHT_SEMI, whereas previously it was\n> implemented using JOIN_SEMI. Consequently, this leads to changes in the\n> costs of the paths: path with the sort pushed down to remote server, and\n> path with the sort added atop the foreign join. And at last the latter\n> one wins by a slim margin.\n>\n> I think we can simply update the expected file to fix this plan diff, as\n> attached.\n>\n> Thanks\n> Richard\n>\n\nHi vignesh C I saw this path has been passed (https://cirrus-ci.com/build/6109321080078336),can we push it?Best wishRichard Guo <[email protected]> 于2024年1月9日周二 18:49写道:On Sun, Jan 7, 2024 at 3:03 PM vignesh C <[email protected]> wrote:\nOne of the tests in CFBot has failed at [1] with:\n- Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n- Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\nr1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND EXISTS\n(SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3))) ORDER BY\nr1.\"C 1\" ASC NULLS LAST\n-(4 rows)\n+ Sort Key: t1.c1\n+ -> Foreign Scan\n+ Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n+ Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n+ Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5,\nr1.c6, r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND\nEXISTS (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n((date(r3.c5) = '1970-01-17'::date)) AND ((r3.c3 = r1.c3)))\n+(7 rows)Thanks. I looked into it and have figured out why the plan differs.With this patch the SEMI JOIN that is pushed down to the remote serveris now implemented using JOIN_RIGHT_SEMI, whereas previously it wasimplemented using JOIN_SEMI. Consequently, this leads to changes in thecosts of the paths: path with the sort pushed down to remote server, andpath with the sort added atop the foreign join. And at last the latterone wins by a slim margin.I think we can simply update the expected file to fix this plan diff, asattached.ThanksRichard",
"msg_date": "Mon, 22 Jan 2024 13:56:55 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Mon, 22 Jan 2024 at 11:27, wenhui qiu <[email protected]> wrote:\n>\n> Hi vignesh C I saw this path has been passed (https://cirrus-ci.com/build/6109321080078336),can we push it?\n\nIf you have found no comments from your review and testing, let's mark\nit as \"ready for committer\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 23 Jan 2024 08:26:25 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi vignesh C\n Many thanks, I have marked it to \"ready for committer\"\n\nBest wish\n\nvignesh C <[email protected]> 于2024年1月23日周二 10:56写道:\n\n> On Mon, 22 Jan 2024 at 11:27, wenhui qiu <[email protected]> wrote:\n> >\n> > Hi vignesh C I saw this path has been passed (\n> https://cirrus-ci.com/build/6109321080078336),can we push it?\n>\n> If you have found no comments from your review and testing, let's mark\n> it as \"ready for committer\".\n>\n> Regards,\n> Vignesh\n>\n\nHi vignesh C Many thanks, I have marked it to \"ready for committer\"Best wishvignesh C <[email protected]> 于2024年1月23日周二 10:56写道:On Mon, 22 Jan 2024 at 11:27, wenhui qiu <[email protected]> wrote:\n>\n> Hi vignesh C I saw this path has been passed (https://cirrus-ci.com/build/6109321080078336),can we push it?\n\nIf you have found no comments from your review and testing, let's mark\nit as \"ready for committer\".\n\nRegards,\nVignesh",
"msg_date": "Tue, 23 Jan 2024 16:13:46 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi! Thank you for your work on this subject.\n\nI have reviewed your patch and I think it is better to add an Assert for \nJOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to \nprevent the use of RIGHT_SEMI for these types of connections (NestedLoop \nand MergeJoin).\nMostly I'm suggesting this because of the set_join_pathlist_hook \nfunction, which is in the add_paths_to_joinrel function, which allows \nyou to create a custom node. What do you think?\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 30 Jan 2024 09:51:07 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi Alena Rybakina\n I saw this code snippet also disable mergejoin ,I think it same effect\n+ /*\n+ * For now we do not support RIGHT_SEMI join in mergejoin.\n+ */\n+ if (jointype == JOIN_RIGHT_SEMI)\n+ {\n+ *mergejoin_allowed = false;\n+ return NIL;\n+ }\n+\n\nRegards\n\nAlena Rybakina <[email protected]> 于2024年1月30日周二 14:51写道:\n\n> Hi! Thank you for your work on this subject.\n>\n> I have reviewed your patch and I think it is better to add an Assert for\n> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to\n> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop\n> and MergeJoin).\n> Mostly I'm suggesting this because of the set_join_pathlist_hook\n> function, which is in the add_paths_to_joinrel function, which allows\n> you to create a custom node. What do you think?\n>\n> --\n> Regards,\n> Alena Rybakina\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\nHi Alena Rybakina I saw this code snippet also disable mergejoin ,I think it same effect +\t/*+\t * For now we do not support RIGHT_SEMI join in mergejoin.+\t */+\tif (jointype == JOIN_RIGHT_SEMI)+\t{+\t\t*mergejoin_allowed = false;+\t\treturn NIL;+\t}+RegardsAlena Rybakina <[email protected]> 于2024年1月30日周二 14:51写道:Hi! Thank you for your work on this subject.\n\nI have reviewed your patch and I think it is better to add an Assert for \nJOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to \nprevent the use of RIGHT_SEMI for these types of connections (NestedLoop \nand MergeJoin).\nMostly I'm suggesting this because of the set_join_pathlist_hook \nfunction, which is in the add_paths_to_joinrel function, which allows \nyou to create a custom node. What do you think?\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 8 Feb 2024 13:50:17 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "HI Richard\n Now it is starting the last commitfest for v17, can you respond to\nAlena Rybakina points?\n\n\nRegards\n\nOn Thu, 8 Feb 2024 at 13:50, wenhui qiu <[email protected]> wrote:\n\n> Hi Alena Rybakina\n> I saw this code snippet also disable mergejoin ,I think it same effect\n> + /*\n> + * For now we do not support RIGHT_SEMI join in mergejoin.\n> + */\n> + if (jointype == JOIN_RIGHT_SEMI)\n> + {\n> + *mergejoin_allowed = false;\n> + return NIL;\n> + }\n> +\n>\n> Regards\n>\n> Alena Rybakina <[email protected]> 于2024年1月30日周二 14:51写道:\n>\n>> Hi! Thank you for your work on this subject.\n>>\n>> I have reviewed your patch and I think it is better to add an Assert for\n>> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to\n>> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop\n>> and MergeJoin).\n>> Mostly I'm suggesting this because of the set_join_pathlist_hook\n>> function, which is in the add_paths_to_joinrel function, which allows\n>> you to create a custom node. What do you think?\n>>\n>> --\n>> Regards,\n>> Alena Rybakina\n>> Postgres Professional: http://www.postgrespro.com\n>> The Russian Postgres Company\n>>\n>>\n\nHI Richard Now it is starting the last commitfest for v17, can you respond to Alena Rybakina points?RegardsOn Thu, 8 Feb 2024 at 13:50, wenhui qiu <[email protected]> wrote:Hi Alena Rybakina I saw this code snippet also disable mergejoin ,I think it same effect +\t/*+\t * For now we do not support RIGHT_SEMI join in mergejoin.+\t */+\tif (jointype == JOIN_RIGHT_SEMI)+\t{+\t\t*mergejoin_allowed = false;+\t\treturn NIL;+\t}+RegardsAlena Rybakina <[email protected]> 于2024年1月30日周二 14:51写道:Hi! Thank you for your work on this subject.\n\nI have reviewed your patch and I think it is better to add an Assert for \nJOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to \nprevent the use of RIGHT_SEMI for these types of connections (NestedLoop \nand MergeJoin).\nMostly I'm suggesting this because of the set_join_pathlist_hook \nfunction, which is in the add_paths_to_joinrel function, which allows \nyou to create a custom node. What do you think?\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 4 Mar 2024 10:33:00 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 10:33 AM wenhui qiu <[email protected]> wrote:\n\n> HI Richard\n> Now it is starting the last commitfest for v17, can you respond to\n> Alena Rybakina points?\n>\n\nThanks for reminding. Will do that soon.\n\nThanks\nRichard\n\nOn Mon, Mar 4, 2024 at 10:33 AM wenhui qiu <[email protected]> wrote:HI Richard Now it is starting the last commitfest for v17, can you respond to Alena Rybakina points?Thanks for reminding. Will do that soon.ThanksRichard",
"msg_date": "Tue, 5 Mar 2024 10:33:23 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina <[email protected]>\nwrote:\n\n> I have reviewed your patch and I think it is better to add an Assert for\n> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to\n> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop\n> and MergeJoin).\n\n\nHmm, I don't see why this is necessary. The planner should already\nguarantee that we won't have nestloops/mergejoins with right-semi joins.\n\nThanks\nRichard\n\nOn Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina <[email protected]> wrote:\nI have reviewed your patch and I think it is better to add an Assert for \nJOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to \nprevent the use of RIGHT_SEMI for these types of connections (NestedLoop \nand MergeJoin).Hmm, I don't see why this is necessary. The planner should alreadyguarantee that we won't have nestloops/mergejoins with right-semi joins.ThanksRichard",
"msg_date": "Tue, 5 Mar 2024 10:44:11 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi Richard\n Agree +1 ,I think can push now.\n\nRichard\n\nOn Tue, 5 Mar 2024 at 10:44, Richard Guo <[email protected]> wrote:\n\n>\n> On Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina <[email protected]>\n> wrote:\n>\n>> I have reviewed your patch and I think it is better to add an Assert for\n>> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to\n>> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop\n>> and MergeJoin).\n>\n>\n> Hmm, I don't see why this is necessary. The planner should already\n> guarantee that we won't have nestloops/mergejoins with right-semi joins.\n>\n> Thanks\n> Richard\n>\n\nHi Richard Agree +1 ,I think can push now.RichardOn Tue, 5 Mar 2024 at 10:44, Richard Guo <[email protected]> wrote:On Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina <[email protected]> wrote:\nI have reviewed your patch and I think it is better to add an Assert for \nJOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to \nprevent the use of RIGHT_SEMI for these types of connections (NestedLoop \nand MergeJoin).Hmm, I don't see why this is necessary. The planner should alreadyguarantee that we won't have nestloops/mergejoins with right-semi joins.ThanksRichard",
"msg_date": "Tue, 5 Mar 2024 11:05:46 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "To be honest, I didn't see it in the code, could you tell me where they \nare, please?\n\nOn 05.03.2024 05:44, Richard Guo wrote:\n>\n> On Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina \n> <[email protected]> wrote:\n>\n> I have reviewed your patch and I think it is better to add an\n> Assert for\n> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to\n> prevent the use of RIGHT_SEMI for these types of connections\n> (NestedLoop\n> and MergeJoin).\n>\n>\n> Hmm, I don't see why this is necessary. The planner should already\n> guarantee that we won't have nestloops/mergejoins with right-semi joins.\n>\n> Thanks\n> Richard\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nTo be honest, I didn't see it in the code, could you tell me\n where they are, please?\n\nOn 05.03.2024 05:44, Richard Guo wrote:\n\n\n\n\n\n\n\nOn Tue, Jan 30, 2024 at\n 2:51 PM Alena Rybakina <[email protected]>\n wrote:\n\n I have reviewed your patch and I think it is better to add\n an Assert for \n JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop\n functions to \n prevent the use of RIGHT_SEMI for these types of connections\n (NestedLoop \n and MergeJoin).\n\n\nHmm, I don't see why this is necessary. The planner\n should already\n guarantee that we won't have nestloops/mergejoins with\n right-semi joins.\n\n Thanks\n Richard\n\n\n\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 5 Mar 2024 23:10:00 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi Alena Rybakina\nFor merge join\n+ /*\n+ * For now we do not support RIGHT_SEMI join in mergejoin.\n+ */\n+ if (jointype == JOIN_RIGHT_SEMI)\n+ {\n+ *mergejoin_allowed = false;\n+ return NIL;\n+ }\n+\nTanks\n\nOn Wed, 6 Mar 2024 at 04:10, Alena Rybakina <[email protected]>\nwrote:\n\n> To be honest, I didn't see it in the code, could you tell me where they\n> are, please?\n> On 05.03.2024 05:44, Richard Guo wrote:\n>\n>\n> On Tue, Jan 30, 2024 at 2:51 PM Alena Rybakina <[email protected]>\n> wrote:\n>\n>> I have reviewed your patch and I think it is better to add an Assert for\n>> JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop functions to\n>> prevent the use of RIGHT_SEMI for these types of connections (NestedLoop\n>> and MergeJoin).\n>\n>\n> Hmm, I don't see why this is necessary. The planner should already\n> guarantee that we won't have nestloops/mergejoins with right-semi joins.\n>\n> Thanks\n> Richard\n>\n> --\n> Regards,\n> Alena Rybakina\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n\nHi Alena RybakinaFor merge join+ /*+ * For now we do not support RIGHT_SEMI join in mergejoin.+ */+ if (jointype == JOIN_RIGHT_SEMI)+ {+ *mergejoin_allowed = false;+ return NIL;+ }+TanksOn Wed, 6 Mar 2024 at 04:10, Alena Rybakina <[email protected]> wrote:\n\nTo be honest, I didn't see it in the code, could you tell me\n where they are, please?\n\nOn 05.03.2024 05:44, Richard Guo wrote:\n\n\n\n\n\n\nOn Tue, Jan 30, 2024 at\n 2:51 PM Alena Rybakina <[email protected]>\n wrote:\n\n I have reviewed your patch and I think it is better to add\n an Assert for \n JOIN_RIGHT_SEMI to the ExecMergeJoin and ExecNestLoop\n functions to \n prevent the use of RIGHT_SEMI for these types of connections\n (NestedLoop \n and MergeJoin).\n\n\nHmm, I don't see why this is necessary. The planner\n should already\n guarantee that we won't have nestloops/mergejoins with\n right-semi joins.\n\n Thanks\n Richard\n\n\n\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 6 Mar 2024 10:23:23 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On 06.03.2024 05:23, wenhui qiu wrote:\n>\n>\n> Hi Alena Rybakina\n> For merge join\n> + /*\n> + * For now we do not support RIGHT_SEMI join in mergejoin.\n> + */\n> + if (jointype == JOIN_RIGHT_SEMI)\n> + {\n> + *mergejoin_allowed = false;\n> + return NIL;\n> + }\n> +\n> Tanks\n>\n>\nYes, I see it, thank you. Sorry for the noise.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\nOn 06.03.2024 05:23, wenhui qiu wrote:\n\n\n\n\nHi\n Alena Rybakina\n For merge join\n + /*\n + * For now we do not support RIGHT_SEMI join in mergejoin.\n + */\n + if (jointype == JOIN_RIGHT_SEMI)\n + {\n + *mergejoin_allowed = false;\n + return NIL;\n + }\n +\n Tanks\n\n\n\n Yes, I see it, thank you. Sorry for the noise.\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 9 Mar 2024 16:24:25 +0300",
"msg_from": "Alena Rybakina <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Here is another rebase with a commit message to help review. I also\ntweaked some comments.\n\nThanks\nRichard",
"msg_date": "Thu, 25 Apr 2024 11:28:37 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi Richard\n Thank you so much for your tireless work on this,I see the new version\nof the patch improves some of the comments .I think it can commit in July\n\n\nThanks\n\nOn Thu, 25 Apr 2024 at 11:28, Richard Guo <[email protected]> wrote:\n\n> Here is another rebase with a commit message to help review. I also\n> tweaked some comments.\n>\n> Thanks\n> Richard\n>\n\nHi Richard Thank you so much for your tireless work on this,I see the new version of the patch improves some of the comments .I think it can commit in JulyThanksOn Thu, 25 Apr 2024 at 11:28, Richard Guo <[email protected]> wrote:Here is another rebase with a commit message to help review. I alsotweaked some comments.ThanksRichard",
"msg_date": "Mon, 29 Apr 2024 09:36:58 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi, Richard\n\n> On Apr 25, 2024, at 11:28, Richard Guo <[email protected]> wrote:\n> \n> Here is another rebase with a commit message to help review. I also\n> tweaked some comments.\n\nThank you for updating the patch, here are some comments on the v5 patch.\n\n+\t/*\n+\t * For now we do not support RIGHT_SEMI join in mergejoin or nestloop\n+\t * join.\n+\t */\n+\tif (jointype == JOIN_RIGHT_SEMI)\n+\t\treturn;\n+\n\nHow about adding some reasons here? \n\n+ * this is a right-semi join, or this is a right/right-anti/full join and\n+ * there are nonmergejoinable join clauses. The executor's mergejoin\n\nMaybe we can put the right-semi join together with the right/right-anti/full\njoin. Is there any other significance by putting it separately?\n\n\nMaybe the following comments also should be updated. Right?\n\ndiff --git a/src/backend/optimizer/prep/prepjointree.c b/src/backend/optimizer/prep/prepjointree.c\nindex 5482ab85a7..791cbc551e 100644\n--- a/src/backend/optimizer/prep/prepjointree.c\n+++ b/src/backend/optimizer/prep/prepjointree.c\n@@ -455,8 +455,8 @@ pull_up_sublinks_jointree_recurse(PlannerInfo *root, Node *jtnode,\n * point of the available_rels machinations is to ensure that we only\n * pull up quals for which that's okay.\n *\n- * We don't expect to see any pre-existing JOIN_SEMI, JOIN_ANTI, or\n- * JOIN_RIGHT_ANTI jointypes here.\n+ * We don't expect to see any pre-existing JOIN_SEMI, JOIN_ANTI,\n+ * JOIN_RIGHT_SEMI, or JOIN_RIGHT_ANTI jointypes here.\n */\n switch (j->jointype)\n {\n@@ -2951,6 +2951,7 @@ reduce_outer_joins_pass2(Node *jtnode,\n * so there's no way that upper quals could refer to their\n * righthand sides, and no point in checking. We don't expect\n * to see JOIN_RIGHT_ANTI yet.\n+ * Does JOIN_RIGHT_SEMI is expected here?\n */\n break;\n default:\n\n",
"msg_date": "Mon, 24 Jun 2024 05:27:54 +0000",
"msg_from": "Li Japin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Thank you for reviewing.\n\nOn Mon, Jun 24, 2024 at 1:27 PM Li Japin <[email protected]> wrote:\n> + /*\n> + * For now we do not support RIGHT_SEMI join in mergejoin or nestloop\n> + * join.\n> + */\n> + if (jointype == JOIN_RIGHT_SEMI)\n> + return;\n> +\n>\n> How about adding some reasons here?\n\nI've included a brief explanation in select_mergejoin_clauses.\n\n> + * this is a right-semi join, or this is a right/right-anti/full join and\n> + * there are nonmergejoinable join clauses. The executor's mergejoin\n>\n> Maybe we can put the right-semi join together with the right/right-anti/full\n> join. Is there any other significance by putting it separately?\n\nI don't think so. The logic is different: for right-semi join we will\nalways set *mergejoin_allowed to false, while for right/right-anti/full\njoin it is set to false only if there are nonmergejoinable join clauses.\n\n> Maybe the following comments also should be updated. Right?\n\nCorrect. And there are a few more places where we need to mention\nJOIN_RIGHT_SEMI, such as in reduce_outer_joins_pass2 and in the comment\nfor SpecialJoinInfo.\n\n\nI noticed that this patch changes the plan of a query in join.sql from\na semi join to right semi join, compromising the original purpose of\nthis query, which was to test the fix for neqjoinsel's behavior for\nsemijoins (see commit 7ca25b7d).\n\n--\n-- semijoin selectivity for <>\n--\nexplain (costs off)\nselect * from int4_tbl i4, tenk1 a\nwhere exists(select * from tenk1 b\n where a.twothousand = b.twothousand and a.fivethous <> b.fivethous)\n and i4.f1 = a.tenthous;\n\nSo I've changed this test case a bit so that it is still testing what it\nis supposed to test.\n\nIn passing, I've also updated the commit message to clarify that this\npatch does not address the support of \"Right Semi Join\" for merge joins.\n\nThanks\nRichard",
"msg_date": "Mon, 24 Jun 2024 17:59:03 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Mon, 24 Jun 2024 at 17:59, Richard Guo <[email protected]> wrote:\n> Thank you for reviewing.\n>\n> On Mon, Jun 24, 2024 at 1:27 PM Li Japin <[email protected]> wrote:\n>> + /*\n>> + * For now we do not support RIGHT_SEMI join in mergejoin or nestloop\n>> + * join.\n>> + */\n>> + if (jointype == JOIN_RIGHT_SEMI)\n>> + return;\n>> +\n>>\n>> How about adding some reasons here?\n>\n> I've included a brief explanation in select_mergejoin_clauses.\n>\n\nThank you for updating the patch.\n\n>> + * this is a right-semi join, or this is a right/right-anti/full join and\n>> + * there are nonmergejoinable join clauses. The executor's mergejoin\n>>\n>> Maybe we can put the right-semi join together with the right/right-anti/full\n>> join. Is there any other significance by putting it separately?\n>\n> I don't think so. The logic is different: for right-semi join we will\n> always set *mergejoin_allowed to false, while for right/right-anti/full\n> join it is set to false only if there are nonmergejoinable join clauses.\n>\n\nGot it. Thanks for the explanation.\n\n>> Maybe the following comments also should be updated. Right?\n>\n> Correct. And there are a few more places where we need to mention\n> JOIN_RIGHT_SEMI, such as in reduce_outer_joins_pass2 and in the comment\n> for SpecialJoinInfo.\n>\n>\n> I noticed that this patch changes the plan of a query in join.sql from\n> a semi join to right semi join, compromising the original purpose of\n> this query, which was to test the fix for neqjoinsel's behavior for\n> semijoins (see commit 7ca25b7d).\n>\n> --\n> -- semijoin selectivity for <>\n> --\n> explain (costs off)\n> select * from int4_tbl i4, tenk1 a\n> where exists(select * from tenk1 b\n> where a.twothousand = b.twothousand and a.fivethous <> b.fivethous)\n> and i4.f1 = a.tenthous;\n>\n> So I've changed this test case a bit so that it is still testing what it\n> is supposed to test.\n>\n> In passing, I've also updated the commit message to clarify that this\n> patch does not address the support of \"Right Semi Join\" for merge joins.\n>\n\nTested and looks good to me!\n\n-- \nRegrads,\nJapin Li\n\n\n",
"msg_date": "Tue, 25 Jun 2024 08:51:03 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi Japin Li\n Thank you for your reviewing ,This way the notes are more accurate and\ncomplete. Thanks also to the author for updating the patch ,I also tested\nthe new patch ,It looks good to me\n\n\nRegrads\n\nJapin Li <[email protected]> 于2024年6月25日周二 08:51写道:\n\n> On Mon, 24 Jun 2024 at 17:59, Richard Guo <[email protected]> wrote:\n> > Thank you for reviewing.\n> >\n> > On Mon, Jun 24, 2024 at 1:27 PM Li Japin <[email protected]> wrote:\n> >> + /*\n> >> + * For now we do not support RIGHT_SEMI join in mergejoin or\n> nestloop\n> >> + * join.\n> >> + */\n> >> + if (jointype == JOIN_RIGHT_SEMI)\n> >> + return;\n> >> +\n> >>\n> >> How about adding some reasons here?\n> >\n> > I've included a brief explanation in select_mergejoin_clauses.\n> >\n>\n> Thank you for updating the patch.\n>\n> >> + * this is a right-semi join, or this is a right/right-anti/full join\n> and\n> >> + * there are nonmergejoinable join clauses. The executor's mergejoin\n> >>\n> >> Maybe we can put the right-semi join together with the\n> right/right-anti/full\n> >> join. Is there any other significance by putting it separately?\n> >\n> > I don't think so. The logic is different: for right-semi join we will\n> > always set *mergejoin_allowed to false, while for right/right-anti/full\n> > join it is set to false only if there are nonmergejoinable join clauses.\n> >\n>\n> Got it. Thanks for the explanation.\n>\n> >> Maybe the following comments also should be updated. Right?\n> >\n> > Correct. And there are a few more places where we need to mention\n> > JOIN_RIGHT_SEMI, such as in reduce_outer_joins_pass2 and in the comment\n> > for SpecialJoinInfo.\n> >\n> >\n> > I noticed that this patch changes the plan of a query in join.sql from\n> > a semi join to right semi join, compromising the original purpose of\n> > this query, which was to test the fix for neqjoinsel's behavior for\n> > semijoins (see commit 7ca25b7d).\n> >\n> > --\n> > -- semijoin selectivity for <>\n> > --\n> > explain (costs off)\n> > select * from int4_tbl i4, tenk1 a\n> > where exists(select * from tenk1 b\n> > where a.twothousand = b.twothousand and a.fivethous <>\n> b.fivethous)\n> > and i4.f1 = a.tenthous;\n> >\n> > So I've changed this test case a bit so that it is still testing what it\n> > is supposed to test.\n> >\n> > In passing, I've also updated the commit message to clarify that this\n> > patch does not address the support of \"Right Semi Join\" for merge joins.\n> >\n>\n> Tested and looks good to me!\n>\n> --\n> Regrads,\n> Japin Li\n>\n\nHi Japin Li Thank you for your reviewing ,This way the notes are more accurate and complete. Thanks also to the author for updating the patch ,I also tested the new patch ,It looks good to me RegradsJapin Li <[email protected]> 于2024年6月25日周二 08:51写道:On Mon, 24 Jun 2024 at 17:59, Richard Guo <[email protected]> wrote:\n> Thank you for reviewing.\n>\n> On Mon, Jun 24, 2024 at 1:27 PM Li Japin <[email protected]> wrote:\n>> + /*\n>> + * For now we do not support RIGHT_SEMI join in mergejoin or nestloop\n>> + * join.\n>> + */\n>> + if (jointype == JOIN_RIGHT_SEMI)\n>> + return;\n>> +\n>>\n>> How about adding some reasons here?\n>\n> I've included a brief explanation in select_mergejoin_clauses.\n>\n\nThank you for updating the patch.\n\n>> + * this is a right-semi join, or this is a right/right-anti/full join and\n>> + * there are nonmergejoinable join clauses. The executor's mergejoin\n>>\n>> Maybe we can put the right-semi join together with the right/right-anti/full\n>> join. 
Is there any other significance by putting it separately?\n>\n> I don't think so. The logic is different: for right-semi join we will\n> always set *mergejoin_allowed to false, while for right/right-anti/full\n> join it is set to false only if there are nonmergejoinable join clauses.\n>\n\nGot it. Thanks for the explanation.\n\n>> Maybe the following comments also should be updated. Right?\n>\n> Correct. And there are a few more places where we need to mention\n> JOIN_RIGHT_SEMI, such as in reduce_outer_joins_pass2 and in the comment\n> for SpecialJoinInfo.\n>\n>\n> I noticed that this patch changes the plan of a query in join.sql from\n> a semi join to right semi join, compromising the original purpose of\n> this query, which was to test the fix for neqjoinsel's behavior for\n> semijoins (see commit 7ca25b7d).\n>\n> --\n> -- semijoin selectivity for <>\n> --\n> explain (costs off)\n> select * from int4_tbl i4, tenk1 a\n> where exists(select * from tenk1 b\n> where a.twothousand = b.twothousand and a.fivethous <> b.fivethous)\n> and i4.f1 = a.tenthous;\n>\n> So I've changed this test case a bit so that it is still testing what it\n> is supposed to test.\n>\n> In passing, I've also updated the commit message to clarify that this\n> patch does not address the support of \"Right Semi Join\" for merge joins.\n>\n\nTested and looks good to me!\n\n-- \nRegrads,\nJapin Li",
"msg_date": "Tue, 25 Jun 2024 11:00:42 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Mon, Jun 24, 2024 at 5:59 PM Richard Guo <[email protected]> wrote:\n> I noticed that this patch changes the plan of a query in join.sql from\n> a semi join to right semi join, compromising the original purpose of\n> this query, which was to test the fix for neqjoinsel's behavior for\n> semijoins (see commit 7ca25b7d).\n>\n> --\n> -- semijoin selectivity for <>\n> --\n> explain (costs off)\n> select * from int4_tbl i4, tenk1 a\n> where exists(select * from tenk1 b\n> where a.twothousand = b.twothousand and a.fivethous <> b.fivethous)\n> and i4.f1 = a.tenthous;\n>\n> So I've changed this test case a bit so that it is still testing what it\n> is supposed to test.\n\nI've refined this test case further to make it more stable by using an\nadditional filter 'a.tenthous < 5000'. Besides, I noticed a surplus\nblank line in ExecHashJoinImpl(). I've removed it in the v7 patch.\n\nThanks\nRichard",
"msg_date": "Fri, 28 Jun 2024 14:54:39 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Fri, Jun 28, 2024 at 2:54 PM Richard Guo <[email protected]> wrote:\n> On Mon, Jun 24, 2024 at 5:59 PM Richard Guo <[email protected]> wrote:\n> > I noticed that this patch changes the plan of a query in join.sql from\n> > a semi join to right semi join, compromising the original purpose of\n> > this query, which was to test the fix for neqjoinsel's behavior for\n> > semijoins (see commit 7ca25b7d).\n> >\n> > --\n> > -- semijoin selectivity for <>\n> > --\n> > explain (costs off)\n> > select * from int4_tbl i4, tenk1 a\n> > where exists(select * from tenk1 b\n> > where a.twothousand = b.twothousand and a.fivethous <> b.fivethous)\n> > and i4.f1 = a.tenthous;\n> >\n> > So I've changed this test case a bit so that it is still testing what it\n> > is supposed to test.\n>\n> I've refined this test case further to make it more stable by using an\n> additional filter 'a.tenthous < 5000'. Besides, I noticed a surplus\n> blank line in ExecHashJoinImpl(). I've removed it in the v7 patch.\n\nBTW, I've also verified the empty-rel optimization for hash join and\nAFAICT it works correctly for the new right-semi join.\n\nThanks\nRichard\n\n\n",
"msg_date": "Fri, 28 Jun 2024 15:21:35 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Fri, Jun 28, 2024 at 3:21 PM Richard Guo <[email protected]> wrote:\n> On Fri, Jun 28, 2024 at 2:54 PM Richard Guo <[email protected]> wrote:\n> > I've refined this test case further to make it more stable by using an\n> > additional filter 'a.tenthous < 5000'. Besides, I noticed a surplus\n> > blank line in ExecHashJoinImpl(). I've removed it in the v7 patch.\n>\n> BTW, I've also verified the empty-rel optimization for hash join and\n> AFAICT it works correctly for the new right-semi join.\n\nHere is a new rebase.\n\nBarring objections, I'm planning to push it soon.\n\nThanks\nRichard",
"msg_date": "Thu, 4 Jul 2024 17:17:41 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "Hi Richard Guo\n Thank you for updating the patch.Tested on v8 , It looks good to me\n\n\n\nThanks\n\nRichard Guo <[email protected]> 于2024年7月4日周四 17:18写道:\n\n> On Fri, Jun 28, 2024 at 3:21 PM Richard Guo <[email protected]>\n> wrote:\n> > On Fri, Jun 28, 2024 at 2:54 PM Richard Guo <[email protected]>\n> wrote:\n> > > I've refined this test case further to make it more stable by using an\n> > > additional filter 'a.tenthous < 5000'. Besides, I noticed a surplus\n> > > blank line in ExecHashJoinImpl(). I've removed it in the v7 patch.\n> >\n> > BTW, I've also verified the empty-rel optimization for hash join and\n> > AFAICT it works correctly for the new right-semi join.\n>\n> Here is a new rebase.\n>\n> Barring objections, I'm planning to push it soon.\n>\n> Thanks\n> Richard\n>\n\nHi Richard Guo Thank you for updating the patch.Tested on v8 , It looks good to me Thanks Richard Guo <[email protected]> 于2024年7月4日周四 17:18写道:On Fri, Jun 28, 2024 at 3:21 PM Richard Guo <[email protected]> wrote:\n> On Fri, Jun 28, 2024 at 2:54 PM Richard Guo <[email protected]> wrote:\n> > I've refined this test case further to make it more stable by using an\n> > additional filter 'a.tenthous < 5000'. Besides, I noticed a surplus\n> > blank line in ExecHashJoinImpl(). I've removed it in the v7 patch.\n>\n> BTW, I've also verified the empty-rel optimization for hash join and\n> AFAICT it works correctly for the new right-semi join.\n\nHere is a new rebase.\n\nBarring objections, I'm planning to push it soon.\n\nThanks\nRichard",
"msg_date": "Thu, 4 Jul 2024 17:25:12 +0800",
"msg_from": "wenhui qiu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Thu, 04 Jul 2024 at 17:17, Richard Guo <[email protected]> wrote:\n> On Fri, Jun 28, 2024 at 3:21 PM Richard Guo <[email protected]> wrote:\n>> On Fri, Jun 28, 2024 at 2:54 PM Richard Guo <[email protected]> wrote:\n>> > I've refined this test case further to make it more stable by using an\n>> > additional filter 'a.tenthous < 5000'. Besides, I noticed a surplus\n>> > blank line in ExecHashJoinImpl(). I've removed it in the v7 patch.\n>>\n>> BTW, I've also verified the empty-rel optimization for hash join and\n>> AFAICT it works correctly for the new right-semi join.\n>\n> Here is a new rebase.\n>\n> Barring objections, I'm planning to push it soon.\n>\n\nThanks for updating the patch. It looks good to me, except for a minor nitpick:\n\ns/right-semijoin/right-semi join/\n\n\n-- \nRegrads,\nJapin Li\n\n\n",
"msg_date": "Thu, 04 Jul 2024 23:18:23 +0800",
"msg_from": "Japin Li <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
},
{
"msg_contents": "On Thu, Jul 4, 2024 at 11:18 PM Japin Li <[email protected]> wrote:\n\n> On Thu, 04 Jul 2024 at 17:17, Richard Guo <[email protected]> wrote:\n> > Here is a new rebase.\n> >\n> > Barring objections, I'm planning to push it soon.\n\nPushed. Thanks for all the reviews.\n\n> Thanks for updating the patch. It looks good to me, except for a minor nitpick:\n>\n> s/right-semijoin/right-semi join/\n\nI did not take this one. The comment nearby for RIGHT_ANTI uses\n'right-antijoin', and I think we'd better adopt a consistent pattern for\nRIGHT_SEMI.\n\nThanks\nRichard\n\n\n",
"msg_date": "Fri, 5 Jul 2024 09:00:47 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Support \"Right Semi Join\" plan shapes"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI found that GRANT MAINTAIN is not tab-completed with ON, so here is a \npatch.\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 18 Apr 2023 18:08:23 +0900",
"msg_from": "Ken Kato <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tab completion for GRANT MAINTAIN"
},
{
"msg_contents": "On 18.04.23 11:08, Ken Kato wrote:\n> Hi hackers,\n>\n> I found that GRANT MAINTAIN is not tab-completed with ON, so here is a \n> patch.\n\nHi,\n\nthe patch applies cleanly and now GRANT MAINTAIN tab-completes with ON. \nFor the sake of completeness I tested a whole statement:\n\npostgres=# GRANT M<tab>\n\n=> postgres=# GRANT MAINTAIN\n\npostgres=# GRANT MAINTAIN <tab>\n\n=> postgres=# GRANT MAINTAIN ON\n\npostgres=# GRANT MAINTAIN ON t <tab>\n\n=> postgres=# GRANT MAINTAIN ON t TO\n\npostgres=# GRANT MAINTAIN ON t TO <tab><tab>\n\n=>\n\nCURRENT_ROLE pg_monitor pg_use_reserved_connections\nCURRENT_USER pg_read_all_data pg_write_all_data\npg_checkpoint pg_read_all_settings pg_write_server_files\npg_create_subscription pg_read_all_stats postgres\npg_database_owner pg_read_server_files PUBLIC\npg_execute_server_program pg_signal_backend SESSION_USER\npg_maintain pg_stat_scan_tables\n\nI've marked the CF entry as \"ready for committer\"\n\nBest, Jim\n\n\n\n",
"msg_date": "Wed, 19 Apr 2023 13:50:30 +0200",
"msg_from": "Jim Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for GRANT MAINTAIN"
},
{
"msg_contents": "Jim Jones <[email protected]> writes:\n> On 18.04.23 11:08, Ken Kato wrote:\n>> I found that GRANT MAINTAIN is not tab-completed with ON, so here is a \n>> patch.\n\n> I've marked the CF entry as \"ready for committer\"\n\nYup, clearly an oversight. Pushed.\n\n(One could wish that it didn't take touching three or so places in\ntab-complete.c to add a privilege, especially when a naive hacker might\nthink he was done after touching Privilege_options_of_grant_and_revoke.\nI didn't see any easy way to improve that situation though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:54:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for GRANT MAINTAIN"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 10:54:05AM -0400, Tom Lane wrote:\n> (One could wish that it didn't take touching three or so places in\n> tab-complete.c to add a privilege, especially when a naive hacker might\n> think he was done after touching Privilege_options_of_grant_and_revoke.\n> I didn't see any easy way to improve that situation though.)\n\nSorry, I think this was my fault. Thanks for fixing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Apr 2023 13:17:36 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for GRANT MAINTAIN"
}
] |
[
{
"msg_contents": "Hi,\n\nget_collation_actual_version() in pg_locale.c currently \nexcludes C.UTF-8 (and more generally C.*) from versioning,\nwhich makes pg_collation.collversion being empty for these\ncollations.\n\nchar *\nget_collation_actual_version(char collprovider, const char *collcollate)\n{\n....\n\tif (collprovider == COLLPROVIDER_LIBC &&\n\t\tpg_strcasecmp(\"C\", collcollate) != 0 &&\n\t\tpg_strncasecmp(\"C.\", collcollate, 2) != 0 &&\n\t\tpg_strcasecmp(\"POSIX\", collcollate) != 0)\n\nThis seems to be based on the idea that C.* collations provide an\nimmutable sort like \"C\", but it appears that it's not the case.\n\nFor instance, consider how these C.UTF-8 comparisons differ between\nrecent linux systems:\n\nU+1D400 = Mathematical Bold Capital A\n\nDebian 9.13 (glibc 2.24)\n=> select 'A' < E'\\U0001D400' collate \"C.UTF-8\";\n ?column? \n----------\n t\n\nDebian 10.13 (glibc 2.28)\n=> select 'A' < E'\\U0001D400' collate \"C.UTF-8\";\n ?column? \n----------\n f\n\nDebian 11.6 (glibc 2.31)\n=> select 'A' < E'\\U0001D400' collate \"C.UTF-8\";\n ?column? \n----------\n f\n\nUbuntu 22.04 (glibc 2.35)\n=> select 'A' < E'\\U0001D400' collate \"C.UTF-8\";\n ?column? \n----------\n t\n\nSo I suggest the attached patch to no longer exclude these collations\nfrom the generic versioning.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Tue, 18 Apr 2023 14:35:50 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 12:36 AM Daniel Verite <[email protected]> wrote:\n> This seems to be based on the idea that C.* collations provide an\n> immutable sort like \"C\", but it appears that it's not the case.\n\nHmm. It seems I added that exemption initially for FreeBSD only in\nca051d8b101, and then merged the cases for several OSes in\nbeb4480c853.\n\nIt's extremely surprising to me that the sort order changed. I\nexpected the sort order to be code point order:\n\nhttps://sourceware.org/glibc/wiki/Proposals/C.UTF-8\n\nOne interesting thing is that it seems that it might have been\nindependently invented by Debian (?) and then harmonised with glibc\n2.35:\n\nhttps://www.mail-archive.com/[email protected]/msg1871363.html\n\nWas the earlier Debian version buggy, or did it simply have a\ndifferent idea of what the sort order should be, intentionally? Ugh.\n From your examples, we can see that the older Debian system did not\nhave A < [some 4 digit code point], while the later version did (as\nexpected). If so then it might be tempting to *not* do what you're\nsuggesting, since the stated goal of the thing is to be stable from\nnow on. But it changed once in the early years of its existence!\nAnnoying.\n\nMany OSes have a locale with this name. I don't know this history,\nwho did it first etc, but now I am wondering if they all took the\n\"obvious\" interpretation, that it should be code-point based,\nextrapolating from \"C\" (really memcmp order):\n\nhttps://unix.stackexchange.com/questions/597962/how-widespread-is-the-c-utf-8-locale\n\n\n",
"msg_date": "Wed, 19 Apr 2023 07:48:05 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Wed, 2023-04-19 at 07:48 +1200, Thomas Munro wrote:\n> Many OSes have a locale with this name. I don't know this history,\n> who did it first etc, but now I am wondering if they all took the\n> \"obvious\" interpretation, that it should be code-point based,\n> extrapolating from \"C\" (really memcmp order):\n\nmemcmp() is not the same as code-point order in all encodings, right?\n\nI've been thinking that we should have a \"provider=none\" for the\nspecial cases that use memcmp(). It's not using libc as a collation\nprovider; it's really postgres in control of the semantics.\n\nThat would clean up the documentation and the code a bit, and make it\nmore clear which locales are being passed to the provider and which\nones aren't.\n\nIf we are passing it to a provider (e.g. \"C.UTF-8\"), we shouldn't make\nunnecessary assumptions about what the provider will do with it.\n\nFor what it's worth, in my recent ICU language tag work, I special-\ncased ICU locales with language \"C\" or \"POSIX\" to map to \"en-US-u-va-\nposix\", disregarding everything else (collation attributes, etc.). I\nbelieve that's the right thing based on the behavior I observed: for\nthe POSIX variant of en-US, ICU seems to disregard other things such as\ncase insensitivity. But it still ultimately goes to the provider and\nICU has particular rules for that locale -- I don't assume memcpy-like\nsemantics or code point order.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 18 Apr 2023 18:30:13 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 1:30 PM Jeff Davis <[email protected]> wrote:\n> On Wed, 2023-04-19 at 07:48 +1200, Thomas Munro wrote:\n> > Many OSes have a locale with this name. I don't know this history,\n> > who did it first etc, but now I am wondering if they all took the\n> > \"obvious\" interpretation, that it should be code-point based,\n> > extrapolating from \"C\" (really memcmp order):\n>\n> memcmp() is not the same as code-point order in all encodings, right?\n\nRight. I wasn't trying to suggest that *we* should assume that, I was\njust thinking out loud about how a libc implementor would surely think\nthat a \"C.encoding\" should work, in the spirit of \"C\", given that the\nstandard doesn't tell us IIUC. It looks like for technical reasons\ninside glibc, that couldn't be done before 2.35:\n\nhttps://sourceware.org/bugzilla/show_bug.cgi?id=17318\n\nThat strengthens my opinion that C.UTF-8 (the real C.UTF-8 supplied by\nthe glibc project) isn't supposed to be versioned, but it's extremely\nunfortunate that a bunch of OSes (Debian and maybe more) have been\nsorting text in some other order under that name for years.\n\n> I've been thinking that we should have a \"provider=none\" for the\n> special cases that use memcmp(). It's not using libc as a collation\n> provider; it's really postgres in control of the semantics.\n\nYeah, interesting idea.\n\n\n",
"msg_date": "Wed, 19 Apr 2023 14:07:13 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "\tThomas Munro wrote:\n\n> It looks like for technical reasons\n> inside glibc, that couldn't be done before 2.35:\n> \n> https://sourceware.org/bugzilla/show_bug.cgi?id=17318\n> \n> That strengthens my opinion that C.UTF-8 (the real C.UTF-8 supplied\n> by the glibc project) isn't supposed to be versioned, but it's\n> extremely unfortunate that a bunch of OSes (Debian and maybe more)\n> have been sorting text in some other order under that name for\n> years.\n\nYes. This is consistent with Debian/Ubuntu patches in \nglibc/localedata/locales/C\n\nglibc-2.35 is not patched, and upstream has this:\n LC_COLLATE\n % The keyword 'codepoint_collation' in any part of any LC_COLLATE\n % immediately discards all collation information and causes the\n % locale to use strcmp/wcscmp for collation comparison. This is\n % exactly what is needed for C (ASCII) or C.UTF-8.\n codepoint_collation\n END LC_COLLATE\n\nBut in older versions, glibc doesn't have the locales/C data file.\nDebian adds it in debian/patches/localedata/C with that kind of\ncontent:\n\n* glibc 2.31 Debian 11\n LC_COLLATE\n order_start forward\n <U0000>\n ..\n <U007F>\n <U0080>\n ..\n <U00FF>\n etc...\n\nBut as explained in the above-linked bugzilla entry, that did not\nresult in true byte-comparison semantics, for several reasons\nthat got fixed in 2.35.\n\nSo this looks like a solved problem for anyone starting to use these\ncollation with glibc 2.35 or newer (or other OSes that don't have a\ncompatibility issue with them in the first place).\nBut Debian/Ubuntu users upgrading from the older C.* to 2.35+ will not\nbe having the normal warning about the need to reindex.\n\nI understand that my proposal to version C.* like any other collation\nmight be erring on the side of caution, but ignoring these collation\nchanges on at least one major OS does not feel right either.\nMaybe we should consider doing platform-dependent checks?\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Sat, 22 Apr 2023 19:22:24 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Wed, 2023-04-19 at 14:07 +1200, Thomas Munro wrote:\n> That strengthens my opinion that C.UTF-8 (the real C.UTF-8 supplied\n> by\n> the glibc project) isn't supposed to be versioned, but it's extremely\n> unfortunate that a bunch of OSes (Debian and maybe more) have been\n> sorting text in some other order under that name for years.\n\nWhat should we do with locales like C.UTF-8 in both libc and ICU? \n\nWe either need to capture it and use the memcmp/pg_ascii code paths so\nit doesn't use the provider at all (like C); or if we send it to the\nprovider, we can't have too many expectations about what will be done\nwith it (even if we know what \"should\" happen).\n\nIf we capture it like the C locale, then where do we draw the line? Any\nlocale that begins with \"C.\"? What if the language part is C but there\nis some other part to the locale? What about lower case? Should all of\nthese apply the same way except with POSIX? What about backwards\ncompatibility?\n\nIf we pass it to the provider:\n\n* ICU: Recent versions of ICU don't recognize C.UTF-8 at all, and if\nyou try to open it, you'll get the root collator (with warning or\nerror, which is not great for such a common locale name). ICU versions\n63 and earlier recognize C.UTF-8 as en-US-u-va-posix (a.k.a.\nen_US_POSIX), which has some adjustments to match expectations of C\nsorting (e.g. upper case first).\n\n* libc: problems as raised in this thread.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 25 May 2023 11:30:11 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> What should we do with locales like C.UTF-8 in both libc and ICU? \n\nI vote for passing those to the existing C-specific code paths,\nwhereever we have any (not sure that we do for <ctype.h> functionality).\nThe semantics are quite well-defined and I can see no good coming of\nallowing either provider to mess with them.\n\n> If we capture it like the C locale, then where do we draw the line? Any\n> locale that begins with \"C.\"? What if the language part is C but there\n> is some other part to the locale? What about lower case? Should all of\n> these apply the same way except with POSIX? What about backwards\n> compatibility?\n\nProbably \"C\", or \"C.anything\", or \"POSIX\", or \"POSIX.anything\".\nCase-independent might be good, but we haven't accepted such in\nthe past, so I don't feel strongly about it. (Arguably, passing\nlower case \"c\" to the provider would provide an \"out\" to anybody\nwho dislikes our choices here.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 May 2023 14:48:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Thu, 2023-05-25 at 14:48 -0400, Tom Lane wrote:\n> Jeff Davis <[email protected]> writes:\n> > What should we do with locales like C.UTF-8 in both libc and ICU? \n> \n> I vote for passing those to the existing C-specific code paths,\n\nGreat, this would be a big step toward solving the ICU usability issues\nin this thread:\n\nhttps://postgr.es/m/000b01d97465%24c34bbd60%2449e33820%24%40pcorp.us\n\n> Probably \"C\", or \"C.anything\", or \"POSIX\", or \"POSIX.anything\".\n> Case-independent might be good, but we haven't accepted such in\n> the past, so I don't feel strongly about it. (Arguably, passing\n> lower case \"c\" to the provider would provide an \"out\" to anybody\n> who dislikes our choices here.)\n\nPatch attached with your suggestions. It's based on the first patch in\nthe series I posted here:\n\nhttps://postgr.es/m/[email protected]\n\nWe still need to consider backwards compatibility. If someone has a\ncollation with locale name C.UTF-8 in an earlier version, any change to\nthe interpretation of that locale name after an upgrade carries a\ncorruption risk. The risks are different in ICU vs libc:\n\n For ICU: iculocale=C in an earlier version was a mistake that must\nhave been explicitly requested by the user. However, if such a mistake\nwas made, the indexes would have been created using the ICU root\nlocale, which is very different from the C locale. So reinterpreting\niculocale=C as memcmp() would be likely to result in index corruption.\nPatch 0002 (also based on a patch from the series linked above) solves\nthis with a pg_upgrade check for iculocale=C in versions 15 and\nearlier. The upgrade check is not likely to affect many users, and\nthose it does affect have a mis-defined collation and would benefit\nfrom the check.\n\n For libc: this change may affect any user who happened to have\nLANG=C.UTF-8 in their environment at initdb time, which is probably a\nlot of users, and some buildfarm members. However, the average risk\nseems to be much lower, because we've gone a long time with the\nassumption that C.UTF-8 has the same behavior as C, and this only\nrecently came up. Also, I'm not sure how obscure the cases are even if\nthere is a difference; perhaps they don't often occur in practice? It's\nnot clear to me how we mitigate this risk further, though.\n\nRegards,\n\tJeff Davis",
"msg_date": "Fri, 26 May 2023 10:43:09 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Fri, 2023-05-26 at 10:43 -0700, Jeff Davis wrote:\n> We still need to consider backwards compatibility. If someone has a\n> collation with locale name C.UTF-8 in an earlier version, any change\n> to\n> the interpretation of that locale name after an upgrade carries a\n> corruption risk. The risks are different in ICU vs libc:\n\n...\n\n> For libc: this change may affect any user who happened to have\n> LANG=C.UTF-8 in their environment at initdb time, which is probably a\n> lot of users, and some buildfarm members. However, the average risk\n> seems to be much lower, because we've gone a long time with the\n> assumption that C.UTF-8 has the same behavior as C, and this only\n> recently came up. Also, I'm not sure how obscure the cases are even\n> if\n> there is a difference; perhaps they don't often occur in practice?\n> It's\n> not clear to me how we mitigate this risk further, though.\n\nWe can avoid this risk by converting C.anything or POSIX.anything to\nplain \"C\" or \"POSIX\", respectively, for new collations before storing\nthe string in the catalog. For upgraded collations, we can preserve the\nexisting locale name. When opening the locale, we would still only\nrecognize plain \"C\" and \"POSIX\" as the C locale.\n\nThat would be more consistent behavior for new users, without creating\na backwards compatibility problem for existing users who happened to\ncreate a collation with C.UTF-8.\n\nFor ICU users, we'd still need the upgrade check, because even the \"C\"\nlocale was not implemented with memcmp in prior versions. But I think\nthat's fine and should be done anyway, as the behavior in that case was\nincorrect and was almost certainly a mistake by the user.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 05 Jun 2023 09:37:07 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "\tJeff Davis wrote:\n\n> > For libc: this change may affect any user who happened to have\n> > LANG=C.UTF-8 in their environment at initdb time, which is probably a\n> > lot of users, and some buildfarm members. However, the average risk\n> > seems to be much lower, because we've gone a long time with the\n> > assumption that C.UTF-8 has the same behavior as C, and this only\n> > recently came up.\n\nCurrently, neither lc_collate_is_c() nor lookup_collation_cache()\nthink that C.UTF-8 is a C collation, since they do that kind of test:\n\n\t\tif (strcmp(localeptr, \"C\") == 0)\n\t\t\tresult = true;\n\t\telse if (strcmp(localeptr, \"POSIX\") == 0)\n\t\t\tresult = true;\n\t\telse\n\t\t\tresult = false;\n\nWhat is relatively new (v15) is that we compute a version for libc\ncollations in get_collation_actual_version(), with code that assumes\nthat C.* does not need a version, implying that it's immune to\nUnicode changes. What came up in this thread is that this assumption\nis not true for at least one major platform: Debian/Ubuntu for\nreleases occurring before 2022 (glibc < 2.35).\n\n\n> We can avoid this risk by converting C.anything or POSIX.anything to\n> plain \"C\" or \"POSIX\", respectively, for new collations before storing\n> the string in the catalog. For upgraded collations, we can preserve the\n> existing locale name. When opening the locale, we would still only\n> recognize plain \"C\" and \"POSIX\" as the C locale.\n\n\nThen Postgres would not sort the same as the operating system with the\nsame locale, at least on some OS. Concerning glibc, after waiting a\nfew years, glibc<2.35 will be obsolete, and C.UTF-8 sorting like C\nwill happen by itself.\nBut in the meantime, personally I don't quite see why Postgres should\nstart forcing C.UTF-8 to sort differently in the database than in the\nOS.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 05 Jun 2023 19:43:26 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Mon, 2023-06-05 at 19:43 +0200, Daniel Verite wrote:\n> But in the meantime, personally I don't quite see why Postgres should\n> start forcing C.UTF-8 to sort differently in the database than in the\n> OS.\n\nI can see both points of view. It could be surprising to users if\nC.UTF-8 does not sort like C/memcmp, or surprising if it changes out\nfrom under them. It could also be surprising that it wouldn't sort like\nthe current OS's libc interpretation of C.UTF-8.\n\nWhat about ICU? How should provider=icu locale=C.UTF-8 behave? We\ncould:\n\na. Just pass it to the provider and see what happens (older versions of\nICU would interpret it as en-US-u-va-posix; newer versions would give\nthe root locale).\n\nb. Consistently interpret it as en-US-u-va-posix.\n\nc. Don't pass it to the provider at all and treat it with memcmp\nsemantics.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 06 Jun 2023 12:23:30 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On 6/6/23 15:23, Jeff Davis wrote:\n> On Mon, 2023-06-05 at 19:43 +0200, Daniel Verite wrote:\n>> But in the meantime, personally I don't quite see why Postgres should\n>> start forcing C.UTF-8 to sort differently in the database than in the\n>> OS.\n> \n> I can see both points of view. It could be surprising to users if\n> C.UTF-8 does not sort like C/memcmp, or surprising if it changes out\n> from under them. It could also be surprising that it wouldn't sort like\n> the current OS's libc interpretation of C.UTF-8.\n> \n> What about ICU? How should provider=icu locale=C.UTF-8 behave? We\n> could:\n> \n> a. Just pass it to the provider and see what happens (older versions of\n> ICU would interpret it as en-US-u-va-posix; newer versions would give\n> the root locale).\n> \n> b. Consistently interpret it as en-US-u-va-posix.\n> \n> c. Don't pass it to the provider at all and treat it with memcmp\n> semantics.\n\nPersonally I think this should be (a). However we should also clearly \ndocument that the semantics of such is provider/OS dependent and \ntherefore may not be what is expected/desired.\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 6 Jun 2023 15:27:19 -0400",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "\tJeff Davis wrote:\n\n> What about ICU? How should provider=icu locale=C.UTF-8 behave? We\n> could:\n> \n> a. Just pass it to the provider and see what happens (older versions of\n> ICU would interpret it as en-US-u-va-posix; newer versions would give\n> the root locale).\n> \n> b. Consistently interpret it as en-US-u-va-posix.\n> \n> c. Don't pass it to the provider at all and treat it with memcmp\n> semantics.\n\n\nI think b) and c) are quite problematic.\n\n\nFirst, en-US-u-va-posix does not sort like C.UTF-8 in glibc.\nFor one thing it seems that en-US-u-va-posix assigns zero weights to\nsome codepoints, which makes it semantically definitely different.\nFor instance consider ZERO WIDTH SPACE (U+200B):\n\npostgres=# select 'ab' < E'a\\u200Ba' COLLATE \"C.utf8\";\n ?column? \n----------\n t\n\n\npostgres=# select 'ab' < E'a\\u200Ba' COLLATE \"en-US-u-va-posix-x-icu\";\n ?column? \n----------\n f\n\nEven if ICU folks refer to u-va-posix as approximating POSIX (as in [1]),\nfor our purpose, either it sorts by codepoints or it does not,\nand it clearly does not. One consequence is that \nen-US-u-va-posix-x-icu needs to be versioned and indexes\ndepending on it need to be rebuilt on upgrades.\nOTOH the goal with C.UTF-8, that is achieved in glibc>=2.35,\nis to not need that.\n\nAlso it's not just about sorting. The semantics for the ctype-kind\nfunctions are also different.\n\nConsider matching '\\d' in a regexp. With C.UTF-8 (glibc-2.35), we only match\nASCII characters 0-9, or 10 codepoints.\nWith \"en-US-u-va-posix-x-icu\" we match 660 codepoints comprising\nall the digit characters in all languages, plus a bunch of variants\nfor mathematical symbols.\n\nFor instance consider U+FF10 (Fullwidth Digit Zero):\n\npostgres=# select E'\\uff10' collate \"C.utf8\" ~ '\\d';\n ?column? \n----------\n f\n\npostgres=# select E'\\uff10' collate \"en-US-u-va-posix-x-icu\" ~ '\\d';\n ?column? \n----------\n t\n\nIf someone dumps their C.UTF-8 database to reload into an\nICU/en-US-u-va-posix database, there is no guarantee that it\neven reloads because of semantic differences occuring\nin constraints. In general it will surely reload, but the apps\nmight not behave the same with the new database\nin a way that might be problematic.\nIt's fine if that's what they want and they explicitly ask for this\nconversion, but it's not fine if it's postgres that has quietly\ndecided that for them.\n\n\nAbout c) \"don't pass it to the operators\", it would be doable for\nsorting (ignoring the \"glibc before 2.35 does not sort like that\" issue)\nbut not for the ctype-kind functions, where postgres' own code\ndoesn't have the Unicode knowledge.\n\n\nAbout a) \"just pass it to the provider\", that seems better than b) or\nc), but still, when a user asks for provider=icu locale=C.UTF-8, \nit's a very probably a pilot error.\n\nTo me the user would be best served by a warning, if not an error,\ninforming them that it's quite probably not the combination they want.\n\n\n\n\n[1]\nhttps://sourceforge.net/p/icu/mailman/icu-support/thread/CAN49p6pvQKP93j8LMn3zBWhpk-T0qYD0TCuiHMv6Z3UPGFh3QQ%40mail.gmail.com/#msg35638356\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 07 Jun 2023 16:08:05 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "\tI wrote:\n\n> Consider matching '\\d' in a regexp. With C.UTF-8 (glibc-2.35), we\n> only match ASCII characters 0-9, or 10 codepoints. With\n> \"en-US-u-va-posix-x-icu\" we match 660 codepoints comprising all the\n> digit characters in all languages, plus a bunch of variants for\n> mathematical symbols.\n\nBTW this not specifically a C.UTF-8 versus \"en-US-u-va-posix-x-icu\"\ndifference.\nIf think that any glibc-based locale will consider that \\d\nin a regexp means [0-9], and that any ICU locale\nwill make \\d match a much larger variety of characters.\n\nWhile moving to ICU by default, we should expect that \ndifferences like that will affect apps in a way that might be\nmore or less disruptive.\n\nAnother known difference it that upper() with ICU does not do a\ncharacter-by-character conversion, for instance:\n\nWITH words(w) as (values('muß'),('final'))\n SELECT\n w,\n length(w),\n upper(w collate \"C.utf8\") as \"upper (libc)\",\n length(upper(w collate \"C.utf8\")),\n upper(w collate \"en-x-icu\") as \"upper (ICU)\",\n length(upper(w collate \"en-x-icu\"))\nFROM words;\n\n w | length | upper libc | length | upper ICU | length \n------+--------+------------+--------+-----------+--------\n muß | 3 | MUß\t |\t 3 | MUSS\t |\t4\n final | 4 | fiNAL\t |\t 4 | FINAL\t |\t5\n\n\nThe fact that the resulting string is larger that the original\nmight cause problems.\n\nIn general, we can't abstract from the fact that ICU semantics\nare different.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 07 Jun 2023 17:08:10 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On 06.06.23 21:23, Jeff Davis wrote:\n> What about ICU? How should provider=icu locale=C.UTF-8 behave? We\n> could:\n\nIt should be an error.\n\n> a. Just pass it to the provider and see what happens (older versions of\n> ICU would interpret it as en-US-u-va-posix; newer versions would give\n> the root locale).\n\nThis, but with an error instead of a warning.\n\n\n\n",
"msg_date": "Wed, 7 Jun 2023 23:28:48 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Wed, 2023-06-07 at 23:28 +0200, Peter Eisentraut wrote:\n> On 06.06.23 21:23, Jeff Davis wrote:\n> > What about ICU? How should provider=icu locale=C.UTF-8 behave? We\n> > could:\n> \n> It should be an error.\n> \n> > a. Just pass it to the provider and see what happens (older\n> > versions of\n> > ICU would interpret it as en-US-u-va-posix; newer versions would\n> > give\n> > the root locale).\n> \n> This, but with an error instead of a warning.\n\nIf we do that, a plain \"initdb -D data\" will fail if LANG=C.UTF-8.\n\nPerhaps that's fine, but certainly some buildfarm members would\ncomplain. I'm not sure how many users would be affected.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 07 Jun 2023 15:43:14 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Sun, Apr 23, 2023 at 5:22 AM Daniel Verite <[email protected]> wrote:\n> I understand that my proposal to version C.* like any other collation\n> might be erring on the side of caution, but ignoring these collation\n> changes on at least one major OS does not feel right either.\n> Maybe we should consider doing platform-dependent checks?\n\nHmm, OK let's explore that. What could we do that would be helpful\nhere, without affecting users of the \"true\" C.UTF-8 for the rest of\ntime? This is a Debian (+ downstream distro) only problem as far as\nwe know so far, and only for Debian 11 and older. Debian 12 was just\nreleased the other day as the new stable, so the window of opportunity\nto actually help anyone with our actions in this area is now beginning\nto close, because we're really talking about new databases initdb'd\nand new COLLATIONs CREATEd on Debian old stable after our next\n(August) release. The window may be much wider for long term Ubuntu\nreleases. You're right that we could do something platform-specific\nto help with that: we could change that code so that it captures the\nversion for C.* #if __GLIBC_MAJOR__ == 2 && __GLIBC_MINOR__ < 35 (or\nwe could parse the string returned by the runtime version function).\nI don't recall immediately what our warning code does if it sees a\nchange from versioned to not versioned, but we could make sure it does\nwhat you want here. That way we wouldn't burden all future users of\nC.* with version warnings (because it'll be empty), but we'd catch\nthose doing Debian 11 -> 12 upgrades, and whatever Ubuntu upgrades\nthat corresponds to, etc. Is it worth it?\n\n\n",
"msg_date": "Thu, 15 Jun 2023 19:15:08 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Thu, 2023-06-15 at 19:15 +1200, Thomas Munro wrote:\n> Hmm, OK let's explore that. What could we do that would be helpful\n> here, without affecting users of the \"true\" C.UTF-8 for the rest of\n> time?\n\nWhere is the \"true\" C.UTF-8 defined?\n\nI assume you mean that the collation order can't (shouldn't, anyway)\nchange. But what about the ctype (upper/lower/initcap) behavior? Is\nthat also locked down for all time, or could it change if some new\nunicode characters are added?\n\nWould it be correct to interpret LC_COLLATE=C.UTF-8 as LC_COLLATE=C,\nbut leave LC_CTYPE=C.UTF-8 as-is?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 16 Jun 2023 15:03:34 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Sat, Jun 17, 2023 at 10:03 AM Jeff Davis <[email protected]> wrote:\n> On Thu, 2023-06-15 at 19:15 +1200, Thomas Munro wrote:\n> > Hmm, OK let's explore that. What could we do that would be helpful\n> > here, without affecting users of the \"true\" C.UTF-8 for the rest of\n> > time?\n>\n> Where is the \"true\" C.UTF-8 defined?\n\nBy \"true\" I just meant glibc's official one, in contrast to the\nimposter from Debian oldstable's patches. It's not defined by any\nstandard, but we only know how to record versions for glibc, FreeBSD\nand Windows, and we know what the first two of those do for that\nlocale because they tell us (see below). For Windows, the manual's\nBNF-style description of acceptable strings doesn't appear to accept\nC.UTF-8 (but I haven't tried it).\n\n> I assume you mean that the collation order can't (shouldn't, anyway)\n> change. But what about the ctype (upper/lower/initcap) behavior? Is\n> that also locked down for all time, or could it change if some new\n> unicode characters are added?\n\nFair point. Considering that our collversion effectively functions as\na proxy for ctype version too, Daniel's patch makes a certain amount\nof sense.\n\nOur versioning is nominally based only on the collation category, not\nlocales more generally or any other category they contain (nominally,\nas in: we named it collversion, and our code and comments and\ndiscussions so far only contemplated collations in this context).\nBut, clearly, changes to underlying ctype data could also cause a\nconstraint CHECK (x ~ '[[:digit:]]') or a partial index with WHERE\n(upper(x) <> 'ẞ') to be corrupted, which I'd considered to be a\nseparate topic, but Daniel's patch would cover with the same\nmechanism. (Actually I just learned that [[:digit:]] is a bad example\non a glibc system, because they appear to have hardcoded a test for\n[0-9] into their iswdigit_l() implementation, but FreeBSD gives the\nUnicode answer, which is subject to change, and other classes may work\nbetter on glibc.)\n\n> Would it be correct to interpret LC_COLLATE=C.UTF-8 as LC_COLLATE=C,\n> but leave LC_CTYPE=C.UTF-8 as-is?\n\nYes. The basic idea, at least for these two OSes, is that every\ncategory behaves as if set to C, except LC_CTYPE. For implementation\nreasons the glibc people don't quite describe it that way[1]: for\nLC_COLLATE, they decode to codepoints first and then compare those\nusing a new codepath they had to write for release 2.35, while FreeBSD\nskips that useless step and compares raw UTF-8 bytes like\nLC_COLLATE=C[2]. Which is the same, as RFC 3692 tells us:\n\n o The byte-value lexicographic sorting order of UTF-8 strings is the\n same as if ordered by character numbers. Of course this is of\n limited interest since a sort order based on character numbers is\n almost never culturally valid.\n\nIt is interesting to note that LC_COLLATE=C, LC_CTYPE=C.UTF-8 is\nequivalent, but would not get version warnings with Daniel's patch,\nrevealing that it's only a proxy. But recording ctype version\nseparately would be excessive.\n\nFor completeness, Solaris also has C.UTF-8. I can't read about what\nit does, the release notes are behind a signup thing. *shrug* I\ncan't find any other systems that have it.\n\n[1] https://sourceware.org/glibc/wiki/Proposals/C.UTF-8\n[2] https://reviews.freebsd.org/D17833\n\n\n",
"msg_date": "Sat, 17 Jun 2023 17:54:35 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Sat, 2023-06-17 at 17:54 +1200, Thomas Munro wrote:\n> \n> > Would it be correct to interpret LC_COLLATE=C.UTF-8 as\n> > LC_COLLATE=C,\n> > but leave LC_CTYPE=C.UTF-8 as-is?\n> \n> Yes. The basic idea, at least for these two OSes, is that every\n> category behaves as if set to C, except LC_CTYPE.\n\nIf that's true, and we version C.UTF-8, then users could still get the\nbehavior they want, a stable collation order, and benefit from the\noptimized code path by setting LC_COLLATE=C and LC_CTYPE=C.UTF-8.\n\nThe only caveat is to be careful with things that depend on ctype in\nindexes and constraints. While still a problem, it's a smaller problem\nthan unversioned collation. We should think a little more about solving\nit, because I think there's a strong case to be made that a default\ncollation of C and a database ctype of something else is a good\ncombination (it makes less sense for a case-insensitive collation, but\nthose aren't allowed as a default collation).\n\nIn any case, we're better off following the rule \"version anything that\ngoes to any external provider, period\". And by \"version\", I really mean\na best effort, because we don't always have great information, but I\nthink it's better to record what we do have than not. We have just seen\ntoo many examples of weird behavior. On top of that, it's simply\ninconsistent to assume that C=C.UTF-8 for collation version, but not\nfor the collation implementation.\n\nUsers might get frustrated that the collation for C.UTF-8 is versioned,\nof course. But I don't think it will affect anyone for quite some time,\nbecause existing users will have a datcollversion=NULL; so they won't\nget the warnings until they refresh the versions (or create new\ncollations/databases), and then after that upgrade libc. Right? So they\nshould have time to adjust to use LC_COLLATE=C if that's what they\nwant.\n\nAn alternative would be to define lc_collate_is_c(\"C.UTF-8\") == true\nwhile leaving lc_ctype_is_c(\"C.UTF-8\") == false and\nget_collation_actual_version(\"C.UTF-8\") == NULL. In that case we would\nnot be passing it to an external provider, so we don't have to version\nit. But that might be a little too magical and I'm not inclined to do\nthat.\n\nAnother alternative would be to implement C.UTF-8 internally according\nto the \"true\" semantics, if they are truly simple and well-defined and\nstable. But I don't think ctype=C.UTF-8 is actually stable because new\ncharacters can be added, right?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 19 Jun 2023 11:47:56 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "\tThomas Munro wrote:\n\n> What could we do that would be helpful here, without affecting users\n> of the \"true\" C.UTF-8 for the rest of time? This is a Debian (+\n> downstream distro) only problem as far as we know so far, and only\n> for Debian 11 and older.\n\nIt seems to include RedHat-based distros as well.\n\nAccording to https://bugzilla.redhat.com/show_bug.cgi?id=902094\nC.utf8 was added in 2015 and backported down to Fedora 22.\nRHEL8 / CentOS 8 / Rocky8 provide glibc 2.28 with a C.utf8\nlocale. We can reasonably suspect that they've been using the same\nkind of patches as Debian before version 12, with not all codepoints\nbeing sorted bytewise.\n\nRHEL9 comes with glibc 2.34 according to distrowatch [1] and the\nannouncement [2], so presumably it also lacks the \"real\" C.utf8\nwith bytewise sorting that glibc 2.35 upstream added.\n\n\n[1] https://distrowatch.com/table.php?distribution=redhat\n[2]\nhttps://developers.redhat.com/articles/2022/05/18/whats-new-red-hat-enterprise-linux-9\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 21 Jun 2023 19:07:14 +0200",
"msg_from": "\"Daniel Verite\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 6:48 AM Jeff Davis <[email protected]> wrote:\n> On Sat, 2023-06-17 at 17:54 +1200, Thomas Munro wrote:\n> > > Would it be correct to interpret LC_COLLATE=C.UTF-8 as\n> > > LC_COLLATE=C,\n> > > but leave LC_CTYPE=C.UTF-8 as-is?\n> >\n> > Yes. The basic idea, at least for these two OSes, is that every\n> > category behaves as if set to C, except LC_CTYPE.\n>\n> If that's true, and we version C.UTF-8, then users could still get the\n> behavior they want, a stable collation order, and benefit from the\n> optimized code path by setting LC_COLLATE=C and LC_CTYPE=C.UTF-8.\n>\n> The only caveat is to be careful with things that depend on ctype in\n> indexes and constraints. While still a problem, it's a smaller problem\n> than unversioned collation. We should think a little more about solving\n> it, because I think there's a strong case to be made that a default\n> collation of C and a database ctype of something else is a good\n> combination (it makes less sense for a case-insensitive collation, but\n> those aren't allowed as a default collation).\n>\n> In any case, we're better off following the rule \"version anything that\n> goes to any external provider, period\". And by \"version\", I really mean\n> a best effort, because we don't always have great information, but I\n> think it's better to record what we do have than not. We have just seen\n> too many examples of weird behavior. On top of that, it's simply\n> inconsistent to assume that C=C.UTF-8 for collation version, but not\n> for the collation implementation.\n\nYeah, OK, you're convincing me. It's hard to decide because our model\nis basically wrong so it's only warning you about potential ctype\nchanges by happy coincidence, but even in respect of sort order it was\nprobably a mistake to start second-guessing what libc is doing, and\nwith that observation about the C/C.UTF-8 combination, at least an\nend-user has a way to opt in/out of this choice. I'll try to write a\nconcise commit message for Daniel's patch explaining all this and we\ncan see about squeaking it into beta2.\n\n> Use rs might get frustrated that the collation for C.UTF-8 is versioned,\n> of course. But I don't think it will affect anyone for quite some time,\n> because existing users will have a datcollversion=NULL; so they won't\n> get the warnings until they refresh the versions (or create new\n> collations/databases), and then after that upgrade libc. Right? So they\n> should have time to adjust to use LC_COLLATE=C if that's what they\n> want.\n\nYeah.\n\n> An alternative would be to define lc_collate_is_c(\"C.UTF-8\") == true\n> while leaving lc_ctype_is_c(\"C.UTF-8\") == false and\n> get_collation_actual_version(\"C.UTF-8\") == NULL. In that case we would\n> not be passing it to an external provider, so we don't have to version\n> it. But that might be a little too magical and I'm not inclined to do\n> that.\n\nAgreed, let's not do any more of that sort of thing.\n\n> Another alternative would be to implement C.UTF-8 internally according\n> to the \"true\" semantics, if they are truly simple and well-defined and\n> stable. But I don't think ctype=C.UTF-8 is actually stable because new\n> characters can be added, right?\n\nCorrect.\n\n\n",
"msg_date": "Fri, 23 Jun 2023 09:22:19 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_collation.collversion for C.UTF-8"
}
] |
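A minimal C sketch of the locale-name test discussed in the thread above: treat "C", "POSIX", and any "C.<suffix>" or "POSIX.<suffix>" spelling as the built-in C locale, and hand everything else (including a lower-case "c") to libc or ICU. The function name and its placement are illustrative assumptions, not the code that was ultimately committed.

#include <string.h>
#include <stdbool.h>

/* Should this locale name be handled by the C-locale (memcmp) code paths? */
static bool
locale_name_is_c_equivalent(const char *loc)
{
    if (strcmp(loc, "C") == 0 || strcmp(loc, "POSIX") == 0)
        return true;            /* plain C / POSIX */
    if (strncmp(loc, "C.", 2) == 0 || strncmp(loc, "POSIX.", 6) == 0)
        return true;            /* C.UTF-8, POSIX.UTF-8, and similar */
    return false;               /* anything else goes to the provider */
}

Under a rule like this, a libc or ICU collation defined with locale C.UTF-8 would never reach the provider at all; the backward-compatibility discussion in the rest of the thread is about databases that already stored such locale names.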
[
{
"msg_contents": "Hi,\n\nWe have a few different places in the code where we generate or modify\ntar headers or just read data out of them. The code in question uses\none of my less-favorite programming things: magic numbers. The offsets\nof the various fields within the tar header are just hard-coded in\neach relevant place in our code. I think we should clean that up, as\nin the attached patch.\n\nI hasten to emphasize that, while I think this is an improvement, I\ndon't think the result is particularly awesome. Even with the patch,\nsrc/port/tar.c and src/include/pgtar.h do a poor job insulating\ncallers from the details of the tar format. However, it's also not\nvery clear to me how to fix that. For instance, I thought about\nwriting a function that parses a tar header into a struct and then\nusing it in all of these places, but that seems like it would lose too\nmuch efficiency relative to the current ad-hoc coding. So for now I\ndon't have a better idea than this.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 18 Apr 2023 11:20:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "constants for tar header offsets"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> We have a few different places in the code where we generate or modify\n> tar headers or just read data out of them. The code in question uses\n> one of my less-favorite programming things: magic numbers. The offsets\n> of the various fields within the tar header are just hard-coded in\n> each relevant place in our code. I think we should clean that up, as\n> in the attached patch.\n\nGenerally +1, with a couple of additional thoughts:\n\n1. Is it worth inventing macros for the values of the file type,\nrather than just writing the comment you did?\n\n2. The header size is defined as 512 bytes, but this doesn't sum to 512:\n\n+\tTAR_OFFSET_PREFIX = 345\t\t/* 155 byte string */\n\nEither that field's length is really 167 bytes, or there's some other\nfield you didn't document. (It looks like you may have copied \"155\"\nfrom an incorrect existing comment?)\n\n> I hasten to emphasize that, while I think this is an improvement, I\n> don't think the result is particularly awesome. Even with the patch,\n> src/port/tar.c and src/include/pgtar.h do a poor job insulating\n> callers from the details of the tar format. However, it's also not\n> very clear to me how to fix that.\n\nYeah, this is adding greppability (a good thing!) but little more.\nHowever, I'm not convinced it's worth doing more. It's not like\nthis data structure will change anytime soon.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:38:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 11:38 AM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > We have a few different places in the code where we generate or modify\n> > tar headers or just read data out of them. The code in question uses\n> > one of my less-favorite programming things: magic numbers. The offsets\n> > of the various fields within the tar header are just hard-coded in\n> > each relevant place in our code. I think we should clean that up, as\n> > in the attached patch.\n>\n> Generally +1, with a couple of additional thoughts:\n>\n> 1. Is it worth inventing macros for the values of the file type,\n> rather than just writing the comment you did?\n\nMight be.\n\n> 2. The header size is defined as 512 bytes, but this doesn't sum to 512:\n>\n> + TAR_OFFSET_PREFIX = 345 /* 155 byte string */\n>\n> Either that field's length is really 167 bytes, or there's some other\n> field you didn't document. (It looks like you may have copied \"155\"\n> from an incorrect existing comment?)\n\nAccording to my research, it is neither of those, e.g. see\n\nhttps://www.subspacefield.org/~vax/tar_format.html\nhttps://www.ibm.com/docs/en/zos/2.4.0?topic=formats-tar-format-tar-archives\nhttps://wiki.osdev.org/USTAR\n\nI think that what happened is that whoever designed the original tar\nformat decided on 512 byte blocks. And the header did not take up the\nwhole block. The USTAR format is an extension of the original format\nwhich uses more of the block, but still not all of it.\n\n> Yeah, this is adding greppability (a good thing!) but little more.\n> However, I'm not convinced it's worth doing more. It's not like\n> this data structure will change anytime soon.\n\nRight. greppability is a major concern for me here, and also bug\nsurface. Right now, to use the functions in pgtar.h, you need to know\nall the header offsets as well as the format and length of each header\nfield. This centralizes constants for the header offsets, and at least\nprovides some centralized documentation of the rest. It's not great,\nthough, because it seems like there's some risk of someone writing new\ncode and getting confused about whether the length of a certain field\nis 8 or 12, for example. A thicker abstraction layer might be able to\navoid or minimize such risks better than what we have, but I don't\nreally know how to design it, whereas this seems like an obvious\nimprovement.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:53:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Apr 18, 2023 at 11:38 AM Tom Lane <[email protected]> wrote:\n>> 2. The header size is defined as 512 bytes, but this doesn't sum to 512:\n>> + TAR_OFFSET_PREFIX = 345 /* 155 byte string */\n\n> I think that what happened is that whoever designed the original tar\n> format decided on 512 byte blocks. And the header did not take up the\n> whole block. The USTAR format is an extension of the original format\n> which uses more of the block, but still not all of it.\n\nHmm, you're right: I checked the POSIX.1-2018 spec as well, and\nit agrees that the prefix field is 155 bytes long. Perhaps just\nadd another comment line indicating that 12 bytes remain unassigned?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 12:06:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 12:06 PM Tom Lane <[email protected]> wrote:\n> Hmm, you're right: I checked the POSIX.1-2018 spec as well, and\n> it agrees that the prefix field is 155 bytes long. Perhaps just\n> add another comment line indicating that 12 bytes remain unassigned?\n\nOK. Here's v2, with that change and a few others.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 18 Apr 2023 12:24:11 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> OK. Here's v2, with that change and a few others.\n\nLGTM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 12:38:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n\n> On Tue, Apr 18, 2023 at 12:06 PM Tom Lane <[email protected]> wrote:\n>> Hmm, you're right: I checked the POSIX.1-2018 spec as well, and\n>> it agrees that the prefix field is 155 bytes long. Perhaps just\n>> add another comment line indicating that 12 bytes remain unassigned?\n>\n> OK. Here's v2, with that change and a few others.\n\nIt still has magic numbers for the sizes of the fields, should those\nalso be named constants?\n\n- ilmari\n\n\n",
"msg_date": "Tue, 18 Apr 2023 17:56:43 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 12:56 PM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n> It still has magic numbers for the sizes of the fields, should those\n> also be named constants?\n\nI thought about that. It's arguable, but personally, I don't think\nit's worth it. If the concern is greppability, having constants for\nthe offsets is good enough for that. If the concern is making it\nerror-free, I think we'd be well-advised to consider bigger redesigns\nof the API. For example, we could have a function\nread_number_from_tar_header(char *pointer_to_the_start_of_the_block,\nenum which_field) and then that function could encapsulate the\nknowledge of which tar numbers are 8 bytes and which are 12 bytes.\nWriting read_number_from_tar_header(h, TAR_FIELD_CHECKSUM) seems\npotentially less error-prone than\nread_tar_number(&h[TAR_OFFSET_CHECKSUM], 8). On the other hand,\nchanging the latter to read_tar_number(&h[TAR_OFFSET_CHECKSUM],\nTAR_LENGTH_CHECKSUM) seems longer but not necessarily cleaner. So I\nfelt it didn't make sense.\n\nJust to be clear, I don't have a full vision for what a replacement\nAPI ought to look like, and I'm not sure that figuring that out is\nsomething that has to be done right this minute. I proposed this patch\nnot because it's perfect, but because it's simple. We can think of\ndoing more in the future if someone wants to devote the effort, and\nthat person might even be me, but right now it's not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Apr 2023 09:09:52 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "On Wed Apr 19, 2023 at 8:09 AM CDT, Robert Haas wrote:\n> On Tue, Apr 18, 2023 at 12:56 PM Dagfinn Ilmari Mannsåker\n> <[email protected]> wrote:\n> > It still has magic numbers for the sizes of the fields, should those\n> > also be named constants?\n>\n> I thought about that. It's arguable, but personally, I don't think\n> it's worth it. If the concern is greppability, having constants for\n> the offsets is good enough for that. If the concern is making it\n> error-free, I think we'd be well-advised to consider bigger redesigns\n> of the API. For example, we could have a function\n> read_number_from_tar_header(char *pointer_to_the_start_of_the_block,\n> enum which_field) and then that function could encapsulate the\n> knowledge of which tar numbers are 8 bytes and which are 12 bytes.\n> Writing read_number_from_tar_header(h, TAR_FIELD_CHECKSUM) seems\n> potentially less error-prone than\n> read_tar_number(&h[TAR_OFFSET_CHECKSUM], 8). On the other hand,\n> changing the latter to read_tar_number(&h[TAR_OFFSET_CHECKSUM],\n> TAR_LENGTH_CHECKSUM) seems longer but not necessarily cleaner. So I\n> felt it didn't make sense.\n>\n> Just to be clear, I don't have a full vision for what a replacement\n> API ought to look like, and I'm not sure that figuring that out is\n> something that has to be done right this minute. I proposed this patch\n> not because it's perfect, but because it's simple. We can think of\n> doing more in the future if someone wants to devote the effort, and\n> that person might even be me, but right now it's not.\n\nA new API design would be great, but for right now v2 is good enough and \nshould be committed. It is much easier to read the code with this patch \napplied.\n\nMarking as \"Ready for Committer\" since we all seem to agree that this is \nbetter than what exists.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 01 Aug 2023 10:07:10 -0500",
"msg_from": "\"Tristan Partin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: constants for tar header offsets"
},
{
"msg_contents": "On Tue, Aug 1, 2023 at 11:07 AM Tristan Partin <[email protected]> wrote:\n> A new API design would be great, but for right now v2 is good enough and\n> should be committed. It is much easier to read the code with this patch\n> applied.\n>\n> Marking as \"Ready for Committer\" since we all seem to agree that this is\n> better than what exists.\n\nThanks, committed now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Aug 2023 13:57:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: constants for tar header offsets"
}
] |
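For reference, the ustar field layout that the new constants name, per the POSIX/ustar references cited in the thread. The offsets and field lengths below are the standard ones; the identifier spellings follow the TAR_OFFSET_PREFIX example quoted above but are otherwise a sketch rather than text copied from pgtar.h.

/* Byte offsets of the fields in a 512-byte ustar header block. */
enum tarHeaderOffsetSketch
{
    TAR_OFFSET_NAME = 0,        /* 100 byte string */
    TAR_OFFSET_MODE = 100,      /* 8 byte octal */
    TAR_OFFSET_UID = 108,       /* 8 byte octal */
    TAR_OFFSET_GID = 116,       /* 8 byte octal */
    TAR_OFFSET_SIZE = 124,      /* 12 byte octal */
    TAR_OFFSET_MTIME = 136,     /* 12 byte octal */
    TAR_OFFSET_CHECKSUM = 148,  /* 8 byte octal */
    TAR_OFFSET_TYPEFLAG = 156,  /* 1 byte file type */
    TAR_OFFSET_LINKNAME = 157,  /* 100 byte string */
    TAR_OFFSET_MAGIC = 257,     /* "ustar" plus terminating zero byte */
    TAR_OFFSET_VERSION = 263,   /* 2 bytes, "00" */
    TAR_OFFSET_UNAME = 265,     /* 32 byte string */
    TAR_OFFSET_GNAME = 297,     /* 32 byte string */
    TAR_OFFSET_DEVMAJOR = 329,  /* 8 byte octal */
    TAR_OFFSET_DEVMINOR = 337,  /* 8 byte octal */
    TAR_OFFSET_PREFIX = 345     /* 155 byte string */
    /* 345 + 155 = 500; the remaining 12 bytes of the 512-byte header are unused */
};

/* Typical call shape from the thread: read_tar_number(&h[TAR_OFFSET_CHECKSUM], 8),
 * where the caller still has to know the checksum field is 8 bytes wide --
 * exactly the residual risk Robert and Ilmari discuss above. */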
[
{
"msg_contents": "Recent commits that enhanced rmgr desc routines (commits 7d8219a4 and\n1c453cfd) dealt with records that lack relevant block data (and so\nlack anything to give a more detailed summary of) by testing\n!DecodedBkpBlock.has_image -- that is the gating condition that\ndetermines if we want to (say) output a textual array representation\nof the page offset number from a given nbtree VACUUM WAL record.\nStrictly speaking, this isn't the correct gating condition to test. We\nshould be testing the *presence* of the relevant block data instead.\nWhy test an inexact proxy for the condition that we care about, when\nwe can just as easily test the precise condition we care about\ninstead?\n\nThis isn't just a theoretical issue. Currently, we won't display\ndetailed descriptions of block data whenever wal_consistency_checking\nhappens to be in use. At least for those records with relevant block\ndata available to summarize that also happen to have an FPI that the\nREDO routine isn't supposed to apply (i.e. an FPI that is included in\nthe record purely so that verifyBackupPageConsistency can verify that\nthe REDO routine produces a matching image).\n\nAttached patch fixes this bug.\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 18 Apr 2023 14:36:40 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Enhanced rmgr desc routines test !has_image, not has_data"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 02:36:40PM -0700, Peter Geoghegan wrote:\n> This isn't just a theoretical issue. Currently, we won't display\n> detailed descriptions of block data whenever wal_consistency_checking\n> happens to be in use. At least for those records with relevant block\n> data available to summarize that also happen to have an FPI that the\n> REDO routine isn't supposed to apply (i.e. an FPI that is included in\n> the record purely so that verifyBackupPageConsistency can verify that\n> the REDO routine produces a matching image).\n\nYeah, I agree that your suggestion is more useful for debugging when a\nrecord includes both a block image and some data associated to it.\nSo, +1.\n--\nMichael",
"msg_date": "Wed, 19 Apr 2023 15:10:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Enhanced rmgr desc routines test !has_image, not has_data"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 11:10 PM Michael Paquier <[email protected]> wrote:\n> Yeah, I agree that your suggestion is more useful for debugging when a\n> record includes both a block image and some data associated to it.\n> So, +1.\n\nOkay, pushed that fix just now.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:43:04 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Enhanced rmgr desc routines test !has_image, not has_data"
}
] |
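A self-contained sketch of the gating change described above, using only the two DecodedBkpBlock flags named in the thread; the struct and function names here are stand-ins, not the actual rmgr desc code.

#include <stdbool.h>

/* Reduced stand-in for the two DecodedBkpBlock fields that matter here. */
typedef struct
{
    bool    has_image;          /* an FPI is attached, possibly only for wal_consistency_checking */
    bool    has_data;           /* per-block data (e.g. an offset number array) is attached */
} BlockRefSketch;

/* Old gating: skip the detailed summary whenever any FPI is present. */
static bool
show_details_old(const BlockRefSketch *blk)
{
    return !blk->has_image;
}

/* Fixed gating: show the summary whenever there is block data to summarize. */
static bool
show_details_fixed(const BlockRefSketch *blk)
{
    return blk->has_data;
}

The practical difference shows up when wal_consistency_checking attaches an FPI that REDO will not apply: has_image is true, so the old test suppressed the description even though the block data was present and summarizable.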
[
{
"msg_contents": "The wal size related gucs use the MB unit, so we should just use\nINT_MAX instead of MAX_KILOBYTES as the max value.\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Wed, 19 Apr 2023 11:26:26 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use INT_MAX for wal size related gucs's max value"
},
{
"msg_contents": "Junwang Zhao <[email protected]> writes:\n> The wal size related gucs use the MB unit, so we should just use\n> INT_MAX instead of MAX_KILOBYTES as the max value.\n\nThe point of MAX_KILOBYTES is to avoid overflow when the value\nis multiplied by 1kB. It does seem like that might not be\nappropriate for these values, but that doesn't mean that we can\nblithely go to INT_MAX. Have you chased down how they are used?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 23:33:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use INT_MAX for wal size related gucs's max value"
},
{
"msg_contents": "These gucs are always used with ConvertToXSegs, to calculate the count of\nwal segments(see the following code snip), and wal_segment_size can be\nconfigured by initdb as a value of a power of 2 between 1 and 1024 (megabytes),\nso I think INT_MAX should be safe here.\n\n/*\n* Convert values of GUCs measured in megabytes to equiv. segment count.\n* Rounds down.\n*/\n#define ConvertToXSegs(x, segsize) XLogMBVarToSegs((x), (segsize))\n\n/*\n* Convert values of GUCs measured in megabytes to equiv. segment count.\n* Rounds down.\n*/\n#define XLogMBVarToSegs(mbvar, wal_segsz_bytes) \\\n((mbvar) / ((wal_segsz_bytes) / (1024 * 1024)))\n\nOn Wed, Apr 19, 2023 at 11:33 AM Tom Lane <[email protected]> wrote:\n>\n> Junwang Zhao <[email protected]> writes:\n> > The wal size related gucs use the MB unit, so we should just use\n> > INT_MAX instead of MAX_KILOBYTES as the max value.\n>\n> The point of MAX_KILOBYTES is to avoid overflow when the value\n> is multiplied by 1kB. It does seem like that might not be\n> appropriate for these values, but that doesn't mean that we can\n> blithely go to INT_MAX. Have you chased down how they are used?\n>\n> regards, tom lane\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 19 Apr 2023 11:51:19 +0800",
"msg_from": "Junwang Zhao <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Use INT_MAX for wal size related gucs's max value"
}
] |
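A small, self-contained illustration of the overflow question raised in the thread. MAX_KILOBYTES exists so that a kilobyte-valued GUC can still be multiplied by 1024 without overflowing; the WAL size GUCs are stored in megabytes and, per the XLogMBVarToSegs macro quoted above, are only ever divided by the segment size. The definition of MAX_KILOBYTES below is illustrative (the real guc.h value is platform-dependent), and whether any other code path multiplies the megabyte value is exactly the question Tom raised.

#include <stdio.h>
#include <limits.h>

#define MAX_KILOBYTES   (INT_MAX / 1024)    /* illustrative; guards kB GUCs that become bytes */

/* MB-to-segment-count conversion quoted in the thread (rounds down). */
#define XLogMBVarToSegs(mbvar, wal_segsz_bytes) \
    ((mbvar) / ((wal_segsz_bytes) / (1024 * 1024)))

int
main(void)
{
    int     max_wal_size_mb = INT_MAX;              /* hypothetical new upper limit */
    int     wal_segment_size = 16 * 1024 * 1024;    /* default 16MB WAL segments */

    /* Division only, so even INT_MAX megabytes cannot overflow an int here. */
    printf("segment count: %d\n", XLogMBVarToSegs(max_wal_size_mb, wal_segment_size));
    return 0;
}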
[
{
"msg_contents": "Hello there, \n\nA few years ago, someone reported a bug (#13489) about attndims, which\nreturned a false value on an array on a table created by CREATE TABLE\n<cloned_table> (LIKE <original_table> INCLUDING ALL), \n\nexample:\n\nCREATE TABLE test (data integer, data_array integer[];\nCREATE TABLE test_clone (LIKE test INCLUDING ALL);\n\nSELECT attndims FROM pg_attribute WHERE attrelid = 'test'::regclass AND\nattname = 'data_array';\n\nreturns 1\n\nbut\n\nSELECT attndims FROM pg_attribute WHERE attrelid = 'test_clone'::regclass AND\nattname = 'data_array';\n\nreturns 0\n\nHowever, according to the documentation https://www.postgresql.org/docs/15/catalog-pg-attribute.html,\nsince data_array is an array I expected the returned value should be\ngreater than 0\n\nThanks\n\n(tested on PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1))\n\n\n\n",
"msg_date": "Wed, 19 Apr 2023 11:35:29 +0200",
"msg_from": "Bruno Bonfils <[email protected]>",
"msg_from_op": true,
"msg_subject": "About #13489"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 11:35:29AM +0200, Bruno Bonfils wrote:\n> Hello there, \n> \n> A few years ago, someone reported a bug (#13489) about attndims, which\n> returned a false value on an array on a table created by CREATE TABLE\n> <cloned_table> (LIKE <original_table> INCLUDING ALL), \n> \n> example:\n> \n> CREATE TABLE test (data integer, data_array integer[];\n> CREATE TABLE test_clone (LIKE test INCLUDING ALL);\n> \n> SELECT attndims FROM pg_attribute WHERE attrelid = 'test'::regclass AND\n> attname = 'data_array';\n> \n> returns 1\n> \n> but\n> \n> SELECT attndims FROM pg_attribute WHERE attrelid = 'test_clone'::regclass AND\n> attname = 'data_array';\n> \n> returns 0\n> \n> However, according to the documentation https://www.postgresql.org/docs/15/catalog-pg-attribute.html,\n> since data_array is an array I expected the returned value should be\n> greater than 0\n\nI did a lot of research on this and found out a few things. First,\nCREATE TABLE is a complex command that gets its column names, types,\ntype modifiers, and array dimensions from a a variety of places:\n\n* Specified literally\n* Gotten from LIKE\n* Gotten from queries\n\nWhat you found is that we don't pass the array dimensions properly with\nLIKE. As the code is written, it can only get dimensions that are\nliterally specified in the query. What I was able to do in the attached\npatch is to pass the array dimensions to the ColumnDef structure, which\nis picked up by LIKE, and optionally use that if no dimensions are\nspecified in the query.\n\nI am not sure how I feel about the patch. We don't seem to record array\ndimensionality well --- we don't record the dimension constants and we\ndon't enforce the dimensionality either, and psql doesn't even show the\ndimensionality we do record in pg_attribute, which looks like another\nbug. (I think the SQL function format_type() would need to pass in the\narray dimensionality to fix this):\n\n\tCREATE TABLE test (data integer, data_array integer[5][5]);\n\n\tCREATE TABLE test_clone (LIKE test INCLUDING ALL);\n\n\tSELECT attndims FROM pg_attribute WHERE attrelid = 'test'::regclass AND\n\tattname = 'data_array';\n\t attndims\n\t----------\n\t 2\n\t\n\tSELECT attndims FROM pg_attribute WHERE attrelid = 'test_clone'::regclass AND\n\tattname = 'data_array';\n\t attndims\n\t----------\n-->\t 2\n\t\n\tINSERT INTO test VALUES (1, '{1}');\n\tINSERT INTO test VALUES (1, '{{1},{2}}');\n\tINSERT INTO test VALUES (1, '{{1},{2},{3}}');\n\n\t\\d test\n\t Table \"public.test\"\n\t Column | Type | Collation | Nullable | Default\n\t------------+-----------+-----------+----------+---------\n\t data | integer | | |\n-->\t data_array | integer[] | | |\n\t\n\tSELECT * FROM test;\n\t data | data_array\n\t------+---------------\n-->\t 1 | {1}\n\t 1 | {{1},{2}}\n-->\t 1 | {{1},{2},{3}}\n\nIs it worth applying this patch and improving psql? Are there other\nmissing pieces that could be easily improved.\n\nHowever, we already document that array dimensions are for documentation\npurposes only, so the fact we don't update pg_attribute, and don't\ndisplay the dimensions properly, could be considered acceptable:\n\n\thttps://www.postgresql.org/docs/devel/arrays.html#ARRAYS-DECLARATION\n\t\n\tThe current implementation does not enforce the declared number of\n\tdimensions either. Arrays of a particular element type are all\n\tconsidered to be of the same type, regardless of size or number of\n\tdimensions. 
So, declaring the array size or number of dimensions in\n\tCREATE TABLE is simply documentation; it does not affect run-time\n\tbehavior.\n\nI knew we only considered the array dimension sizes to be documentation\n_in_ the query, but I thought we at least properly displayed the number\nof dimensions specified at creation when we described the table in psql,\nbut it seems we don't do that either.\n\nA big question is why we even bother to record the dimensions in\npg_attribute if is not accurate for LIKE and not displayed to the user\nin a meaningful way by psql.\n\nI think another big question is whether the structure we are using to\nsupply the column information to BuildDescForRelation is optimal. The\ntypmod that has to be found for CREATE TABLE uses:\n\n typenameTypeIdAndMod(NULL, entry->typeName, &atttypid, &atttypmod);\n\nwhich calls typenameTypeIdAndMod() -> typenameType() -> LookupTypeName()\n-> LookupTypeNameExtended() -> typenameTypeMod(). This seems very\ncomplicated because the ColumnDef, at least in the LIKE case, already\nhas the value. Is there a need to revisit how we handle type such\ncases?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Fri, 8 Sep 2023 17:10:51 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About #13489, array dimensions and CREATE TABLE ... LIKE"
},
{
"msg_contents": "On Fri, Sep 8, 2023 at 05:10:51PM -0400, Bruce Momjian wrote:\n> I knew we only considered the array dimension sizes to be documentation\n> _in_ the query, but I thought we at least properly displayed the number\n> of dimensions specified at creation when we described the table in psql,\n> but it seems we don't do that either.\n> \n> A big question is why we even bother to record the dimensions in\n> pg_attribute if is not accurate for LIKE and not displayed to the user\n> in a meaningful way by psql.\n> \n> I think another big question is whether the structure we are using to\n> supply the column information to BuildDescForRelation is optimal. The\n> typmod that has to be found for CREATE TABLE uses:\n> \n> typenameTypeIdAndMod(NULL, entry->typeName, &atttypid, &atttypmod);\n> \n> which calls typenameTypeIdAndMod() -> typenameType() -> LookupTypeName()\n> -> LookupTypeNameExtended() -> typenameTypeMod(). This seems very\n> complicated because the ColumnDef, at least in the LIKE case, already\n> has the value. Is there a need to revisit how we handle type such\n> cases?\n\n(Bug report moved to hackers, previous bug reporters added CCs.)\n\nI looked at this some more and found more fundamental problems. We have\npg_attribute.attndims which does record the array dimensionality:\n\n\tCREATE TABLE test (data integer, data_array integer[5][5]);\n\n\tSELECT attndims\n\tFROM pg_attribute\n\tWHERE attrelid = 'test'::regclass AND\n\t attname = 'data_array';\n\t attndims\n\t----------\n\t 2\n\nThe first new problem I found is that we don't dump the dimensionality:\n\n\t$ pg_dump test\n\t...\n\tCREATE TABLE public.test (\n\t data integer,\n-->\t data_array integer[]\n\t);\n\nand psql doesn't display the dimensionality:\n\n\t\\d test\n\t Table \"public.test\"\n\t Column | Type | Collation | Nullable | Default\n\t------------+-----------+-----------+----------+---------\n\t data | integer | | |\n-->\t data_array | integer[] | | |\n\nA report from 2015 reports that CREATE TABLE ... LIKE and CREATE TABLE\n... AS doesn't propagate the dimensionality:\n\n\thttps://www.postgresql.org/message-id/flat/20150707072942.1186.98151%40wrigleys.postgresql.org\n\nand this thread from 2018 supplied a fix:\n\n\thttps://www.postgresql.org/message-id/flat/7862e882-8b9a-0c8e-4a38-40ad374d3634%40brandwatch.com\n\nthough in my testing it only fixes LIKE, not CREATE TABLE ... AS. This\nreport from April of this year also complains about LIKE:\n\n\thttps://www.postgresql.org/message-id/flat/ZD%2B14YZ4IUue8Rhi%40gendo.asyd.net\n\nHere is the output from master for LIKE:\n\n\tCREATE TABLE test2 (LIKE test);\n\n\tSELECT attndims\n\tFROM pg_attribute\n\tWHERE attrelid = 'test2'::regclass AND\n\t attname = 'data_array';\n\t attndims\n\t----------\n-->\t 0\n\nand this is the output for CREATE TABLE ... AS:\n\n\tCREATE TABLE test3 AS SELECT * FROM test;\n\t\n\tSELECT attndims\n\tFROM pg_attribute\n\tWHERE attrelid = 'test3'::regclass AND\n\t attname = 'data_array';\n\t attndims\n\t----------\n-->\t 0\n\nThe attached patch fixes pg_dump:\n\n\t$ pg_dump test\n\t...\n\tCREATE TABLE public.test2 (\n\t data integer,\n-->\t data_array integer[][]\n\t);\n\nIt uses repeat() at the SQL level rather then modifying format_type() at\nthe SQL or C level. It seems format_type() is mostly used to get the\ntype name, e.g. int4[], rather than the column definition so I added\nbrackets at the call site. 
I used a similar fix for psql output:\n\n\t\\d test\n\t Table \"public.test\"\n\t Column | Type | Collation | Nullable | Default\n\t------------+-------------+-----------+----------+---------\n\t data | integer | | |\n-->\t data_array | integer[][] | | |\n\n\nThe 2018 patch from Alexey Bashtanov fixes the LIKE case:\n\n\tCREATE TABLE test2 (LIKE test);\n\n\t\\d test2\n\t\t\t Table \"public.test2\"\n\t Column |\t Type\t | Collation | Nullable | Default\n\t------------+-------------+-----------+----------+---------\n\t data\t | integer\t |\t |\t\t |\n-->\t data_array | integer[][] |\t |\t\t |\n\nIt does not fix CREATE TABLE ... AS because it looks like fixing that\nwould require adding an ndims column to Var for WITH NO DATA and adding\nndims to TupleDesc for WITH DATA. I am not sure if that overhead is\nwarrented to fix this item. I have added C comments where they should\nbe added.\n\nI would like to apply this patch to master because I think our current\ndeficiencies in this area are unacceptable. An alternate approach would\nbe to remove pg_attribute.attndims so we don't even try to preserve \ndimensionality.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Mon, 20 Nov 2023 20:33:50 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About #13489, array dimensions and CREATE TABLE ... LIKE"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I would like to apply this patch to master because I think our current\n> deficiencies in this area are unacceptable.\n\nI do not think this is a particularly good idea, because it creates\nthe impression in a couple of places that we track this data, when\nwe do not really do so to any meaningful extent.\n\n> An alternate approach would\n> be to remove pg_attribute.attndims so we don't even try to preserve \n> dimensionality.\n\nI could get behind that, perhaps. It looks like we're not using the\nfield in any meaningful way, and we could simplify TupleDescInitEntry\nand perhaps some other APIs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 21:04:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About #13489, array dimensions and CREATE TABLE ... LIKE"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 09:04:21PM -0500, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I would like to apply this patch to master because I think our current\n> > deficiencies in this area are unacceptable.\n> \n> I do not think this is a particularly good idea, because it creates\n> the impression in a couple of places that we track this data, when\n> we do not really do so to any meaningful extent.\n\nOkay, I thought we could get by without tracking the CREATE TABLE AS\ncase, but it is inconsistent. My patch just makes it less\ninconsistent.\n\n> > An alternate approach would\n> > be to remove pg_attribute.attndims so we don't even try to preserve \n> > dimensionality.\n> \n> I could get behind that, perhaps. It looks like we're not using the\n> field in any meaningful way, and we could simplify TupleDescInitEntry\n> and perhaps some other APIs.\n\nSo should I work on that patch or do you want to try? I think we should\ndo something.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Mon, 20 Nov 2023 21:07:36 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About #13489, array dimensions and CREATE TABLE ... LIKE"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> On Mon, Nov 20, 2023 at 09:04:21PM -0500, Tom Lane wrote:\n>> Bruce Momjian <[email protected]> writes:\n>>> An alternate approach would\n>>> be to remove pg_attribute.attndims so we don't even try to preserve \n>>> dimensionality.\n\n>> I could get behind that, perhaps. It looks like we're not using the\n>> field in any meaningful way, and we could simplify TupleDescInitEntry\n>> and perhaps some other APIs.\n\n> So should I work on that patch or do you want to try? I think we should\n> do something.\n\nLet's wait for some other opinions, first ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Nov 2023 21:13:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About #13489, array dimensions and CREATE TABLE ... LIKE"
},
{
"msg_contents": "On Mon, 2023-11-20 at 21:13 -0500, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > On Mon, Nov 20, 2023 at 09:04:21PM -0500, Tom Lane wrote:\n> > > Bruce Momjian <[email protected]> writes:\n> > > > An alternate approach would\n> > > > be to remove pg_attribute.attndims so we don't even try to preserve \n> > > > dimensionality.\n> \n> > > I could get behind that, perhaps. It looks like we're not using the\n> > > field in any meaningful way, and we could simplify TupleDescInitEntry\n> > > and perhaps some other APIs.\n> \n> > So should I work on that patch or do you want to try? I think we should\n> > do something.\n> \n> Let's wait for some other opinions, first ...\n\nLooking at the code, I get the impression that we wouldn't lose anything\nwithout \"pg_attribute.attndims\", so +1 for removing it.\n\nThis would call for some documentation. We should remove most of the\ndocumentation about the non-existing difference between declaring a column\n\"integer[]\", \"integer[][]\" or \"integer[3][3]\" and just describe the first\nvariant in detail, perhaps mentioning that the other notations are\naccepted for backward compatibility.\n\nI also think that it would be helpful to emphasize that while dimensionality\ndoes not matter to a column definition, it matters for individual array values.\nPerhaps it would make sense to recommend a check constraint if one wants\nto make sure that an array column should contain only a certain kind of array.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 21 Nov 2023 09:33:18 +0100",
"msg_from": "Laurenz Albe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About #13489, array dimensions and CREATE TABLE ... LIKE"
},
{
"msg_contents": "On Tue, Nov 21, 2023 at 09:33:18AM +0100, Laurenz Albe wrote:\n> On Mon, 2023-11-20 at 21:13 -0500, Tom Lane wrote:\n> > Bruce Momjian <[email protected]> writes:\n> > > On Mon, Nov 20, 2023 at 09:04:21PM -0500, Tom Lane wrote:\n> > > > Bruce Momjian <[email protected]> writes:\n> > > > > An alternate approach would\n> > > > > be to remove pg_attribute.attndims so we don't even try to preserve \n> > > > > dimensionality.\n> > \n> > > > I could get behind that, perhaps. It looks like we're not using the\n> > > > field in any meaningful way, and we could simplify TupleDescInitEntry\n> > > > and perhaps some other APIs.\n> > \n> > > So should I work on that patch or do you want to try? I think we should\n> > > do something.\n> > \n> > Let's wait for some other opinions, first ...\n> \n> Looking at the code, I get the impression that we wouldn't lose anything\n> without \"pg_attribute.attndims\", so +1 for removing it.\n> \n> This would call for some documentation. We should remove most of the\n> documentation about the non-existing difference between declaring a column\n> \"integer[]\", \"integer[][]\" or \"integer[3][3]\" and just describe the first\n> variant in detail, perhaps mentioning that the other notations are\n> accepted for backward compatibility.\n\nAgreed, I see:\n\n\thttps://www.postgresql.org/docs/current/arrays.html\n\n\tHowever, the current implementation ignores any supplied array\n\tsize limits, i.e., the behavior is the same as for arrays of\n\tunspecified length.\n\n\tThe current implementation does not enforce the declared number\n\tof dimensions either.\n\nSo both size limits and dimensions would be ignored.\n\n> I also think that it would be helpful to emphasize that while dimensionality\n> does not matter to a column definition, it matters for individual array values.\n> Perhaps it would make sense to recommend a check constraint if one wants\n> to make sure that an array column should contain only a certain kind of array.\n\nThe CHECK constraint idea is very good.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 21 Nov 2023 09:20:43 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About #13489, array dimensions and CREATE TABLE ... LIKE"
}
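To illustrate the CHECK constraint idea discussed above (a sketch only, since declared dimensions and sizes are not enforced), a column meant to hold only two-dimensional arrays of at most 5x5 could be declared as:

	CREATE TABLE test_checked (
	    data_array integer[]
	        CHECK (array_ndims(data_array) = 2 AND
	               array_length(data_array, 1) <= 5 AND
	               array_length(data_array, 2) <= 5)
	);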
] |
[
{
"msg_contents": "Please find below simple repro for CacheMemoryContext memory leak\n\ncreate type two_int4s as (f1 int4, f2 int4);\ncreate type two_int8s as (q1 int8, q2 int8);\n\nPLpgSQL example:\ndo $$ declare c4 two_int4s; c8 two_int8s;\nbegin\n c8 := row(1,2);\n c4 := c8;\nend$$;\n\nExecuting above plpgsql in same memory session we observe\ncachememorycontext goes on increasing as below which is captured using\nMemoryContextStats\n\n1590:2023-04-19 13:31:54.336 IST [31615] LOG: Grand total: 1213440 bytes\nin 153 blocks; 496000 free (53 chunks); 717440 used\n1687:2023-04-19 13:31:54.348 IST [31615] LOG: Grand total: 1220608 bytes\nin 160 blocks; 497160 free (53 chunks); 723448 used\n1781:2023-04-19 13:31:59.919 IST [31615] LOG: Grand total: 1213440 bytes\nin 154 blocks; 494168 free (45 chunks); 719272 used\n1880:2023-04-19 13:31:59.924 IST [31615] LOG: Grand total: 1220608 bytes\nin 161 blocks; 496128 free (45 chunks); 724480 used\n1976:2023-04-19 13:32:29.977 IST [31615] LOG: Grand total: 1215488 bytes\nin 156 blocks; 495144 free (45 chunks); 720344 used\n2077:2023-04-19 13:32:29.978 IST [31615] LOG: Grand total: 1222656 bytes\nin 163 blocks; 497104 free (45 chunks); 725552 used\n\n\n\nRoot cause:\nMemory leak is in function \"GetCachedExpression\" which creates context\nunder CacheMemoryContext. During each execution in the same session memory\ngets allocated and it is never freed resulting in memory leak.\n\nDuring anonymous block execution in the function \"plpgsql_estate_setup\", a\nlocal casting hash table gets created in SPI memory context. When hash\ntable look up is performed in \"get_cast_hashenty\" function if entry is no\npresent , memory is allocated in CacheMemoryContext in function\n\"GetCachedExpression\".At the end of proc execution SPI memory context is\ndeleted and hence local hash table gets deleted, but still entries remain\nin Cachemeorycontext.\n\nDuring the next execution in the same session, a brand new hash table is\ncreated and if entry is not present memory will be repeatedly assigned in\nCacheMemoryContext.\n\n\nSolution:\n\nPlease find attached(memoryleakfix.patch) to this email. We need to keep\ntrack of the local casting hash table or session wide cast hash table which\ngets created in the function \"plpgsql_estate_setup\". 
We need to allocate\nin CacheMemoryContext only for the session-wide cast hash table; for the local\ncast hash table, memory will be allocated from the SPI context.\n\nPlease find below the CacheMemoryContext stats with the fix:\n\n\n3316:2023-04-19 14:07:23.391 IST [38021] LOG:  Grand total: 1210368 bytes\nin 151 blocks; 492704 free (45 chunks); 717664 used\n3411:2023-04-19 14:07:23.391 IST [38021] LOG:  Grand total: 1216512 bytes\nin 157 blocks; 494176 free (45 chunks); 722336 used\n3502:2023-04-19 14:07:23.932 IST [38021] LOG:  Grand total: 1210368 bytes\nin 151 blocks; 492704 free (45 chunks); 717664 used\n3597:2023-04-19 14:07:23.932 IST [38021] LOG:  Grand total: 1216512 bytes\nin 157 blocks; 494176 free (45 chunks); 722336 used\n3688:2023-04-19 14:07:24.464 IST [38021] LOG:  Grand total: 1210368 bytes\nin 151 blocks; 492704 free (45 chunks); 717664 used\n3783:2023-04-19 14:07:24.464 IST [38021] LOG:  Grand total: 1216512 bytes\nin 157 blocks; 494176 free (45 chunks); 722336 used\n3874:2023-04-19 14:07:25.012 IST [38021] LOG:  Grand total: 1210368 bytes\nin 151 blocks; 492704 free (45 chunks); 717664 used\n3969:2023-04-19 14:07:25.012 IST [38021] LOG:  Grand total: 1216512 bytes\nin 157 blocks; 494176 free (45 chunks); 722336 used\n4060:2023-04-19 14:07:25.552 IST [38021] LOG:  Grand total: 1210368 bytes\nin 151 blocks; 492704 free (45 chunks); 717664 used\n4155:2023-04-19 14:07:25.552 IST [38021] LOG:  Grand total: 1216512 bytes\nin 157 blocks; 494176 free (45 chunks); 722336 used\n\n\nThanks & Best Regards,\nAjit",
"msg_date": "Wed, 19 Apr 2023 17:22:32 +0530",
"msg_from": "Ajit Awekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory leak in CachememoryContext"
},
{
"msg_contents": "Ajit Awekar <[email protected]> writes:\n> Please find below simple repro for CacheMemoryContext memory leak\n\nHm, yeah, reproduced here.\n\n> During anonymous block execution in the function \"plpgsql_estate_setup\", a\n> local casting hash table gets created in SPI memory context. When hash\n> table look up is performed in \"get_cast_hashenty\" function if entry is no\n> present , memory is allocated in CacheMemoryContext in function\n> \"GetCachedExpression\".At the end of proc execution SPI memory context is\n> deleted and hence local hash table gets deleted, but still entries remain\n> in Cachemeorycontext.\n\nYeah, it's from using just a short-lived cast hash table for DO blocks.\nI think that was okay when it was written, but when we wheeled the\nCachedExpression machinery undeneath it, we created a problem.\n\n> Please find attached(memoryleakfix.patch) to this email.\n\nI don't think this fix is acceptable at all. A minor problem is that\nwe can't change the API of GetCachedExpression() in stable branches,\nbecause extensions may be using it. We could work around that by\nmaking it a wrapper function. But the big problem is that this patch\ndestroys the reason for using a CachedExpression in the first place:\nbecause you aren't linking it into cached_expression_list, the plancache\nwill not detect events that should obsolete the expression. The\ntest cases added by 04fe805a1 only cover regular functions, but one\nthat did domain constraint DDL within a DO block would expose the\nshortcoming.\n\nA possible answer is to split plpgsql's cast hash table into two parts.\nThe lower-level part would contain the hash key, cast_expr and\ncast_cexpr fields, and would have session lifespan and be used by\nboth DO blocks and regular functions. In this way we'd not leak\nCachedExpressions. The upper-level hash table would contain the\nhash key, a link to the relevant lower-level entry, and the\ncast_exprstate, cast_in_use, cast_lxid fields. There would be a\nsession-lifespan one of these plus one for each DO block, so that\nmanagement of the ExprStates still works as it does now for DO blocks.\n\nThis could be factored in other ways, and maybe another way would be\nsimpler. But the idea is that DO blocks should use persistent\nCachedExpressions even though their cast_exprstates are transient.\n\nI've not tried to code this, do you want to?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Apr 2023 12:43:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
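A minimal sketch of the kind of case mentioned above, where domain-constraint DDL must obsolete a cast expression cached by an earlier DO block (names here are purely illustrative and not taken from any patch in this thread):

	create domain pos_int as int4 check (value > 0);

	do $$ declare d pos_int; begin d := 10::int8; end $$;  -- caches the int8 -> pos_int cast

	alter domain pos_int drop constraint pos_int_check;    -- DDL that must invalidate the cached cast

	do $$ declare d pos_int; begin d := -1::int8; end $$;  -- must reflect the dropped constraint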
{
"msg_contents": "Hi Tom,\n\nThanks a lot for your possible approach for a solution.\nI have implemented the approach by splitting the hash table into two parts.\nPlease find the attached patch for the same.\n\n\nThanks & Best Regards,\nAjit\n\nOn Wed, Apr 19, 2023 at 10:13 PM Tom Lane <[email protected]> wrote:\n\n> Ajit Awekar <[email protected]> writes:\n> > Please find below simple repro for CacheMemoryContext memory leak\n>\n> Hm, yeah, reproduced here.\n>\n> > During anonymous block execution in the function \"plpgsql_estate_setup\",\n> a\n> > local casting hash table gets created in SPI memory context. When hash\n> > table look up is performed in \"get_cast_hashenty\" function if entry is no\n> > present , memory is allocated in CacheMemoryContext in function\n> > \"GetCachedExpression\".At the end of proc execution SPI memory context is\n> > deleted and hence local hash table gets deleted, but still entries remain\n> > in Cachemeorycontext.\n>\n> Yeah, it's from using just a short-lived cast hash table for DO blocks.\n> I think that was okay when it was written, but when we wheeled the\n> CachedExpression machinery undeneath it, we created a problem.\n>\n> > Please find attached(memoryleakfix.patch) to this email.\n>\n> I don't think this fix is acceptable at all. A minor problem is that\n> we can't change the API of GetCachedExpression() in stable branches,\n> because extensions may be using it. We could work around that by\n> making it a wrapper function. But the big problem is that this patch\n> destroys the reason for using a CachedExpression in the first place:\n> because you aren't linking it into cached_expression_list, the plancache\n> will not detect events that should obsolete the expression. The\n> test cases added by 04fe805a1 only cover regular functions, but one\n> that did domain constraint DDL within a DO block would expose the\n> shortcoming.\n>\n> A possible answer is to split plpgsql's cast hash table into two parts.\n> The lower-level part would contain the hash key, cast_expr and\n> cast_cexpr fields, and would have session lifespan and be used by\n> both DO blocks and regular functions. In this way we'd not leak\n> CachedExpressions. The upper-level hash table would contain the\n> hash key, a link to the relevant lower-level entry, and the\n> cast_exprstate, cast_in_use, cast_lxid fields. There would be a\n> session-lifespan one of these plus one for each DO block, so that\n> management of the ExprStates still works as it does now for DO blocks.\n>\n> This could be factored in other ways, and maybe another way would be\n> simpler. But the idea is that DO blocks should use persistent\n> CachedExpressions even though their cast_exprstates are transient.\n>\n> I've not tried to code this, do you want to?\n>\n> regards, tom lane\n>",
"msg_date": "Fri, 21 Apr 2023 16:55:20 +0530",
"msg_from": "Ajit Awekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
{
"msg_contents": "Ajit Awekar <[email protected]> writes:\n> I have implemented the approach by splitting the hash table into two parts.\n> Please find the attached patch for the same.\n\nI found a few things not to like about this:\n\n* You didn't update the comment describing these hash tables.\n\n* I wasn't really thrilled about renaming the plpgsql_CastHashEntry\ntypedef, as that seemed to just create uninteresting diff noise.\nAlso, \"SessionCastHashEntry\" versus \"PrivateCastHashEntry\" seems a\nvery misleading choice of names, since one of the \"PrivateCastHashEntry\"\nhash tables is in fact session-lifespan. After some thought I left\nthe \"upper\" hash table entry type as plpgsql_CastHashEntry so that\ncode outside the immediate area needn't be affected, and named the\n\"lower\" table cast_expr_hash, with entry type plpgsql_CastExprHashEntry.\nI'm not wedded to those names though, if you have a better idea.\n\n(BTW, it's completely reasonable to rename the type as an intermediate\nstep in making a patch like this, since it ensures you'll examine\nevery existing usage to choose the right thing to change it to. But\nI generally rename things back afterwards.)\n\n* I didn't like having to do two hashtable lookups during every\ncall even after we've fully cached the info. That's easy to avoid\nby keeping a link to the associated \"lower\" hashtable entry in the\n\"upper\" ones.\n\n* You removed the reset of cast_exprstate etc from the code path where\nwe've just reconstructed the cast_expr. That's a mistake since it\nmight allow us to skip rebuilding the derived expression state after\na DDL change.\n\n\nAlso, while looking at this I noticed that we are no longer making\nany use of estate->cast_hash_context. That's not the fault of\nyour patch; it's another oversight in the one that added the\nCachedExpression mechanism. The compiled expressions used to be\nstored in that context, but now the plancache is responsible for\nthem and we are never putting anything in the cast_hash_context.\nSo we might as well get rid of that and save 8K of wasted memory.\nThis allows some simplification in the hashtable setup code too.\n\nIn short, I think we need something more like the attached.\n\n(Note to self: we can't remove the cast_hash_context field in\nback branches for fear of causing an ABI break for pldebugger.\nBut we can leave it unused, I think.)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 21 Apr 2023 19:19:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
{
"msg_contents": "Tom, Thanks a lot for your patch. I applied the changes and confirmed\nthere is no memory leak with the V2 patch.\nWe are not using MemoryContext variables \"cast_hash_context\" and\n\"shared_cast_context\".\n\nThanks & Best Regards,\nAjit\n\nOn Sat, Apr 22, 2023 at 4:49 AM Tom Lane <[email protected]> wrote:\n\n> Ajit Awekar <[email protected]> writes:\n> > I have implemented the approach by splitting the hash table into two\n> parts.\n> > Please find the attached patch for the same.\n>\n> I found a few things not to like about this:\n>\n> * You didn't update the comment describing these hash tables.\n>\n> * I wasn't really thrilled about renaming the plpgsql_CastHashEntry\n> typedef, as that seemed to just create uninteresting diff noise.\n> Also, \"SessionCastHashEntry\" versus \"PrivateCastHashEntry\" seems a\n> very misleading choice of names, since one of the \"PrivateCastHashEntry\"\n> hash tables is in fact session-lifespan. After some thought I left\n> the \"upper\" hash table entry type as plpgsql_CastHashEntry so that\n> code outside the immediate area needn't be affected, and named the\n> \"lower\" table cast_expr_hash, with entry type plpgsql_CastExprHashEntry.\n> I'm not wedded to those names though, if you have a better idea.\n>\n> (BTW, it's completely reasonable to rename the type as an intermediate\n> step in making a patch like this, since it ensures you'll examine\n> every existing usage to choose the right thing to change it to. But\n> I generally rename things back afterwards.)\n>\n> * I didn't like having to do two hashtable lookups during every\n> call even after we've fully cached the info. That's easy to avoid\n> by keeping a link to the associated \"lower\" hashtable entry in the\n> \"upper\" ones.\n>\n> * You removed the reset of cast_exprstate etc from the code path where\n> we've just reconstructed the cast_expr. That's a mistake since it\n> might allow us to skip rebuilding the derived expression state after\n> a DDL change.\n>\n>\n> Also, while looking at this I noticed that we are no longer making\n> any use of estate->cast_hash_context. That's not the fault of\n> your patch; it's another oversight in the one that added the\n> CachedExpression mechanism. The compiled expressions used to be\n> stored in that context, but now the plancache is responsible for\n> them and we are never putting anything in the cast_hash_context.\n> So we might as well get rid of that and save 8K of wasted memory.\n> This allows some simplification in the hashtable setup code too.\n>\n> In short, I think we need something more like the attached.\n>\n> (Note to self: we can't remove the cast_hash_context field in\n> back branches for fear of causing an ABI break for pldebugger.\n> But we can leave it unused, I think.)\n>\n> regards, tom lane\n>\n>\n\nTom, Thanks a lot for your patch. 
I applied the changes and confirmed there is no memory leak with the V2 patch.We are not using MemoryContext variables \"cast_hash_context\" and \"shared_cast_context\".Thanks & Best Regards,AjitOn Sat, Apr 22, 2023 at 4:49 AM Tom Lane <[email protected]> wrote:Ajit Awekar <[email protected]> writes:\n> I have implemented the approach by splitting the hash table into two parts.\n> Please find the attached patch for the same.\n\nI found a few things not to like about this:\n\n* You didn't update the comment describing these hash tables.\n\n* I wasn't really thrilled about renaming the plpgsql_CastHashEntry\ntypedef, as that seemed to just create uninteresting diff noise.\nAlso, \"SessionCastHashEntry\" versus \"PrivateCastHashEntry\" seems a\nvery misleading choice of names, since one of the \"PrivateCastHashEntry\"\nhash tables is in fact session-lifespan. After some thought I left\nthe \"upper\" hash table entry type as plpgsql_CastHashEntry so that\ncode outside the immediate area needn't be affected, and named the\n\"lower\" table cast_expr_hash, with entry type plpgsql_CastExprHashEntry.\nI'm not wedded to those names though, if you have a better idea.\n\n(BTW, it's completely reasonable to rename the type as an intermediate\nstep in making a patch like this, since it ensures you'll examine\nevery existing usage to choose the right thing to change it to. But\nI generally rename things back afterwards.)\n\n* I didn't like having to do two hashtable lookups during every\ncall even after we've fully cached the info. That's easy to avoid\nby keeping a link to the associated \"lower\" hashtable entry in the\n\"upper\" ones.\n\n* You removed the reset of cast_exprstate etc from the code path where\nwe've just reconstructed the cast_expr. That's a mistake since it\nmight allow us to skip rebuilding the derived expression state after\na DDL change.\n\n\nAlso, while looking at this I noticed that we are no longer making\nany use of estate->cast_hash_context. That's not the fault of\nyour patch; it's another oversight in the one that added the\nCachedExpression mechanism. The compiled expressions used to be\nstored in that context, but now the plancache is responsible for\nthem and we are never putting anything in the cast_hash_context.\nSo we might as well get rid of that and save 8K of wasted memory.\nThis allows some simplification in the hashtable setup code too.\n\nIn short, I think we need something more like the attached.\n\n(Note to self: we can't remove the cast_hash_context field in\nback branches for fear of causing an ABI break for pldebugger.\nBut we can leave it unused, I think.)\n\n regards, tom lane",
"msg_date": "Mon, 24 Apr 2023 16:58:08 +0530",
"msg_from": "Ajit Awekar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
{
"msg_contents": "On 2023-Apr-21, Tom Lane wrote:\n\n> (Note to self: we can't remove the cast_hash_context field in\n> back branches for fear of causing an ABI break for pldebugger.\n> But we can leave it unused, I think.)\n\nHmm, we can leave it unused in our code, but it still needs to be\ninitialized to some valid memory context anyway; otherwise hypothetical\ncode that uses it would still crash. This seems halfway obvious, but\nsince the submitted patch doesn't have this part, I thought better to\npoint it out.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 24 Apr 2023 15:55:41 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n>> (Note to self: we can't remove the cast_hash_context field in\n>> back branches for fear of causing an ABI break for pldebugger.\n>> But we can leave it unused, I think.)\n\n> Hmm, we can leave it unused in our code, but it still needs to be\n> initialized to some valid memory context anyway; otherwise hypothetical\n> code that uses it would still crash.\n\nI think we want that to happen, actually, because it's impossible\nto guess what such hypothetical code needs the context to be.\nAs things stand now, that field points to a long-lived context\nin some cases and a short-lived one in others. We risk either\ndata structure corruption or a session-lifespan memory leak\nif we guess about such usage ... which really shouldn't exist\nanyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Apr 2023 10:11:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
{
"msg_contents": "I wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Hmm, we can leave it unused in our code, but it still needs to be\n>> initialized to some valid memory context anyway; otherwise hypothetical\n>> code that uses it would still crash.\n\n> I think we want that to happen, actually, because it's impossible\n> to guess what such hypothetical code needs the context to be.\n\nI guess we could have the back branches continue to create a\nshared_cast_context and just not use it in core. Seems rather\nexpensive for a very hypothetical compatibility measure, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Apr 2023 10:44:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
{
"msg_contents": "On 2023-Apr-24, Tom Lane wrote:\n\n> I wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> >> Hmm, we can leave it unused in our code, but it still needs to be\n> >> initialized to some valid memory context anyway; otherwise hypothetical\n> >> code that uses it would still crash.\n> \n> > I think we want that to happen, actually, because it's impossible\n> > to guess what such hypothetical code needs the context to be.\n> \n> I guess we could have the back branches continue to create a\n> shared_cast_context and just not use it in core. Seems rather\n> expensive for a very hypothetical compatibility measure, though.\n\nI think a session-long memory leak is not so bad, compared to a possible\ncrash. However, after looking at the code again, as well as pldebugger\nand plpgsql_check, I agree that there's no point in doing anything other\nthan keeping the field there.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n",
"msg_date": "Mon, 24 Apr 2023 18:04:19 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in CachememoryContext"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Apr-24, Tom Lane wrote:\n>> I guess we could have the back branches continue to create a\n>> shared_cast_context and just not use it in core. Seems rather\n>> expensive for a very hypothetical compatibility measure, though.\n\n> I think a session-long memory leak is not so bad, compared to a possible\n> crash. However, after looking at the code again, as well as pldebugger\n> and plpgsql_check, I agree that there's no point in doing anything other\n> than keeping the field there.\n\nYeah, I can't see any plausible reason for outside code to be using\nthat field (and I don't see any evidence in Debian Code Search that\nanyone is). I'll push it like this then. Thanks for looking!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Apr 2023 13:03:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in CachememoryContext"
}
] |
[
{
"msg_contents": "Hi,\n\nI've attached a patch that removes some now redundant messaging about\nunsupported versions.\n\nRegards\n\nThom",
"msg_date": "Wed, 19 Apr 2023 13:37:30 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove references to pre-11 versions"
},
{
"msg_contents": "Thom Brown <[email protected]> writes:\n> I've attached a patch that removes some now redundant messaging about\n> unsupported versions.\n\nIf we want to make that a policy, I think a lot more could be done\n--- I remember noticing a documentation comment about some 8.x\nversion just recently.\n\nHowever, \"out of support\" is a lot different from \"nobody has any\ncode written for that version anymore\". So I'd be inclined to keep\nthe first hunk in your patch, the one explaining the regexp_matches-\nin-a-subselect trick. People will be trying to puzzle out why\nsomebody did it like that for years to come.\n\nI agree with simplifying the other two spots.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Apr 2023 09:58:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove references to pre-11 versions"
},
{
"msg_contents": "On Wed, 19 Apr 2023 at 14:58, Tom Lane <[email protected]> wrote:\n>\n> Thom Brown <[email protected]> writes:\n> > I've attached a patch that removes some now redundant messaging about\n> > unsupported versions.\n>\n> If we want to make that a policy, I think a lot more could be done\n> --- I remember noticing a documentation comment about some 8.x\n> version just recently.\n>\n> However, \"out of support\" is a lot different from \"nobody has any\n> code written for that version anymore\". So I'd be inclined to keep\n> the first hunk in your patch, the one explaining the regexp_matches-\n> in-a-subselect trick. People will be trying to puzzle out why\n> somebody did it like that for years to come.\n>\n> I agree with simplifying the other two spots.\n\nFair enough. I've updated the patch. However, feel free to ignore if\nthis marks the thin edge of the wedge.\n\nThom",
"msg_date": "Fri, 21 Apr 2023 12:53:14 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove references to pre-11 versions"
},
{
"msg_contents": "On 19.04.23 14:37, Thom Brown wrote:\n> I've attached a patch that removes some now redundant messaging about\n> unsupported versions.\n\nThe text in pg_basebackup.sgml describes behavior that still exists in \npg_basebackup. As long as that behavior exists, we should document it \naccurately.\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 19:09:38 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove references to pre-11 versions"
}
] |
[
{
"msg_contents": "Hi,\n\nOver in [1], we discussed removing the \"io\" prefix from the columns\n\"io_context\" and \"io_object\" in pg_stat_io since they seem redundant\ngiven the view name\n\nAttached patch does that.\n\n- Melanie\n\n[1]\nhttps://www.postgresql.org/message-id/20230215.164021.227543675435826022.horikyota.ntt%40gmail.com",
"msg_date": "Wed, 19 Apr 2023 12:26:43 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Remove io prefix from pg_stat_io columns"
},
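With the prefix dropped, a query against the view would look roughly like this sketch (column list abbreviated):

	SELECT backend_type, object, context, reads, writes, extends, fsyncs
	FROM pg_stat_io
	WHERE backend_type = 'checkpointer';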
{
"msg_contents": "On Wed, Apr 19, 2023 at 1:27 PM Melanie Plageman <[email protected]>\nwrote:\n>\n> Hi,\n>\n> Over in [1], we discussed removing the \"io\" prefix from the columns\n> \"io_context\" and \"io_object\" in pg_stat_io since they seem redundant\n> given the view name\n>\n\nLGTM. All tests passed and were built without warnings.\n\nRegards\n\n--\nFabrízio de Royes Mello\n\nOn Wed, Apr 19, 2023 at 1:27 PM Melanie Plageman <[email protected]> wrote:>> Hi,>> Over in [1], we discussed removing the \"io\" prefix from the columns> \"io_context\" and \"io_object\" in pg_stat_io since they seem redundant> given the view name>LGTM. All tests passed and were built without warnings.Regards--Fabrízio de Royes Mello",
"msg_date": "Wed, 19 Apr 2023 13:54:21 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 01:54:21PM -0300, Fabrízio de Royes Mello wrote:\n> On Wed, Apr 19, 2023 at 1:27 PM Melanie Plageman <[email protected]> wrote:\n>> Over in [1], we discussed removing the \"io\" prefix from the columns\n>> \"io_context\" and \"io_object\" in pg_stat_io since they seem redundant\n>> given the view name\n> \n> LGTM. All tests passed and were built without warnings.\n\nThere are a lot of internal references to both of them mainly around\nthe buffer manager and the pgstat code, still I agree that the view\nfeels redundant as currently written, so agreed. It does not seem\nlike you have missed any references here, from what I can see.\n--\nMichael",
"msg_date": "Thu, 20 Apr 2023 09:42:08 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 8:42 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Apr 19, 2023 at 01:54:21PM -0300, Fabrízio de Royes Mello wrote:\n> > On Wed, Apr 19, 2023 at 1:27 PM Melanie Plageman <[email protected]> wrote:\n> >> Over in [1], we discussed removing the \"io\" prefix from the columns\n> >> \"io_context\" and \"io_object\" in pg_stat_io since they seem redundant\n> >> given the view name\n> >\n> > LGTM. All tests passed and were built without warnings.\n>\n> There are a lot of internal references to both of them mainly around\n> the buffer manager and the pgstat code, still I agree that the view\n> feels redundant as currently written, so agreed. It does not seem\n> like you have missed any references here, from what I can see.\n\nI thought about changing parameter and local variable names to remove\nthe prefix, but in the original discussion folks seemed to think it made\nsense to leave the \"C level\" references with an \"io\" prefix. I think we\ncould change many of them, but some of them may be required for clarity.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 19 Apr 2023 20:50:13 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 08:50:13PM -0400, Melanie Plageman wrote:\n> I thought about changing parameter and local variable names to remove\n> the prefix, but in the original discussion folks seemed to think it made\n> sense to leave the \"C level\" references with an \"io\" prefix. I think we\n> could change many of them, but some of them may be required for clarity.\n\nI agree with the feeling of not touching the internal variables. It\nmakes them easier to grep, and it seems that these are mostly on lines\nwhere there is little context about what they refer to..\n\nPerhaps others have comments or objections, so let's wait a bit, but\nI'd be OK to apply this one myself, with a catversion bump. (Happy to\nhelp.)\n--\nMichael",
"msg_date": "Thu, 20 Apr 2023 10:13:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 10:13:04AM +0900, Michael Paquier wrote:\n> On Wed, Apr 19, 2023 at 08:50:13PM -0400, Melanie Plageman wrote:\n> > I thought about changing parameter and local variable names to remove\n> > the prefix, but in the original discussion folks seemed to think it made\n> > sense to leave the \"C level\" references with an \"io\" prefix. I think we\n> > could change many of them, but some of them may be required for clarity.\n> \n> I agree with the feeling of not touching the internal variables. It\n> makes them easier to grep, and it seems that these are mostly on lines\n> where there is little context about what they refer to..\n> \n> Perhaps others have comments or objections, so let's wait a bit, but\n> I'd be OK to apply this one myself, with a catversion bump. (Happy to\n> help.)\n\nGreat, thanks! Once you feel an appropriate amount of time has passed,\nit would be great if you could apply it. I forgot to add a note about\nthe catalog version bump. oops!\n\n- Melanie\n\n\n",
"msg_date": "Wed, 19 Apr 2023 21:45:32 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 09:45:32PM -0400, Melanie Plageman wrote:\n> Great, thanks! Once you feel an appropriate amount of time has passed,\n> it would be great if you could apply it.\n\nSure. Probably on tomorrow morning, or Monday in the worst-case\nscenario, I think..\n\n> I forgot to add a note about the catalog version bump. oops!\n\nNo worries, committers should take care of that.\n--\nMichael",
"msg_date": "Thu, 20 Apr 2023 11:38:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "At Thu, 20 Apr 2023 10:13:04 +0900, Michael Paquier <[email protected]> wrote in \n> On Wed, Apr 19, 2023 at 08:50:13PM -0400, Melanie Plageman wrote:\n> > I thought about changing parameter and local variable names to remove\n> > the prefix, but in the original discussion folks seemed to think it made\n> > sense to leave the \"C level\" references with an \"io\" prefix. I think we\n> > could change many of them, but some of them may be required for clarity.\n> \n> I agree with the feeling of not touching the internal variables. It\n> makes them easier to grep, and it seems that these are mostly on lines\n> where there is little context about what they refer to..\n\nI find the names for local loop variables are a bit annoying, but I\ndon't feel strongly about removing the prifix there. I'm also not in\nfavor of removing the prefix in other cases, bacause it helps with\ngrep'ability.\n\n>\tif (backend_io->times[io_object][io_context][io_op] != 0 &&\n>\t\tbackend_io->counts[io_object][io_context][io_op] <= 0)\n\n> Perhaps others have comments or objections, so let's wait a bit, but\n> I'd be OK to apply this one myself, with a catversion bump. (Happy to\n> help.)\n\nSo, I don't have any issues with the patch overall. From what I can\ntell, there are no remaining instances of io_foobar that need to be\nrewritten.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Apr 2023 13:24:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 11:38:42AM +0900, Michael Paquier wrote:\n> No worries, committers should take care of that.\n\nDone as of 0ecb87e, as I can keep an eye on the buildfarm today, with\na catversion bump.\n--\nMichael",
"msg_date": "Fri, 21 Apr 2023 07:38:01 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
},
{
"msg_contents": "On 2023-04-21 07:38:01 +0900, Michael Paquier wrote:\n> On Thu, Apr 20, 2023 at 11:38:42AM +0900, Michael Paquier wrote:\n> > No worries, committers should take care of that.\n> \n> Done as of 0ecb87e, as I can keep an eye on the buildfarm today, with\n> a catversion bump.\n\nThanks!\n\n\n",
"msg_date": "Mon, 24 Apr 2023 15:15:59 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Remove io prefix from pg_stat_io columns"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that the numbers in pg_stat_io dont't quite add up to what I\nexpected in write heavy workloads. Particularly for checkpointer, the numbers\nfor \"write\" in log_checkpoints output are larger than what is visible in\npg_stat_io.\n\nThat partially is because log_checkpoints' \"write\" covers way too many things,\nbut there's an issue with pg_stat_io as well:\n\nCheckpoints, and some other sources of writes, will often end up doing a lot\nof smgrwriteback() calls - which pg_stat_io doesn't track. Nor do any\npre-existing forms of IO statistics.\n\nIt seems pretty clear that we should track writeback as well. I wonder if it's\nworth doing so for 16? It'd give a more complete picture that way. The\ncounter-argument I see is that we didn't track the time for it in existing\nstats either, and that nobody complained - but I suspect that's mostly because\nnobody knew to look.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:23:26 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On 4/19/23 1:23 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> I noticed that the numbers in pg_stat_io dont't quite add up to what I\r\n> expected in write heavy workloads. Particularly for checkpointer, the numbers\r\n> for \"write\" in log_checkpoints output are larger than what is visible in\r\n> pg_stat_io.\r\n> \r\n> That partially is because log_checkpoints' \"write\" covers way too many things,\r\n> but there's an issue with pg_stat_io as well:\r\n> \r\n> Checkpoints, and some other sources of writes, will often end up doing a lot\r\n> of smgrwriteback() calls - which pg_stat_io doesn't track. Nor do any\r\n> pre-existing forms of IO statistics.\r\n> \r\n> It seems pretty clear that we should track writeback as well. I wonder if it's\r\n> worth doing so for 16? It'd give a more complete picture that way. The\r\n> counter-argument I see is that we didn't track the time for it in existing\r\n> stats either, and that nobody complained - but I suspect that's mostly because\r\n> nobody knew to look.\r\n\r\n[RMT hat]\r\n\r\n(sorry for slow reply on this, I've been out for a few days).\r\n\r\nIt does sound generally helpful to track writeback to ensure anyone \r\nbuilding around pg_stat_io can see tthe more granular picture. How big \r\nof an effort is this? Do you think this helps to complete the feature \r\nfor v16?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 22 Apr 2023 15:25:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On Sun, Apr 23, 2023 at 12:55 AM Jonathan S. Katz <[email protected]> wrote:\n>\n> On 4/19/23 1:23 PM, Andres Freund wrote:\n> > Hi,\n> >\n> > I noticed that the numbers in pg_stat_io dont't quite add up to what I\n> > expected in write heavy workloads. Particularly for checkpointer, the numbers\n> > for \"write\" in log_checkpoints output are larger than what is visible in\n> > pg_stat_io.\n> >\n> > That partially is because log_checkpoints' \"write\" covers way too many things,\n> > but there's an issue with pg_stat_io as well:\n> >\n> > Checkpoints, and some other sources of writes, will often end up doing a lot\n> > of smgrwriteback() calls - which pg_stat_io doesn't track. Nor do any\n> > pre-existing forms of IO statistics.\n> >\n> > It seems pretty clear that we should track writeback as well.\n\nAgreed. +1.\n\n> > I wonder if it's\n> > worth doing so for 16? It'd give a more complete picture that way. The\n> > counter-argument I see is that we didn't track the time for it in existing\n> > stats either, and that nobody complained - but I suspect that's mostly because\n> > nobody knew to look.\n>\n> [RMT hat]\n>\n> (sorry for slow reply on this, I've been out for a few days).\n>\n> It does sound generally helpful to track writeback to ensure anyone\n> building around pg_stat_io can see tthe more granular picture. How big\n> of an effort is this?\n>\n\nRight, I think this is the key factor to decide whether we can get\nthis in PG16 or not. If this is just adding a new column and a few\nexisting stats update calls then it should be okay to get in but if\nthis requires some more complex work then we can probably update the\ndocs.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 10:52:15 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 10:23:26AM -0700, Andres Freund wrote:\n> Hi,\n> \n> I noticed that the numbers in pg_stat_io dont't quite add up to what I\n> expected in write heavy workloads. Particularly for checkpointer, the numbers\n> for \"write\" in log_checkpoints output are larger than what is visible in\n> pg_stat_io.\n> \n> That partially is because log_checkpoints' \"write\" covers way too many things,\n> but there's an issue with pg_stat_io as well:\n> \n> Checkpoints, and some other sources of writes, will often end up doing a lot\n> of smgrwriteback() calls - which pg_stat_io doesn't track. Nor do any\n> pre-existing forms of IO statistics.\n> \n> It seems pretty clear that we should track writeback as well. I wonder if it's\n> worth doing so for 16? It'd give a more complete picture that way. The\n> counter-argument I see is that we didn't track the time for it in existing\n> stats either, and that nobody complained - but I suspect that's mostly because\n> nobody knew to look.\n\nNot complaining about making pg_stat_io more accurate, but what exactly\nwould we be tracking for smgrwriteback()? I assume you are talking about\nIO timing. AFAICT, on Linux, it does sync_file_range() with\nSYNC_FILE_RANGE_WRITE, which is asynchronous. Wouldn't we just be\ntracking the system call overhead time?\n\n- Melanie\n\n\n",
"msg_date": "Mon, 24 Apr 2023 16:39:36 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-24 16:39:36 -0400, Melanie Plageman wrote:\n> On Wed, Apr 19, 2023 at 10:23:26AM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > I noticed that the numbers in pg_stat_io dont't quite add up to what I\n> > expected in write heavy workloads. Particularly for checkpointer, the numbers\n> > for \"write\" in log_checkpoints output are larger than what is visible in\n> > pg_stat_io.\n> > \n> > That partially is because log_checkpoints' \"write\" covers way too many things,\n> > but there's an issue with pg_stat_io as well:\n> > \n> > Checkpoints, and some other sources of writes, will often end up doing a lot\n> > of smgrwriteback() calls - which pg_stat_io doesn't track. Nor do any\n> > pre-existing forms of IO statistics.\n> > \n> > It seems pretty clear that we should track writeback as well. I wonder if it's\n> > worth doing so for 16? It'd give a more complete picture that way. The\n> > counter-argument I see is that we didn't track the time for it in existing\n> > stats either, and that nobody complained - but I suspect that's mostly because\n> > nobody knew to look.\n> \n> Not complaining about making pg_stat_io more accurate, but what exactly\n> would we be tracking for smgrwriteback()? I assume you are talking about\n> IO timing. AFAICT, on Linux, it does sync_file_range() with\n> SYNC_FILE_RANGE_WRITE, which is asynchronous. Wouldn't we just be\n> tracking the system call overhead time?\n\nIt starts blocking once \"enough\" IO is in flight. For things like an immediate\ncheckpoint, that can happen fairly quickly, unless you have a very fast IO\nsubsystem. So often it'll not matter whether we track smgrwriteback(), but\nwhen it matter, it can matter a lot.\n\nAs an example, I inited' a pgbench w/ scale 1000, on a decent but not great\nNVMe SSD. 
Created dirty data with:\n\n c=96;/srv/dev/build/m-opt/src/bin/pgbench/pgbench --random-seed=0 -n -M prepared -c$c -j$c -T30 -P1\nand then measured the checkpoint:\n perf trace -s -p $pid_of_checkpointer psql -x -c \"SELECT pg_stat_reset_shared('io')\" -c \"checkpoint\"\n\n postgres (367444), 1891280 events, 100.0%\n\n syscall calls errors total min avg max stddev\n (msec) (msec) (msec) (msec) (%)\n --------------- -------- ------ -------- --------- --------- --------- ------\n sync_file_range 359176 0 4560.670 0.002 0.013 238.955 10.36%\n pwrite64 582964 0 2874.634 0.003 0.005 0.156 0.06%\n fsync 242 0 251.631 0.001 1.040 42.166 18.81%\n openat 317 65 2.171 0.002 0.007 0.068 5.69%\n rename 69 0 1.973 0.012 0.029 0.084 5.81%\n fdatasync 1 0 1.543 1.543 1.543 1.543 0.00%\n unlink 150 137 1.278 0.002 0.009 0.062 10.69%\n close 250 0 0.694 0.001 0.003 0.007 3.14%\n newfstatat 140 68 0.667 0.001 0.005 0.022 7.26%\n write 5 0 0.067 0.005 0.013 0.025 24.55%\n lseek 14 0 0.050 0.001 0.004 0.018 33.87%\n getdents64 8 0 0.047 0.002 0.006 0.022 39.51%\n kill 3 0 0.029 0.008 0.010 0.011 10.18%\n epoll_wait 2 1 0.006 0.000 0.003 0.006 100.00%\n read 1 0 0.004 0.004 0.004 0.004 0.00%\n\nLog output:\n\n2023-04-24 14:11:59.234 PDT [367444][checkpointer][:0][] LOG: checkpoint starting: immediate force wait\n2023-04-24 14:12:09.236 PDT [367444][checkpointer][:0][] LOG: checkpoint complete: wrote 595974 buffers (28.4%); 0 WAL file(s) added, 0 removed, 68 recycled; write=9.740 s, sync=0.057 s, total=10.002 s; sync files=27, longest=0.043 s, average=0.003 s; distance=4467386 kB, estimate=4467386 kB; lsn=6/E5D33F98, redo lsn=6/E5D33F28\n\n\n# SELECT writes, write_time, fsyncs, fsync_time FROM pg_stat_io WHERE backend_type = 'checkpointer';\n┌────────┬────────────────────┬────────┬────────────┐\n│ writes │ write_time │ fsyncs │ fsync_time │\n├────────┼────────────────────┼────────┼────────────┤\n│ 595914 │ 4002.1730000000002 │ 24 │ 46.359 │\n└────────┴────────────────────┴────────┴────────────┘\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:14:32 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 02:14:32PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2023-04-24 16:39:36 -0400, Melanie Plageman wrote:\n> > On Wed, Apr 19, 2023 at 10:23:26AM -0700, Andres Freund wrote:\n> > > Hi,\n> > > \n> > > I noticed that the numbers in pg_stat_io dont't quite add up to what I\n> > > expected in write heavy workloads. Particularly for checkpointer, the numbers\n> > > for \"write\" in log_checkpoints output are larger than what is visible in\n> > > pg_stat_io.\n> > > \n> > > That partially is because log_checkpoints' \"write\" covers way too many things,\n> > > but there's an issue with pg_stat_io as well:\n> > > \n> > > Checkpoints, and some other sources of writes, will often end up doing a lot\n> > > of smgrwriteback() calls - which pg_stat_io doesn't track. Nor do any\n> > > pre-existing forms of IO statistics.\n> > > \n> > > It seems pretty clear that we should track writeback as well. I wonder if it's\n> > > worth doing so for 16? It'd give a more complete picture that way. The\n> > > counter-argument I see is that we didn't track the time for it in existing\n> > > stats either, and that nobody complained - but I suspect that's mostly because\n> > > nobody knew to look.\n> > \n> > Not complaining about making pg_stat_io more accurate, but what exactly\n> > would we be tracking for smgrwriteback()? I assume you are talking about\n> > IO timing. AFAICT, on Linux, it does sync_file_range() with\n> > SYNC_FILE_RANGE_WRITE, which is asynchronous. Wouldn't we just be\n> > tracking the system call overhead time?\n> \n> It starts blocking once \"enough\" IO is in flight. For things like an immediate\n> checkpoint, that can happen fairly quickly, unless you have a very fast IO\n> subsystem. So often it'll not matter whether we track smgrwriteback(), but\n> when it matter, it can matter a lot.\n\nI see. So, it sounds like this is most likely to happen for checkpointer\nand not likely to happen for other backends who call\nScheduleBufferTagForWriteback(). Also, it seems like this (given the\ncurrent code) is only reachable for permanent relations (i.e. not for IO\nobject temp relation). If other backend types than checkpointer may call\nsmgrwriteback(), we likely have to consider the IO context. I would\nimagine that we want to smgrwriteback() timing to writes/write time for\nthe relevant IO context and backend type.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 24 Apr 2023 17:37:48 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-24 17:37:48 -0400, Melanie Plageman wrote:\n> On Mon, Apr 24, 2023 at 02:14:32PM -0700, Andres Freund wrote:\n> > It starts blocking once \"enough\" IO is in flight. For things like an immediate\n> > checkpoint, that can happen fairly quickly, unless you have a very fast IO\n> > subsystem. So often it'll not matter whether we track smgrwriteback(), but\n> > when it matter, it can matter a lot.\n> \n> I see. So, it sounds like this is most likely to happen for checkpointer\n> and not likely to happen for other backends who call\n> ScheduleBufferTagForWriteback().\n\nIt's more likely, but once the IO subsystem is busy, it'll also happen for\nother users ScheduleBufferTagForWriteback().\n\n\n> Also, it seems like this (given the current code) is only reachable for\n> permanent relations (i.e. not for IO object temp relation). If other backend\n> types than checkpointer may call smgrwriteback(), we likely have to consider\n> the IO context.\n\nI think we should take it into account - it'd e.g. interesting to see a COPY\nis bottlenecked on smgrwriteback() rather than just writing the data.\n\n\n> I would imagine that we want to smgrwriteback() timing to writes/write time\n> for the relevant IO context and backend type.\n\nYes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Apr 2023 15:13:00 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-24 10:52:15 +0530, Amit Kapila wrote:\n> On Sun, Apr 23, 2023 at 12:55 AM Jonathan S. Katz <[email protected]> wrote:\n> > > I wonder if it's\n> > > worth doing so for 16? It'd give a more complete picture that way. The\n> > > counter-argument I see is that we didn't track the time for it in existing\n> > > stats either, and that nobody complained - but I suspect that's mostly because\n> > > nobody knew to look.\n> >\n> > [RMT hat]\n> >\n> > (sorry for slow reply on this, I've been out for a few days).\n> >\n> > It does sound generally helpful to track writeback to ensure anyone\n> > building around pg_stat_io can see tthe more granular picture. How big\n> > of an effort is this?\n> >\n> \n> Right, I think this is the key factor to decide whether we can get\n> this in PG16 or not. If this is just adding a new column and a few\n> existing stats update calls then it should be okay to get in but if\n> this requires some more complex work then we can probably update the\n> docs.\n\nI suspect it should really just be adding a few stats calls. The only possible\ncomplication that I can see is that we might need to pass a bit more context\ndown in a place or two.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Apr 2023 15:14:20 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 6:13 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2023-04-24 17:37:48 -0400, Melanie Plageman wrote:\n> > On Mon, Apr 24, 2023 at 02:14:32PM -0700, Andres Freund wrote:\n> > > It starts blocking once \"enough\" IO is in flight. For things like an\n> immediate\n> > > checkpoint, that can happen fairly quickly, unless you have a very\n> fast IO\n> > > subsystem. So often it'll not matter whether we track smgrwriteback(),\n> but\n> > > when it matter, it can matter a lot.\n> >\n> > I see. So, it sounds like this is most likely to happen for checkpointer\n> > and not likely to happen for other backends who call\n> > ScheduleBufferTagForWriteback().\n>\n> It's more likely, but once the IO subsystem is busy, it'll also happen for\n> other users ScheduleBufferTagForWriteback().\n>\n>\n> > Also, it seems like this (given the current code) is only reachable for\n> > permanent relations (i.e. not for IO object temp relation). If other\n> backend\n> > types than checkpointer may call smgrwriteback(), we likely have to\n> consider\n> > the IO context.\n>\n> I think we should take it into account - it'd e.g. interesting to see a\n> COPY\n> is bottlenecked on smgrwriteback() rather than just writing the data.\n>\n\nWith the quick and dirty attached patch and using your example but with a\npgbench -T200 on my rather fast local NVMe SSD, you can still see quite\na difference.\nThis is with a stats reset before the checkpoint.\n\nunpatched:\n\n backend_type | object | context | writes | write_time |\n fsyncs | fsync_time\n---------------------+---------------+-----------+---------+------------+---------+------------\n background writer | relation | normal | 443 | 1.408 |\n 0 | 0\n checkpointer | relation | normal | 187804 | 396.829 |\n 47 | 254.226\n\npatched:\n\n backend_type | object | context | writes | write_time\n | fsyncs | fsync_time\n---------------------+---------------+-----------+---------+--------------------+--------+------------\n background writer | relation | normal | 917 |\n4.4670000000000005 | 0 | 0\n checkpointer | relation | normal | 375798 |\n 977.354 | 48 | 202.514\n\nI did compare client backend stats before and after pgbench and it made\nbasically no difference. I'll do a COPY example like you mentioned.\n\nPatch needs cleanup/comments and a bit more work, but I could do with\na sanity check review on the approach.\n\n- Melanie",
"msg_date": "Mon, 24 Apr 2023 18:36:24 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-24 18:36:24 -0400, Melanie Plageman wrote:\n> On Mon, Apr 24, 2023 at 6:13 PM Andres Freund <[email protected]> wrote:\n> > > Also, it seems like this (given the current code) is only reachable for\n> > > permanent relations (i.e. not for IO object temp relation). If other\n> > backend\n> > > types than checkpointer may call smgrwriteback(), we likely have to\n> > consider\n> > > the IO context.\n> >\n> > I think we should take it into account - it'd e.g. interesting to see a\n> > COPY\n> > is bottlenecked on smgrwriteback() rather than just writing the data.\n> >\n> \n> With the quick and dirty attached patch and using your example but with a\n> pgbench -T200 on my rather fast local NVMe SSD, you can still see quite\n> a difference.\n\nQuite a difference between what?\n\nWhat scale of pgbench did you use?\n\n-T200 is likely not a good idea, because a timed checkpoint might \"interfere\",\nunless you use a non-default checkpoint_timeout. A timed checkpoint won't show\nthe issue as easily, because checkpointer spend most of the time sleeping.\n\n\n> This is with a stats reset before the checkpoint.\n> \n> unpatched:\n> \n> backend_type | object | context | writes | write_time |\n> fsyncs | fsync_time\n> ---------------------+---------------+-----------+---------+------------+---------+------------\n> background writer | relation | normal | 443 | 1.408 |\n> 0 | 0\n> checkpointer | relation | normal | 187804 | 396.829 |\n> 47 | 254.226\n> \n> patched:\n> \n> backend_type | object | context | writes | write_time\n> | fsyncs | fsync_time\n> ---------------------+---------------+-----------+---------+--------------------+--------+------------\n> background writer | relation | normal | 917 |\n> 4.4670000000000005 | 0 | 0\n> checkpointer | relation | normal | 375798 |\n> 977.354 | 48 | 202.514\n> \n> I did compare client backend stats before and after pgbench and it made\n> basically no difference. I'll do a COPY example like you mentioned.\n\n\n> Patch needs cleanup/comments and a bit more work, but I could do with\n> a sanity check review on the approach.\n\nI was thinking we'd track writeback separately from the write, rather than\nattributing the writeback to \"write\". Otherwise it looks good, based on a\nquick skim.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Apr 2023 15:56:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 03:56:54PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2023-04-24 18:36:24 -0400, Melanie Plageman wrote:\n> > On Mon, Apr 24, 2023 at 6:13 PM Andres Freund <[email protected]> wrote:\n> > > > Also, it seems like this (given the current code) is only reachable for\n> > > > permanent relations (i.e. not for IO object temp relation). If other\n> > > backend\n> > > > types than checkpointer may call smgrwriteback(), we likely have to\n> > > consider\n> > > > the IO context.\n> > >\n> > > I think we should take it into account - it'd e.g. interesting to see a\n> > > COPY\n> > > is bottlenecked on smgrwriteback() rather than just writing the data.\n> > >\n> > \n> > With the quick and dirty attached patch and using your example but with a\n> > pgbench -T200 on my rather fast local NVMe SSD, you can still see quite\n> > a difference.\n> \n> Quite a difference between what?\n\nWith and without the patch. Meaning: clearly tracking writeback is a good idea.\n\n> \n> What scale of pgbench did you use?\n\n1000, as you did\n\n> \n> -T200 is likely not a good idea, because a timed checkpoint might \"interfere\",\n> unless you use a non-default checkpoint_timeout. A timed checkpoint won't show\n> the issue as easily, because checkpointer spend most of the time sleeping.\n\nAh, I see. I did not use a non-default checkpoint timeout.\n\n> > Patch needs cleanup/comments and a bit more work, but I could do with\n> > a sanity check review on the approach.\n> \n> I was thinking we'd track writeback separately from the write, rather than\n> attributing the writeback to \"write\". Otherwise it looks good, based on a\n> quick skim.\n\nLike you want a separate IOOp IOOP_WRITEBACK? Interesting. Okay.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 24 Apr 2023 19:02:47 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "\n\nOn 4/24/23 6:14 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-04-24 10:52:15 +0530, Amit Kapila wrote:\n>> On Sun, Apr 23, 2023 at 12:55 AM Jonathan S. Katz <[email protected]> wrote:\n>>>> I wonder if it's\n>>>> worth doing so for 16? It'd give a more complete picture that way. The\n>>>> counter-argument I see is that we didn't track the time for it in existing\n>>>> stats either, and that nobody complained - but I suspect that's mostly because\n>>>> nobody knew to look.\n>>>\n>>> [RMT hat]\n>>>\n>>> (sorry for slow reply on this, I've been out for a few days).\n>>>\n>>> It does sound generally helpful to track writeback to ensure anyone\n>>> building around pg_stat_io can see tthe more granular picture. How big\n>>> of an effort is this?\n>>>\n>>\n>> Right, I think this is the key factor to decide whether we can get\n>> this in PG16 or not. If this is just adding a new column and a few\n>> existing stats update calls then it should be okay to get in but if\n>> this requires some more complex work then we can probably update the\n>> docs.\n> \n> I suspect it should really just be adding a few stats calls. The only possible\n> complication that I can see is that we might need to pass a bit more context\n> down in a place or two.\n\nOK. So far it sounds reasonable to include. I think we should add this \nas an open item. I don't know if we need to set a deadline just yet, but \nwe should try to keep go/nogo to earlier in the beta cycle.\n\nThanks,\n\nJonathan\n\n\n",
"msg_date": "Mon, 24 Apr 2023 16:20:30 -0700",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 7:02 PM Melanie Plageman <[email protected]>\nwrote:\n\n> On Mon, Apr 24, 2023 at 03:56:54PM -0700, Andres Freund wrote:\n> >\n> > I was thinking we'd track writeback separately from the write, rather\n> than\n> > attributing the writeback to \"write\". Otherwise it looks good, based on\n> a\n> > quick skim.\n>\n> Like you want a separate IOOp IOOP_WRITEBACK? Interesting. Okay.\n>\n\n\nOkay, attached v2 does this (adds IOOP_WRITEBACK).\n\nWith my patch applied and the same pgbench setup as you (for -T30):\n\nAfter pgbench:\n\n backend_type | object | context | writes | write_time |\nwritebacks | writeback_time | fsyncs | fsync_time |\n---------------------+---------------+-----------+----------+------------+------------+----------------+---------+------------+\n background writer | relation | normal | 5581 | 23.416 |\n 5568 | 32.33 | 0 | 0 |\n checkpointer | relation | normal | 89116 | 295.576 |\n 89106 | 416.5 | 84 | 5242.764 |\n\n\nand then after a stats reset followed by an explicit checkpoint:\n\n\n backend_type | object | context | writes | write_time\n | writebacks | writeback_time | fsyncs | fsync_time |\n---------------------+---------------+-----------+---------+--------------------+------------+----------------+---------+------------+\n checkpointer | relation | normal | 229807 |\n457.43600000000004 | 229817 | 532.84 | 52 | 378.652 |\n\n\nI've yet to cook up a client backend test case (e.g. with COPY). I've taken\nthat as a todo.\n\nI have a few outstanding questions:\n\n1) Does it make sense for writebacks to count the number of blocks for\nwhich writeback was requested or the number of calls to smgrwriteback() or\nthe number of actual syscalls made? We don't actually know from outside\nof mdwriteback() how many FileWriteback() calls we will make.\n\n2) I'm a little nervous about not including IOObject in the writeback\ncontext. Technically, there is nothing stopping local buffer code from\ncalling IssuePendingWritebacks(). Right now, local buffer code doesn't\ndo ScheduleBufferTagForWriteback(). But it doesn't seem quite right to\nhardcode in IOOBJECT_RELATION when there is nothing wrong with\nrequesting writeback of local buffers (AFAIK). What do you think?\n\n3) Should any restrictions be added to pgstat_tracks_io_object() or\npgstat_tracks_io_op()? I couldn't think of any backend types or IO\ncontexts which would not do writeback as a rule. Also, though we don't\ndo writeback for temp tables now, it isn't nonsensical to do so. In\nthis version, I didn't add any restrictions.\n\nDocs need work. I added a placeholder for the new columns. I'll update it\nonce we decide what writebacks should actually count. And, I don't think\nwe can do any kind of ongoing test.\n\n- Melanie",
"msg_date": "Mon, 24 Apr 2023 21:29:48 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
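A rough way to reproduce the kind of numbers shown above on a build that has this patch applied (assuming the columns keep the names writebacks and writeback_time as in the quoted output, that track_io_timing is enabled in the server configuration so the *_time columns are populated, and that the platform's checkpoint_flush_after default is non-zero, as it is on Linux): reset the shared I/O statistics, force a checkpoint, and read the checkpointer rows. Resetting shared stats and issuing CHECKPOINT need appropriate privileges; this is only an observation sketch, not part of the patch.

    SELECT pg_stat_reset_shared('io');
    CHECKPOINT;
    SELECT backend_type, object, context,
           writes, write_time, writebacks, writeback_time,
           fsyncs, fsync_time
      FROM pg_stat_io
     WHERE backend_type = 'checkpointer';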
{
"msg_contents": "On Mon, Apr 24, 2023 at 9:29 PM Melanie Plageman\n<[email protected]> wrote:\n> I've yet to cook up a client backend test case (e.g. with COPY). I've taken\n> that as a todo.\n\nIt was trivial to see client backend writebacks in almost any scenario\nonce I set backend_flush_after. I wonder if it is worth mentioning the\nvarious \"*flush_after\" gucs in the docs?\n\n> I have a few outstanding questions:\n>\n> 1) Does it make sense for writebacks to count the number of blocks for\n> which writeback was requested or the number of calls to smgrwriteback() or\n> the number of actual syscalls made? We don't actually know from outside\n> of mdwriteback() how many FileWriteback() calls we will make.\n\nSo, in the attached v3, I've kept the first method: writebacks are the\nnumber of blocks which the backend has requested writeback of. I'd like\nit to be clear in the docs exactly what writebacks are (so that people\nknow not to add them together with writes or something like that). I\nmade an effort but could use further docs review.\n\n> 2) I'm a little nervous about not including IOObject in the writeback\n> context. Technically, there is nothing stopping local buffer code from\n> calling IssuePendingWritebacks(). Right now, local buffer code doesn't\n> do ScheduleBufferTagForWriteback(). But it doesn't seem quite right to\n> hardcode in IOOBJECT_RELATION when there is nothing wrong with\n> requesting writeback of local buffers (AFAIK). What do you think?\n\nI've gone ahead and added IOObject to the WritebackContext.\n\n- Melanie",
"msg_date": "Wed, 26 Apr 2023 17:08:14 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
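For the client-backend case mentioned above, a throwaway demonstration (not a portable regression test, for the reasons discussed later in the thread) could look roughly like the following; backend_flush_after is USERSET, so it can be raised for a single session. The table name and sizes here are made up for illustration, and whether and in which context the writebacks show up depends on shared_buffers, the buffer-access strategy in use, and how much checkpointer/bgwriter clean dirty buffers in the meantime. A COPY of a comparably sized file would exercise the same path.

    SET backend_flush_after = '256kB';
    SELECT pg_stat_reset_shared('io');
    CREATE TABLE writeback_demo AS            -- illustrative name
        SELECT g AS id, repeat('x', 500) AS filler
          FROM generate_series(1, 500000) g;
    SELECT backend_type, object, context, writes, writebacks
      FROM pg_stat_io
     WHERE backend_type = 'client backend'
       AND writebacks > 0;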
{
"msg_contents": "At Wed, 26 Apr 2023 17:08:14 -0400, Melanie Plageman <[email protected]> wrote in \r\n> On Mon, Apr 24, 2023 at 9:29 PM Melanie Plageman\r\n> <[email protected]> wrote:\r\n> > I've yet to cook up a client backend test case (e.g. with COPY). I've taken\r\n> > that as a todo.\r\n> \r\n> It was trivial to see client backend writebacks in almost any scenario\r\n> once I set backend_flush_after. I wonder if it is worth mentioning the\r\n> various \"*flush_after\" gucs in the docs?\r\n> \r\n> > I have a few outstanding questions:\r\n> >\r\n> > 1) Does it make sense for writebacks to count the number of blocks for\r\n> > which writeback was requested or the number of calls to smgrwriteback() or\r\n> > the number of actual syscalls made? We don't actually know from outside\r\n> > of mdwriteback() how many FileWriteback() calls we will make.\r\n> \r\n> So, in the attached v3, I've kept the first method: writebacks are the\r\n> number of blocks which the backend has requested writeback of. I'd like\r\n> it to be clear in the docs exactly what writebacks are (so that people\r\n> know not to add them together with writes or something like that). I\r\n> made an effort but could use further docs review.\r\n\r\n+ Number of units of size <varname>op_bytes</varname> which the backend\r\n+ requested the kernel write out to permanent storage.\r\n\r\nI just want to mention that it is not necessarily the same as the\r\nnumber of system calls, but I'm not sure what others think about that.\r\n\r\n+ Time spent in writeback operations in milliseconds (if\r\n+ <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise zero). This\r\n+ does not necessarily count the time spent by the kernel writing the\r\n+ data out. The backend will initiate write-out of the dirty pages and\r\n+ wait only if the request queue is full.\r\n\r\nThe last sentence looks like it's taken from the sync_file_range() man\r\npage, but I think it's a bit too detailed. We could just say, \"The\r\ntime usually only includes the time it takes to queue write-out\r\nrequests.\", bit I'm not sure wh...\r\n\r\n> > 2) I'm a little nervous about not including IOObject in the writeback\r\n> > context. Technically, there is nothing stopping local buffer code from\r\n> > calling IssuePendingWritebacks(). Right now, local buffer code doesn't\r\n> > do ScheduleBufferTagForWriteback(). But it doesn't seem quite right to\r\n> > hardcode in IOOBJECT_RELATION when there is nothing wrong with\r\n> > requesting writeback of local buffers (AFAIK). What do you think?\r\n> \r\n> I've gone ahead and added IOObject to the WritebackContext.\r\n\r\nThe smgropen call in IssuePendingWritebacks below clearly shows that\r\nthe function only deals with shared buffers.\r\n\r\n>\t\t/* and finally tell the kernel to write the data to storage */\r\n>\t\treln = smgropen(currlocator, InvalidBackendId);\r\n>\t\tsmgrwriteback(reln, BufTagGetForkNum(&tag), tag.blockNum, nblocks);\r\n\r\nThe callback-related code fully depends on callers following its\r\nexpectation. So we can rewrite the following comment added to\r\nInitBufferPoll with a more confident tone.\r\n\r\n+\t * Initialize per-backend file flush context. IOObject is initialized to\r\n+\t * IOOBJECT_RELATION and IOContext to IOCONTEXT_NORMAL since these are the\r\n+\t * most likely targets for writeback. 
The backend can overwrite these as\r\n+\t * appropriate.\r\n\r\nOr I actually think we might not even need to pass around the io_*\r\nparameters and could just pass immediate values to the\r\npgstat_count_io_op_time call. If we ever start using shared buffers\r\nfor thing other than relation files (for example SLRU?), we'll have to\r\nconsider the target individually for each buffer block. That being\r\nsaid, I'm fine with how it is either.\r\n\r\nRegards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 27 Apr 2023 11:22:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Thanks for the review!\n\nOn Wed, Apr 26, 2023 at 10:22 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Wed, 26 Apr 2023 17:08:14 -0400, Melanie Plageman <[email protected]> wrote in\n> > On Mon, Apr 24, 2023 at 9:29 PM Melanie Plageman\n> > <[email protected]> wrote:\n> > > I've yet to cook up a client backend test case (e.g. with COPY). I've taken\n> > > that as a todo.\n> >\n> > It was trivial to see client backend writebacks in almost any scenario\n> > once I set backend_flush_after. I wonder if it is worth mentioning the\n> > various \"*flush_after\" gucs in the docs?\n> >\n> > > I have a few outstanding questions:\n> > >\n> > > 1) Does it make sense for writebacks to count the number of blocks for\n> > > which writeback was requested or the number of calls to smgrwriteback() or\n> > > the number of actual syscalls made? We don't actually know from outside\n> > > of mdwriteback() how many FileWriteback() calls we will make.\n> >\n> > So, in the attached v3, I've kept the first method: writebacks are the\n> > number of blocks which the backend has requested writeback of. I'd like\n> > it to be clear in the docs exactly what writebacks are (so that people\n> > know not to add them together with writes or something like that). I\n> > made an effort but could use further docs review.\n>\n> + Number of units of size <varname>op_bytes</varname> which the backend\n> + requested the kernel write out to permanent storage.\n>\n> I just want to mention that it is not necessarily the same as the\n> number of system calls, but I'm not sure what others think about that.\n\nMy thinking is that some other IO operations, for example, extends,\ncount the number of blocks extended and not the number of syscalls.\n\n> + Time spent in writeback operations in milliseconds (if\n> + <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise zero). This\n> + does not necessarily count the time spent by the kernel writing the\n> + data out. The backend will initiate write-out of the dirty pages and\n> + wait only if the request queue is full.\n>\n> The last sentence looks like it's taken from the sync_file_range() man\n> page, but I think it's a bit too detailed. We could just say, \"The\n> time usually only includes the time it takes to queue write-out\n> requests.\", bit I'm not sure wh...\n\nAh, yes, I indeed took heavy inspiration from the sync_file_range()\nman page :) I've modified this comment in the attached v4. I didn't want\nto say \"usually\" since I imagine it is quite workload and configuration\ndependent.\n\n> > > 2) I'm a little nervous about not including IOObject in the writeback\n> > > context. Technically, there is nothing stopping local buffer code from\n> > > calling IssuePendingWritebacks(). Right now, local buffer code doesn't\n> > > do ScheduleBufferTagForWriteback(). But it doesn't seem quite right to\n> > > hardcode in IOOBJECT_RELATION when there is nothing wrong with\n> > > requesting writeback of local buffers (AFAIK). What do you think?\n> >\n> > I've gone ahead and added IOObject to the WritebackContext.\n>\n> The smgropen call in IssuePendingWritebacks below clearly shows that\n> the function only deals with shared buffers.\n>\n> > /* and finally tell the kernel to write the data to storage */\n> > reln = smgropen(currlocator, InvalidBackendId);\n> > smgrwriteback(reln, BufTagGetForkNum(&tag), tag.blockNum, nblocks);\n\nYes, as it is currently, IssuePendingWritebacks() is only used for shared\nbuffers. 
My rationale for including IOObject is that localbuf.c calls\nsmgr* functions and there isn't anything stopping it from calling\nsmgrwriteback() or using WritebackContexts (AFAICT).\n\n> The callback-related code fully depends on callers following its\n> expectation. So we can rewrite the following comment added to\n> InitBufferPoll with a more confident tone.\n>\n> + * Initialize per-backend file flush context. IOObject is initialized to\n> + * IOOBJECT_RELATION and IOContext to IOCONTEXT_NORMAL since these are the\n> + * most likely targets for writeback. The backend can overwrite these as\n> + * appropriate.\n\nI have updated this comment to be more confident and specific.\n\n> Or I actually think we might not even need to pass around the io_*\n> parameters and could just pass immediate values to the\n> pgstat_count_io_op_time call. If we ever start using shared buffers\n> for thing other than relation files (for example SLRU?), we'll have to\n> consider the target individually for each buffer block. That being\n> said, I'm fine with how it is either.\n\nIn IssuePendingWritebacks() we don't actually know which IOContext we\nare issuing writebacks for when we call pgstat_count_io_op_time() (we do\nissue pending writebacks for other IOContexts than IOCONTEXT_NORMAL). I\nagree IOObject is not strictly necessary right now. I've kept IOObject a\nmember of WritebackContext for the reasons I mention above, however, I\nam open to removing it if it adds confusion.\n\n- Melanie",
"msg_date": "Thu, 27 Apr 2023 11:36:49 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On 4/27/23 11:36 AM, Melanie Plageman wrote:\r\n> Thanks for the review!\r\n> \r\n> On Wed, Apr 26, 2023 at 10:22 PM Kyotaro Horiguchi\r\n> <[email protected]> wrote:\r\n>>\r\n>> At Wed, 26 Apr 2023 17:08:14 -0400, Melanie Plageman <[email protected]> wrote in\r\n>>> On Mon, Apr 24, 2023 at 9:29 PM Melanie Plageman\r\n>>> <[email protected]> wrote:\r\n>>>> I've yet to cook up a client backend test case (e.g. with COPY). I've taken\r\n>>>> that as a todo.\r\n>>>\r\n>>> It was trivial to see client backend writebacks in almost any scenario\r\n>>> once I set backend_flush_after. I wonder if it is worth mentioning the\r\n>>> various \"*flush_after\" gucs in the docs?\r\n>>>\r\n>>>> I have a few outstanding questions:\r\n>>>>\r\n>>>> 1) Does it make sense for writebacks to count the number of blocks for\r\n>>>> which writeback was requested or the number of calls to smgrwriteback() or\r\n>>>> the number of actual syscalls made? We don't actually know from outside\r\n>>>> of mdwriteback() how many FileWriteback() calls we will make.\r\n>>>\r\n>>> So, in the attached v3, I've kept the first method: writebacks are the\r\n>>> number of blocks which the backend has requested writeback of. I'd like\r\n>>> it to be clear in the docs exactly what writebacks are (so that people\r\n>>> know not to add them together with writes or something like that). I\r\n>>> made an effort but could use further docs review.\r\n>>\r\n>> + Number of units of size <varname>op_bytes</varname> which the backend\r\n>> + requested the kernel write out to permanent storage.\r\n>>\r\n>> I just want to mention that it is not necessarily the same as the\r\n>> number of system calls, but I'm not sure what others think about that.\r\n> \r\n> My thinking is that some other IO operations, for example, extends,\r\n> count the number of blocks extended and not the number of syscalls.\r\n> \r\n>> + Time spent in writeback operations in milliseconds (if\r\n>> + <xref linkend=\"guc-track-io-timing\"/> is enabled, otherwise zero). This\r\n>> + does not necessarily count the time spent by the kernel writing the\r\n>> + data out. The backend will initiate write-out of the dirty pages and\r\n>> + wait only if the request queue is full.\r\n>>\r\n>> The last sentence looks like it's taken from the sync_file_range() man\r\n>> page, but I think it's a bit too detailed. We could just say, \"The\r\n>> time usually only includes the time it takes to queue write-out\r\n>> requests.\", bit I'm not sure wh...\r\n> \r\n> Ah, yes, I indeed took heavy inspiration from the sync_file_range()\r\n> man page :) I've modified this comment in the attached v4. I didn't want\r\n> to say \"usually\" since I imagine it is quite workload and configuration\r\n> dependent.\r\n> \r\n>>>> 2) I'm a little nervous about not including IOObject in the writeback\r\n>>>> context. Technically, there is nothing stopping local buffer code from\r\n>>>> calling IssuePendingWritebacks(). Right now, local buffer code doesn't\r\n>>>> do ScheduleBufferTagForWriteback(). But it doesn't seem quite right to\r\n>>>> hardcode in IOOBJECT_RELATION when there is nothing wrong with\r\n>>>> requesting writeback of local buffers (AFAIK). 
What do you think?\r\n>>>\r\n>>> I've gone ahead and added IOObject to the WritebackContext.\r\n>>\r\n>> The smgropen call in IssuePendingWritebacks below clearly shows that\r\n>> the function only deals with shared buffers.\r\n>>\r\n>>> /* and finally tell the kernel to write the data to storage */\r\n>>> reln = smgropen(currlocator, InvalidBackendId);\r\n>>> smgrwriteback(reln, BufTagGetForkNum(&tag), tag.blockNum, nblocks);\r\n> \r\n> Yes, as it is currently, IssuePendingWritebacks() is only used for shared\r\n> buffers. My rationale for including IOObject is that localbuf.c calls\r\n> smgr* functions and there isn't anything stopping it from calling\r\n> smgrwriteback() or using WritebackContexts (AFAICT).\r\n> \r\n>> The callback-related code fully depends on callers following its\r\n>> expectation. So we can rewrite the following comment added to\r\n>> InitBufferPoll with a more confident tone.\r\n>>\r\n>> + * Initialize per-backend file flush context. IOObject is initialized to\r\n>> + * IOOBJECT_RELATION and IOContext to IOCONTEXT_NORMAL since these are the\r\n>> + * most likely targets for writeback. The backend can overwrite these as\r\n>> + * appropriate.\r\n> \r\n> I have updated this comment to be more confident and specific.\r\n> \r\n>> Or I actually think we might not even need to pass around the io_*\r\n>> parameters and could just pass immediate values to the\r\n>> pgstat_count_io_op_time call. If we ever start using shared buffers\r\n>> for thing other than relation files (for example SLRU?), we'll have to\r\n>> consider the target individually for each buffer block. That being\r\n>> said, I'm fine with how it is either.\r\n> \r\n> In IssuePendingWritebacks() we don't actually know which IOContext we\r\n> are issuing writebacks for when we call pgstat_count_io_op_time() (we do\r\n> issue pending writebacks for other IOContexts than IOCONTEXT_NORMAL). I\r\n> agree IOObject is not strictly necessary right now. I've kept IOObject a\r\n> member of WritebackContext for the reasons I mention above, however, I\r\n> am open to removing it if it adds confusion.\r\n\r\n[RMT hat]\r\n\r\nHoriguchi-san: do the changes that Melanie made address your feedback?\r\n\r\nIt'd be good if we can get this into Beta 1 if everyone is comfortable \r\nwith the patch.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 3 May 2023 11:36:10 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-27 11:36:49 -0400, Melanie Plageman wrote:\n> > > /* and finally tell the kernel to write the data to storage */\n> > > reln = smgropen(currlocator, InvalidBackendId);\n> > > smgrwriteback(reln, BufTagGetForkNum(&tag), tag.blockNum, nblocks);\n> \n> Yes, as it is currently, IssuePendingWritebacks() is only used for shared\n> buffers. My rationale for including IOObject is that localbuf.c calls\n> smgr* functions and there isn't anything stopping it from calling\n> smgrwriteback() or using WritebackContexts (AFAICT).\n\nI think it's extremely unlikely that we'll ever do that, because it's very\ncommon to have temp tables that are bigger than temp_buffers. We basically\nhope that the kernel can do good caching for us there.\n\n\n> > Or I actually think we might not even need to pass around the io_*\n> > parameters and could just pass immediate values to the\n> > pgstat_count_io_op_time call. If we ever start using shared buffers\n> > for thing other than relation files (for example SLRU?), we'll have to\n> > consider the target individually for each buffer block. That being\n> > said, I'm fine with how it is either.\n> \n> In IssuePendingWritebacks() we don't actually know which IOContext we\n> are issuing writebacks for when we call pgstat_count_io_op_time() (we do\n> issue pending writebacks for other IOContexts than IOCONTEXT_NORMAL). I\n> agree IOObject is not strictly necessary right now. I've kept IOObject a\n> member of WritebackContext for the reasons I mention above, however, I\n> am open to removing it if it adds confusion.\n\nI don't think it's really worth adding struct members for potential future\nsafety. We can just add them later if we end up needing them.\n\n\n> From 7cdd6fc78ed82180a705ab9667714f80d08c5f7d Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Mon, 24 Apr 2023 18:21:54 -0400\n> Subject: [PATCH v4] Add writeback to pg_stat_io\n> \n> 28e626bde00 added the notion of IOOps but neglected to include\n> writeback. With the addition of IO timing to pg_stat_io in ac8d53dae5,\n> the omission of writeback caused some confusion. Checkpointer write\n> timing in pg_stat_io often differed greatly from the write timing\n> written to the log. To fix this, add IOOp IOOP_WRITEBACK and track\n> writebacks and writeback timing in pg_stat_io.\n\nFor the future: It'd be good to note that catversion needs to be increased.\n\nOff-topic: I've been wondering about computing a \"catversion hash\" based on\nall the files going into initdb. At least to have some tooling to detect\nmissing catversion increases...\n\n\n> index 99f7f95c39..27b6f1a0a0 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -3867,6 +3867,32 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> </entry>\n> </row>\n> \n> + <row>\n> + <entry role=\"catalog_table_entry\">\n> + <para role=\"column_definition\">\n> + <structfield>writebacks</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of units of size <varname>op_bytes</varname> which the backend\n> + requested the kernel write out to permanent storage.\n> + </para>\n> + </entry>\n> + </row>\n\nI think the reference to \"backend\" here is somewhat misplaced - it could be\ncheckpointer or bgwriter as well. 
We don't reference the backend in other\ncomparable columns of pgsio either...\n\n\n> diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c\n> index 0057443f0c..a7182fe95a 100644\n> --- a/src/backend/storage/buffer/buf_init.c\n> +++ b/src/backend/storage/buffer/buf_init.c\n> @@ -145,9 +145,15 @@ InitBufferPool(void)\n> \t/* Init other shared buffer-management stuff */\n> \tStrategyInitialize(!foundDescs);\n> \n> -\t/* Initialize per-backend file flush context */\n> -\tWritebackContextInit(&BackendWritebackContext,\n> -\t\t\t\t\t\t &backend_flush_after);\n> +\t/*\n> +\t * Initialize per-backend file flush context. IOContext is initialized to\n> +\t * IOCONTEXT_NORMAL because this is the most common context. IOObject is\n> +\t * initialized to IOOBJECT_RELATION because writeback is currently only\n> +\t * requested for permanent relations in shared buffers. The backend can\n> +\t * overwrite these as appropriate.\n> +\t */\n> +\tWritebackContextInit(&BackendWritebackContext, IOOBJECT_RELATION,\n> +\t\t\t\t\t\t IOCONTEXT_NORMAL, &backend_flush_after);\n> }\n>\n\nThis seems somewhat icky.\n\n\n> /*\n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index 1fa689052e..116910cdfe 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -1685,6 +1685,8 @@ again:\n> \t\tFlushBuffer(buf_hdr, NULL, IOOBJECT_RELATION, io_context);\n> \t\tLWLockRelease(content_lock);\n> \n> +\t\tBackendWritebackContext.io_object = IOOBJECT_RELATION;\n> +\t\tBackendWritebackContext.io_context = io_context;\n> \t\tScheduleBufferTagForWriteback(&BackendWritebackContext,\n> \t\t\t\t\t\t\t\t\t &buf_hdr->tag);\n> \t}\n\nWhat about passing the io_context to ScheduleBufferTagForWriteback instead?\n\n\n> --- a/src/test/regress/sql/stats.sql\n> +++ b/src/test/regress/sql/stats.sql\n\nHm. Could we add a test for this? While it's not implemented everywhere, we\nstill issue the smgrwriteback() afaics. The default for the _flush_after GUCs\nchanges, but backend_flush_after is USERSET, so we could just change it for a\nsingle command.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 May 2023 09:44:02 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-03 11:36:10 -0400, Jonathan S. Katz wrote:\n> It'd be good if we can get this into Beta 1 if everyone is comfortable with\n> the patch.\n\nI think we need one more iteration, then I think it can be committed. The\nchanges are docs phrasing and polishing the API a bit, which shouldn't be too\nhard. I'll try to look more tomorrow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 May 2023 09:46:54 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-24 21:29:48 -0400, Melanie Plageman wrote:\n> 1) Does it make sense for writebacks to count the number of blocks for\n> which writeback was requested or the number of calls to smgrwriteback() or\n> the number of actual syscalls made? We don't actually know from outside\n> of mdwriteback() how many FileWriteback() calls we will make.\n\nThe number of blocks is the right thing IMO. The rest is just some\nmicro-optimization.\n\n\n> 2) I'm a little nervous about not including IOObject in the writeback\n> context. Technically, there is nothing stopping local buffer code from\n> calling IssuePendingWritebacks(). Right now, local buffer code doesn't\n> do ScheduleBufferTagForWriteback(). But it doesn't seem quite right to\n> hardcode in IOOBJECT_RELATION when there is nothing wrong with\n> requesting writeback of local buffers (AFAIK). What do you think?\n\nI think it'd be wrong on performance grounds ;). We could add an assertion to\nScheduleBufferTagForWriteback(), I guess, to document that fact?\n\n\n> 3) Should any restrictions be added to pgstat_tracks_io_object() or\n> pgstat_tracks_io_op()? I couldn't think of any backend types or IO\n> contexts which would not do writeback as a rule. Also, though we don't\n> do writeback for temp tables now, it isn't nonsensical to do so. In\n> this version, I didn't add any restrictions.\n\nI think the temp table restriction could be encoded for now, I don't forsee\nthat changing anytime soon.\n\nAbout other restrictions: Anything that triggers catalog access can trigger\nbuffers to be written back. Checkpointer and bgwriter don't do catalog access,\nbut have explicit writeback calls. WAL receiver, WAL writer, syslogger and\narchiver are excluded on a backend type basis. I think the startup process\nwill use the normal backend logic, and thus also trigger writebacks?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 May 2023 09:57:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On 5/4/23 12:46 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2023-05-03 11:36:10 -0400, Jonathan S. Katz wrote:\r\n>> It'd be good if we can get this into Beta 1 if everyone is comfortable with\r\n>> the patch.\r\n> \r\n> I think we need one more iteration, then I think it can be committed. The\r\n> changes are docs phrasing and polishing the API a bit, which shouldn't be too\r\n> hard. I'll try to look more tomorrow.\r\n\r\n[RMT hat, personal opinion]\r\n\r\nGreat to hear. From my skim of the patch, I had thought the conclusion \r\nwould be something similar, but did want to hear from you & Melanie on that.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 4 May 2023 15:41:57 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "v5 attached.\n\nOn Thu, May 4, 2023 at 12:44 PM Andres Freund <[email protected]> wrote:\n> On 2023-04-27 11:36:49 -0400, Melanie Plageman wrote:\n> > > > /* and finally tell the kernel to write the data to storage */\n> > > > reln = smgropen(currlocator, InvalidBackendId);\n> > > > smgrwriteback(reln, BufTagGetForkNum(&tag), tag.blockNum, nblocks);\n> >\n> > Yes, as it is currently, IssuePendingWritebacks() is only used for shared\n> > buffers. My rationale for including IOObject is that localbuf.c calls\n> > smgr* functions and there isn't anything stopping it from calling\n> > smgrwriteback() or using WritebackContexts (AFAICT).\n>\n> I think it's extremely unlikely that we'll ever do that, because it's very\n> common to have temp tables that are bigger than temp_buffers. We basically\n> hope that the kernel can do good caching for us there.\n>\n>\n> > > Or I actually think we might not even need to pass around the io_*\n> > > parameters and could just pass immediate values to the\n> > > pgstat_count_io_op_time call. If we ever start using shared buffers\n> > > for thing other than relation files (for example SLRU?), we'll have to\n> > > consider the target individually for each buffer block. That being\n> > > said, I'm fine with how it is either.\n> >\n> > In IssuePendingWritebacks() we don't actually know which IOContext we\n> > are issuing writebacks for when we call pgstat_count_io_op_time() (we do\n> > issue pending writebacks for other IOContexts than IOCONTEXT_NORMAL). I\n> > agree IOObject is not strictly necessary right now. I've kept IOObject a\n> > member of WritebackContext for the reasons I mention above, however, I\n> > am open to removing it if it adds confusion.\n>\n> I don't think it's really worth adding struct members for potential future\n> safety. We can just add them later if we end up needing them.\n\nI've removed both members of WritebackContext and hard-coded\nIOOBJECT_RELATION in the call to pgstat_count_io_op_time().\n\n> > From 7cdd6fc78ed82180a705ab9667714f80d08c5f7d Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Mon, 24 Apr 2023 18:21:54 -0400\n> > Subject: [PATCH v4] Add writeback to pg_stat_io\n> >\n> > 28e626bde00 added the notion of IOOps but neglected to include\n> > writeback. With the addition of IO timing to pg_stat_io in ac8d53dae5,\n> > the omission of writeback caused some confusion. Checkpointer write\n> > timing in pg_stat_io often differed greatly from the write timing\n> > written to the log. To fix this, add IOOp IOOP_WRITEBACK and track\n> > writebacks and writeback timing in pg_stat_io.\n>\n> For the future: It'd be good to note that catversion needs to be increased.\n\nNoted. 
I've added it to the commit message since I did a new version\nanyway.\n\n> > index 99f7f95c39..27b6f1a0a0 100644\n> > --- a/doc/src/sgml/monitoring.sgml\n> > +++ b/doc/src/sgml/monitoring.sgml\n> > @@ -3867,6 +3867,32 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n> > </entry>\n> > </row>\n> >\n> > + <row>\n> > + <entry role=\"catalog_table_entry\">\n> > + <para role=\"column_definition\">\n> > + <structfield>writebacks</structfield> <type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Number of units of size <varname>op_bytes</varname> which the backend\n> > + requested the kernel write out to permanent storage.\n> > + </para>\n> > + </entry>\n> > + </row>\n>\n> I think the reference to \"backend\" here is somewhat misplaced - it could be\n> checkpointer or bgwriter as well. We don't reference the backend in other\n> comparable columns of pgsio either...\n\nSo, I tried to come up with something that doesn't make reference to\nany \"requester\" of the writeback and the best I could do was:\n\n\"Number of units of size op_bytes requested that the kernel write out.\"\n\nThis is awfully awkward sounding.\n\n\"backend_type\" is the name of the column in pg_stat_io. Client backends\nare always referred to as such in the pg_stat_io documentation. Thus, I\nthink it is reasonable to use the word \"backend\" and assume people\nunderstand it could be any type of backend.\n\nHowever, since the existing docs for pg_stat_bgwriter use \"backend\" to\nmean \"client backend\", and I see a few uses of the word \"process\" in the\nstats docs, I've changed my use of the word \"backend\" to \"process\".\n\n> > diff --git a/src/backend/storage/buffer/buf_init.c b/src/backend/storage/buffer/buf_init.c\n> > index 0057443f0c..a7182fe95a 100644\n> > --- a/src/backend/storage/buffer/buf_init.c\n> > +++ b/src/backend/storage/buffer/buf_init.c\n> > @@ -145,9 +145,15 @@ InitBufferPool(void)\n> > /* Init other shared buffer-management stuff */\n> > StrategyInitialize(!foundDescs);\n> >\n> > - /* Initialize per-backend file flush context */\n> > - WritebackContextInit(&BackendWritebackContext,\n> > - &backend_flush_after);\n> > + /*\n> > + * Initialize per-backend file flush context. IOContext is initialized to\n> > + * IOCONTEXT_NORMAL because this is the most common context. IOObject is\n> > + * initialized to IOOBJECT_RELATION because writeback is currently only\n> > + * requested for permanent relations in shared buffers. 
The backend can\n> > + * overwrite these as appropriate.\n> > + */\n> > + WritebackContextInit(&BackendWritebackContext, IOOBJECT_RELATION,\n> > + IOCONTEXT_NORMAL, &backend_flush_after);\n> > }\n> >\n>\n> This seems somewhat icky.\n\nI've removed both IOObject and IOContext from WritebackContext.\n\n> > /*\n> > diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> > index 1fa689052e..116910cdfe 100644\n> > --- a/src/backend/storage/buffer/bufmgr.c\n> > +++ b/src/backend/storage/buffer/bufmgr.c\n> > @@ -1685,6 +1685,8 @@ again:\n> > FlushBuffer(buf_hdr, NULL, IOOBJECT_RELATION, io_context);\n> > LWLockRelease(content_lock);\n> >\n> > + BackendWritebackContext.io_object = IOOBJECT_RELATION;\n> > + BackendWritebackContext.io_context = io_context;\n> > ScheduleBufferTagForWriteback(&BackendWritebackContext,\n> > &buf_hdr->tag);\n> > }\n>\n> What about passing the io_context to ScheduleBufferTagForWriteback instead?\n\nI assume we don't want to include the time spent in\nsort_pending_writebacks(), so I've added io_context as a parameter to\nScheduleBufferTagForWriteback and threaded it through\nIssuePendingWritebacks() as well.\n\nBecause WritebackContext was just called \"context\" as a function\nparameter to these functions and that was easy to confuse with\n\"io_context\", I've changed the name of the WritebackContext function\nparameter to \"wb_context\". I separated this rename into its own commit so\nthat the diff for the commit adding writeback is more clear. I assume\nthe committer will squash those commits.\n\n> > --- a/src/test/regress/sql/stats.sql\n> > +++ b/src/test/regress/sql/stats.sql\n>\n> Hm. Could we add a test for this? While it's not implemented everywhere, we\n> still issue the smgrwriteback() afaics. The default for the _flush_after GUCs\n> changes, but backend_flush_after is USERSET, so we could just change it for a\n> single command.\n\nI couldn't come up with a way to write a test for this.\nGetVictimBuffer() is only called when flushing a dirty buffer. I tried\nadding backend_flush_after = 1 and sum(writebacks) to the test for\nreuses with vacuum strategy, but there were never more writebacks (in\nany context) after doing the full table rewrite with VACUUM. I presume\nthis is because checkpointer or bgwriter is writing out the dirty\nbuffers before our client backend gets to reusing them. And, since\nbgwriter/checkpointer_flush_after are not USERSET, I don't think we can\nguarantee this will cause writeback operations. Just anecdotally, I\nincreased size of the table to exceed checkpoint_flush_after on my\nPostgres instance and I could get the test to cause writeback, but that\ndoesn't work for a portable test. This was the same reason we couldn't\ntest writes for VACUUM strategy.\n\nI did notice while working on this that, with the addition of the VACUUM\nparameter BUFFER_USAGE_LIMIT, we could decrease the size of the table in\nthe vacuum strategy reuses test. Not sure if this is legit to commit now\nsince it isn't required for the writeback patch set, but I included a\npatch for it in this patchset.\n\nOn Thu, May 4, 2023 at 12:57 PM Andres Freund <[email protected]> wrote:\n> On 2023-04-24 21:29:48 -0400, Melanie Plageman wrote:\n>\n> > 2) I'm a little nervous about not including IOObject in the writeback\n> > context. Technically, there is nothing stopping local buffer code from\n> > calling IssuePendingWritebacks(). Right now, local buffer code doesn't\n> > do ScheduleBufferTagForWriteback(). 
But it doesn't seem quite right to\n> > hardcode in IOOBJECT_RELATION when there is nothing wrong with\n> > requesting writeback of local buffers (AFAIK). What do you think?\n>\n> I think it'd be wrong on performance grounds ;). We could add an assertion to\n> ScheduleBufferTagForWriteback(), I guess, to document that fact?\n\nNow that it doesn't have access to IOObject, no need. I've added\ncomments elsewhere.\n\n> > 3) Should any restrictions be added to pgstat_tracks_io_object() or\n> > pgstat_tracks_io_op()? I couldn't think of any backend types or IO\n> > contexts which would not do writeback as a rule. Also, though we don't\n> > do writeback for temp tables now, it isn't nonsensical to do so. In\n> > this version, I didn't add any restrictions.\n>\n> I think the temp table restriction could be encoded for now, I don't forsee\n> that changing anytime soon.\n\nI've done that in the attached v5.\n\n- Melanie",
"msg_date": "Sat, 6 May 2023 13:30:39 -0400",
"msg_from": "Melanie Plageman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
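For reference, the VACUUM parameter mentioned in the last paragraph is used as below in PG16; 128 kB is the smallest value the option accepts, and the table name is only a placeholder. Capping the ring size like this is what would let the vacuum-strategy reuses test get away with a smaller table.

    VACUUM (BUFFER_USAGE_LIMIT '128 kB') some_table;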
{
"msg_contents": "On 5/6/23 1:30 PM, Melanie Plageman wrote:\r\n\r\n> I've done that in the attached v5.\r\n\r\n[RMT hat]\r\n\r\nRMT nudge on this thread, as we're approaching the Beta 1 cutoff. From \r\nthe above discussion, it sounds like it's pretty close to being ready.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 16 May 2023 10:30:27 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "Hi,\n\nOn 2023-05-16 10:30:27 -0400, Jonathan S. Katz wrote:\n> On 5/6/23 1:30 PM, Melanie Plageman wrote:\n> \n> > I've done that in the attached v5.\n> \n> [RMT hat]\n> \n> RMT nudge on this thread, as we're approaching the Beta 1 cutoff. From the\n> above discussion, it sounds like it's pretty close to being ready.\n\nThanks for the nudge. I just pushed the changes, with some very minor changes\n(a newline, slight changes in commit messages).\n\nI'll go and mark the item as closed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 May 2023 12:19:24 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
},
{
"msg_contents": "On 5/17/23 3:19 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2023-05-16 10:30:27 -0400, Jonathan S. Katz wrote:\r\n>> On 5/6/23 1:30 PM, Melanie Plageman wrote:\r\n>>\r\n>>> I've done that in the attached v5.\r\n>>\r\n>> [RMT hat]\r\n>>\r\n>> RMT nudge on this thread, as we're approaching the Beta 1 cutoff. From the\r\n>> above discussion, it sounds like it's pretty close to being ready.\r\n> \r\n> Thanks for the nudge. I just pushed the changes, with some very minor changes\r\n> (a newline, slight changes in commit messages).\r\n> \r\n> I'll go and mark the item as closed.\r\n\r\nNice! Thank you,\r\n\r\nJonathan",
"msg_date": "Wed, 17 May 2023 15:21:25 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_io not tracking smgrwriteback() is confusing"
}
] |
[
{
"msg_contents": "I wondered at [1] why ExecAggTransReparent doesn't do what you'd\nguess from the name, that is reparent a supplied R/W expanded object\nunder the aggcontext even if it's not there already. I tried to\nmake it do that, and after I'd finished bandaging my wounds,\nI wrote some commentary explaining why that won't work. (Possibly\nwe knew this at some point, but if so we sure failed to provide\ndocumentation about it.)\n\nI think we should at least apply the attached commentary-only\npatch. I wonder though if we should change the name of this\nfunction, and if so to what. Maybe ExecAggCopyTransValue?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2199319.1681662388%40sss.pgh.pa.us",
"msg_date": "Wed, 19 Apr 2023 14:50:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "ExecAggTransReparent is underdocumented and badly named"
}
] |
[
{
"msg_contents": "As far as I know, when a index page is full, if you insert a new tuple here, you will split it into two pages.\r\nBut pg won't delete the half tuples in the old page in real. So if there is another tuple inserted into this old\r\npage, will pg split it again? I think that's not true, so how it solve this one? please give me a code example,thanks.\r\n\r\n\r\[email protected]\r\n\n\nAs far as I know, when a index page is full, if you insert a new tuple here, you will split it into two pages.\nBut pg won't delete the half tuples in the old page in real. So if there is another tuple inserted into this oldpage, will pg split it again? I think that's not true, so how it solve this one? please give me a code example,thanks.\[email protected]",
"msg_date": "Thu, 20 Apr 2023 09:40:32 +0800",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Howdoes; pg; index; page; optimize; dead; tuples?;"
},
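For what it's worth, the short answer is that nbtree does not keep splitting the same page: index entries whose heap tuples are dead get flagged LP_DEAD by index scans (the kill_prior_tuple optimization), and before splitting a full leaf page the code first tries to delete those flagged items in place (and, in recent releases, to deduplicate or perform bottom-up deletion), with VACUUM removing whatever remains. One way to watch this from SQL is the contrib module pageinspect; the index name and block number below are only examples (block 0 is the metapage, block 1 is typically the first leaf of a small index).

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT blkno, live_items, dead_items, free_size
      FROM bt_page_stats('some_btree_index', 1);
    -- per-item view; the "dead" column shows LP_DEAD-flagged entries
    SELECT itemoffset, ctid, dead
      FROM bt_page_items('some_btree_index', 1);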
{
"msg_contents": "On Wed, Apr 19, 2023 at 9:40 PM [email protected] <[email protected]> wrote:\n\n> As far as I know, when a index page is full, if you insert a new tuple\n> here, you will split it into two pages.\n> But pg won't delete the half tuples in the old page in real. So if there\n> is another tuple inserted into this old\n> page, will pg split it again? I think that's not true, so how it solve\n> this one? please give me a code example,thanks\n>\n\nThis is not how the hackers list works; you need to do your own research.\nThe Postgres code is pretty straightforward and giving you examples in\nisolation makes no sense. If you want to understand how things actually\nwork, you need to read the code in context and understand how the system\nworks, minimally, at a component level.\n\n\n\n-- \nJonah H. Harris\n\nOn Wed, Apr 19, 2023 at 9:40 PM [email protected] <[email protected]> wrote:\nAs far as I know, when a index page is full, if you insert a new tuple here, you will split it into two pages.\nBut pg won't delete the half tuples in the old page in real. So if there is another tuple inserted into this oldpage, will pg split it again? I think that's not true, so how it solve this one? please give me a code example,thanksThis is not how the hackers list works; you need to do your own research. The Postgres code is pretty straightforward and giving you examples in isolation makes no sense. If you want to understand how things actually work, you need to read the code in context and understand how the system works, minimally, at a component level.-- Jonah H. Harris",
"msg_date": "Wed, 19 Apr 2023 21:48:43 -0400",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Howdoes; pg; index; page; optimize; dead; tuples?;"
}
] |
[
{
"msg_contents": "As far as I know, when a index page is full, if you insert a new tuple here, you will split it into two pages.\r\nBut pg won't delete the half tuples in the old page in real. So if there is another tuple inserted into this old\r\npage, will pg split it again? I think that's not true, so how it solve this one? please give me a code example,thanks.\r\n\r\n\r\[email protected]\r\n\n\nAs far as I know, when a index page is full, if you insert a new tuple here, you will split it into two pages.But pg won't delete the half tuples in the old page in real. So if there is another tuple inserted into this oldpage, will pg split it again? I think that's not true, so how it solve this one? please give me a code example,thanks.\[email protected]",
"msg_date": "Thu, 20 Apr 2023 09:42:26 +0800",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How does pg index page optimize dead tuples?"
}
] |
[
{
"msg_contents": "Hi,\n\nConsidering two partitionned tables with a FK between them:\n\n DROP TABLE IF EXISTS p, c, c_1 CASCADE;\n\n ----------------------------------\n -- Parent table + partition + data\n CREATE TABLE p (\n id bigint PRIMARY KEY\n )\n PARTITION BY list (id);\n\n CREATE TABLE p_1 PARTITION OF p FOR VALUES IN (1);\n\n INSERT INTO p VALUES (1);\n\n ------------------------------------\n -- Child table + partition + data\n CREATE TABLE c (\n id bigint PRIMARY KEY,\n p_id bigint NOT NULL,\n FOREIGN KEY (p_id) REFERENCES p (id)\n )\n PARTITION BY list (id);\n\n CREATE TABLE c_1 PARTITION OF c FOR VALUES IN (1);\n\n INSERT INTO c VALUES (1,1);\n\nAfter DETACHing the \"c_1\" partition, current implementation make sure it\nkeeps the FK herited from its previous top table \"c\":\n\n ALTER TABLE c DETACH PARTITION c_1;\n \\d c_1\n -- outputs:\n -- [...]\n -- Foreign-key constraints:\n -- \"c_p_id_fkey\" FOREIGN KEY (p_id) REFERENCES p(id)\n\nHowever, because the referenced side is partionned, this FK is half backed, with\nonly the referencing (insert/update on c_1) side enforced, but not the\nreferenced side (update/delete on p):\n\n INSERT INTO c_1 VALUES (2,2); -- fails as EXPECTED\n -- ERROR: insert or update on table \"child_1\" violates foreign key [...]\n\n DELETE FROM p; -- should actually fail\n -- DELETE 1\n\n SELECT * FROM c_1;\n -- id | parent_id \n -- ----+-----------\n -- 1 | 1\n -- (1 row)\n\n SELECT * FROM p;\n -- id \n -- ----\n -- (0 rows)\n\nWhen detaching \"c_1\", current implementation adds two triggers to enforce\nUPDATE/DELETE on \"p\" are restricted if \"c_1\" keeps referencing the\nrelated rows... But it forgets to add them on partitions of \"p_1\", where the\ntriggers should actually fire.\n\nTo make it clear, the FK c_1 -> p constraint and triggers after DETACHING c_1\nare:\n\n SELECT c.oid AS conid, c.conname, c.conparentid AS conparent,\n r2.relname AS pkrel,\n t.tgrelid::regclass AS tgrel,\n p.proname\n FROM pg_constraint c \n JOIN pg_class r ON c.conrelid = r.oid\n JOIN pg_class r2 ON c.confrelid = r2.oid\n JOIN pg_trigger t ON t.tgconstraint = c.oid\n JOIN pg_proc p ON p.oid = t.tgfoid\n WHERE r.relname = 'c_1' AND r2.relname LIKE 'p%'\n ORDER BY r.relname, c.conname, t.tgrelid::regclass::text, p.proname;\n\n -- conid | conname | conparent | pkrel | tgrel | proname \n -- -------+-------------+-----------+-------+-------+----------------------\n -- 18454 | c_p_id_fkey | 0 | p | c_1 | RI_FKey_check_ins\n -- 18454 | c_p_id_fkey | 0 | p | c_1 | RI_FKey_check_upd\n -- 18454 | c_p_id_fkey | 0 | p | p | RI_FKey_noaction_del\n -- 18454 | c_p_id_fkey | 0 | p | p | RI_FKey_noaction_upd\n\nWhere they should be:\n\n -- conid | conname | conparent | pkrel | tgrel | proname \n -- -------+--------------+-----------+-------+-------+----------------------\n -- 18454 | c_p_id_fkey | 0 | p | c_1 | RI_FKey_check_ins\n -- 18454 | c_p_id_fkey | 0 | p | c_1 | RI_FKey_check_upd\n -- 18454 | c_p_id_fkey | 0 | p | p | RI_FKey_noaction_del\n -- 18454 | c_p_id_fkey | 0 | p | p | RI_FKey_noaction_upd\n -- NEW!! | c_p_id_fkey1 | 18454 | p_1 | p_1 | RI_FKey_noaction_del\n -- NEW!! | c_p_id_fkey1 | 18454 | p_1 | p_1 | RI_FKey_noaction_upd\n\nI poked around DetachPartitionFinalize() to try to find a way to fix this, but\nit looks like it would duplicate a bunch of code from other code path (eg.\nfrom CloneFkReferenced).\n\nInstead of tweaking existing FK, keeping old constraint name (wouldn't\n\"c_1_p_id_fkey\" be better after detach?) 
and duplicating some code around, what\nabout cleaning up the FK constraints from the detached table and\nrecreating a cleaner one using the known code path ATAddForeignKeyConstraint() ?\n\nThanks for reading me down to here!\n\n++\n\n\n",
"msg_date": "Thu, 20 Apr 2023 14:43:44 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <[email protected]>",
"msg_from_op": true,
"msg_subject": "[BUG] FK broken after DETACHing referencing part"
}
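Until this is fixed server-side, a manual workaround along the lines suggested in the last paragraph appears to be dropping the half-working constraint on the detached partition and re-creating it, so that the normal ADD CONSTRAINT path builds the full set of action triggers on p and its partitions. The constraint name being dropped is the one shown in the \d output above; the new name is arbitrary, and the re-add re-validates existing rows, so it scans both tables.

    ALTER TABLE c_1 DROP CONSTRAINT c_p_id_fkey;
    ALTER TABLE c_1 ADD CONSTRAINT c_1_p_id_fkey
        FOREIGN KEY (p_id) REFERENCES p (id);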
] |
[
{
"msg_contents": "Hi All,\nLockMethodLocalHash hash table is allocated in memory context \"\"LOCALLOCK hash\".\nLockAcquireExtended() fetches an entry from this hash table and\nallocates memory for locallock->lockOwners in TopMemoryContext. Thus\nthe entry locallock and an array pointed from this entry are allocated\nin two different memory contexts.\n\nTheoretically if LockMethodLocalHash was destroyed with some entries\nin it, it would leave some dangling pointers in TopMemoryContext. It\nlooks to me that the array lockOwners should be allocated in same\ncontext as LOCALLOCK hash or its child. Something like below\n\ndiff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c\nindex 42595b38b2..32804b1c2c 100644\n--- a/src/backend/storage/lmgr/lock.c\n+++ b/src/backend/storage/lmgr/lock.c\n@@ -843,7 +843,7 @@ LockAcquireExtended(const LOCKTAG *locktag,\n locallock->maxLockOwners = 8;\n locallock->lockOwners = NULL; /* in case next line fails */\n locallock->lockOwners = (LOCALLOCKOWNER *)\n- MemoryContextAlloc(TopMemoryContext,\n+ MemoryContextAlloc(GetMemoryChunkContext(locallock),\n\nlocallock->maxLockOwners * sizeof(LOCALLOCKOWNER));\n }\n else\n\nLockMethodLocalHash is hash_destroyed() in InitLocks(). The comment\nthere mentions that possibly there are no entries in that hash table\nat that time. So the problem described above is rare or non-existent\nas of now. But it's possibly worth fixing while it's not too serious.\n\nThere were some untraceable bugs related to locks reported earlier\n[1]. Those may be linked to this. But we couldn't establish the\nconnection.\n\n[1] https://www.postgresql.org/message-id/flat/5227.1315428924%40sss.pgh.pa.us#00116525613b7ddb82669d2ba358b31e\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 20 Apr 2023 19:01:49 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": true,
"msg_subject": "memory context mismatch in LockMethodLocalHash entries."
}
] |
[
{
"msg_contents": "Hi,\n\nMoving this to -hackers.\n\nOn 2023-04-20 11:49:23 +0200, Palle Girgensohn wrote:\n> I was recently made aware of a problem building postgresql using LLVM binutils.\n> \n> A summary:\n> \n> --\n> \n> pgsql's build has requested to strip all non-global symbols (strip -x), but\n> there is at least one non-global symbol that in fact cannot be stripped\n> because it is referenced by a relocation.\n\nAn important detail here is that this happens when stripping static\nlibraries. It's not too surprising that one needs the symbols referenced by\nrelocations until after linking with the static lib, even if they're not\nglobally visible symbols.\n\n\n> Both GNU strip and ELF Tool Chain strip silently handle this case (and just retain the local symbol), but LLVM strip is stricter and emits an error upon request to strip a non-removable local symbol.\n> \n> There is an LLVM ticket open for this at https://github.com/llvm/llvm-project/issues/47468, and it may make sense for LLVM strip to behave the same as GNU and ELF Tool Chain strip. That said, pgsql should just not use strip -x when there are symbols that cannot be stripped.\n\nPersonally I'd say stripping symbols is something that should just not be done\nanymore, but ...\n\n> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=270769\n> \n> https://reviews.freebsd.org/D39590\n> \n> \n> Any toughts about this? Should send the suggested patch upstreams? Or do we just consider LLVM to behave badly?\n\nPeter, it's unlikely given the timeframe, but do you happen to remember why\nyou specified -x when stripping static libs? This seems to be all the way back\nfrom\n\ncommit 563673e15db995b6f531b44be7bb162330ac157a\nAuthor: Peter Eisentraut <[email protected]>\nDate: 2002-04-10 16:45:25 +0000\n\n Add make install-strip target.\n\n\nAfaict the only safe thing to use when stripping static libs is\n-g/--strip-debug.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 20 Apr 2023 08:33:38 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> Afaict the only safe thing to use when stripping static libs is\n> -g/--strip-debug.\n\nThe previous complaint about this [1] suggested we use --strip-unneeded\nfor all cases with GNU strip, same as we've long done for shared libs.\nIt's an easy enough change, but I wonder if anyone will complain.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/17898-5308d09543463266%40postgresql.org\n\n\n",
"msg_date": "Thu, 20 Apr 2023 12:43:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-20 12:43:48 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Afaict the only safe thing to use when stripping static libs is\n> > -g/--strip-debug.\n>\n> The previous complaint about this [1] suggested we use --strip-unneeded\n> for all cases with GNU strip, same as we've long done for shared libs.\n> It's an easy enough change, but I wonder if anyone will complain.\n\n--strip-unneeded output is smaller than -x output:\n\n19M src/interfaces/libpq/libpq.a\n364K src/interfaces/libpq/libpq.a.strip.gnu.g\n352K src/interfaces/libpq/libpq.a.strip.gnu.unneeded\n356K src/interfaces/libpq/libpq.a.strip.gnu.x\n352K src/interfaces/libpq/libpq.a.strip.gnu.x.g\n\nstrip --version\nGNU strip (GNU Binutils for Debian) 2.40\n\nPartially that's because --strip-unneeded implies -g. Interestingly -x -g\noutput isn't quite the same as --strip-unneeded. The latter also removes the\n_GLOBAL_OFFSET_TABLE_ symbol.\n\n\nI doubt anybody wants to strip symbols and keep debug information, so I doubt\nthere's much ground for complaints?\n\n\nOddly the output of llvm-strip confuses binutils objdump enough that it claims\nthat \"file format not recognized\". Not sure which side is broken there.\n\nllvm-strip's output is a lot larger than gnu strip's:\n 19M src/interfaces/libpq/libpq.a\n 19M src/interfaces/libpq/libpq.a.strip.llvm.X\n908K src/interfaces/libpq/libpq.a.strip.llvm.g\n892K src/interfaces/libpq/libpq.a.strip.llvm.unneeded\n892K src/interfaces/libpq/libpq.a.strip.llvm.unneeded.g\n364K src/interfaces/libpq/libpq.a.strip.gnu.g\n356K src/interfaces/libpq/libpq.a.strip.gnu.x\n352K src/interfaces/libpq/libpq.a.strip.gnu.x.g\n352K src/interfaces/libpq/libpq.a.strip.gnu.unneeded\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 20 Apr 2023 12:35:42 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-04-20 12:43:48 -0400, Tom Lane wrote:\n>> The previous complaint about this [1] suggested we use --strip-unneeded\n>> for all cases with GNU strip, same as we've long done for shared libs.\n>> It's an easy enough change, but I wonder if anyone will complain.\n\n> I doubt anybody wants to strip symbols and keep debug information, so I doubt\n> there's much ground for complaints?\n\nAgreed. It doesn't look like --strip-unneeded is a worse choice than -x.\n\n> Oddly the output of llvm-strip confuses binutils objdump enough that it claims\n> that \"file format not recognized\". Not sure which side is broken there.\n\nNot our problem, I'd say ...\n\n> llvm-strip's output is a lot larger than gnu strip's:\n\n... nor that. These things do suggest that llvm-strip isn't all that\nclose to being ready for prime time, but if FreeBSD wants to push the\nenvelope on toolchain usage, who are we to stand in the way?\n\nI'll go make it so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Apr 2023 17:21:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "On 20.04.23 17:33, Andres Freund wrote:\n> Peter, it's unlikely given the timeframe, but do you happen to remember why\n> you specified -x when stripping static libs? This seems to be all the way back\n> from\n> \n> commit 563673e15db995b6f531b44be7bb162330ac157a\n> Author: Peter Eisentraut<[email protected]>\n> Date: 2002-04-10 16:45:25 +0000\n> \n> Add make install-strip target.\n> \n> \n> Afaict the only safe thing to use when stripping static libs is\n> -g/--strip-debug.\n\nI suspect this was copied from GNU Libtool. Libtool still has that but \nlater changed the stripping of static libraries on darwin to \"strip -S\". \n Maybe should adopt that.\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 18:41:54 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "On 21.04.23 18:41, Peter Eisentraut wrote:\n> On 20.04.23 17:33, Andres Freund wrote:\n>> Peter, it's unlikely given the timeframe, but do you happen to \n>> remember why\n>> you specified -x when stripping static libs? This seems to be all the \n>> way back\n>> from\n>>\n>> commit 563673e15db995b6f531b44be7bb162330ac157a\n>> Author: Peter Eisentraut<[email protected]>\n>> Date: 2002-04-10 16:45:25 +0000\n>>\n>> Add make install-strip target.\n>>\n>>\n>> Afaict the only safe thing to use when stripping static libs is\n>> -g/--strip-debug.\n> \n> I suspect this was copied from GNU Libtool. Libtool still has that but \n> later changed the stripping of static libraries on darwin to \"strip -S\". \n> Maybe should adopt that.\n\nHere is the current logic in GNU Libtool:\n\nhttps://github.com/autotools-mirror/libtool/blob/master/m4/libtool.m4#L2214\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 18:49:04 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 20.04.23 17:33, Andres Freund wrote:\n>> Peter, it's unlikely given the timeframe, but do you happen to remember why\n>> you specified -x when stripping static libs?\n\n> I suspect this was copied from GNU Libtool. Libtool still has that but \n> later changed the stripping of static libraries on darwin to \"strip -S\". \n> Maybe should adopt that.\n\nI tried that, but it seems strictly worse on output file size:\n\n$ ll lib*/libpq.a\n-rw-r--r-- 1 tgl staff 715312 Apr 21 12:52 lib-no-strip/libpq.a\n-rw-r--r-- 1 tgl staff 209984 Apr 21 12:51 lib-strip-S/libpq.a\n-rw-r--r-- 1 tgl staff 208456 Apr 21 12:50 lib-strip-x/libpq.a\n$ ll lib*/libecpg.a\n-rw-r--r-- 1 tgl staff 324952 Apr 21 12:52 lib-no-strip/libecpg.a\n-rw-r--r-- 1 tgl staff 102752 Apr 21 12:51 lib-strip-S/libecpg.a\n-rw-r--r-- 1 tgl staff 102088 Apr 21 12:50 lib-strip-x/libecpg.a\n\nIf you use both -x and -S, you get the same file sizes as with -x\nalone. Not sure why we should change anything here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:00:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "On 21.04.23 19:00, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> On 20.04.23 17:33, Andres Freund wrote:\n>>> Peter, it's unlikely given the timeframe, but do you happen to remember why\n>>> you specified -x when stripping static libs?\n> \n>> I suspect this was copied from GNU Libtool. Libtool still has that but\n>> later changed the stripping of static libraries on darwin to \"strip -S\".\n>> Maybe should adopt that.\n> \n> I tried that, but it seems strictly worse on output file size:\n> \n> $ ll lib*/libpq.a\n> -rw-r--r-- 1 tgl staff 715312 Apr 21 12:52 lib-no-strip/libpq.a\n> -rw-r--r-- 1 tgl staff 209984 Apr 21 12:51 lib-strip-S/libpq.a\n> -rw-r--r-- 1 tgl staff 208456 Apr 21 12:50 lib-strip-x/libpq.a\n> $ ll lib*/libecpg.a\n> -rw-r--r-- 1 tgl staff 324952 Apr 21 12:52 lib-no-strip/libecpg.a\n> -rw-r--r-- 1 tgl staff 102752 Apr 21 12:51 lib-strip-S/libecpg.a\n> -rw-r--r-- 1 tgl staff 102088 Apr 21 12:50 lib-strip-x/libecpg.a\n> \n> If you use both -x and -S, you get the same file sizes as with -x\n> alone. Not sure why we should change anything here.\n\nThe complaint was that -x doesn't work correctly, no?\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 19:11:17 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LLVM strip -x fails"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> On 21.04.23 19:00, Tom Lane wrote:\n>> If you use both -x and -S, you get the same file sizes as with -x\n>> alone. Not sure why we should change anything here.\n\n> The complaint was that -x doesn't work correctly, no?\n\nThe complaint was that it doesn't work correctly if you are using\nllvm-strip. However, llvm-strip will be picked up by the previous\ntest for GNU strip, independently of what platform you're on.\nThe code in question here is only concerned with Apple's strip.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:30:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LLVM strip -x fails"
}
] |
[
{
"msg_contents": "The Core Team would like to extend our congratulations to\nNathan Bossart, Amit Langote, and Masahiko Sawada, who have\naccepted invitations to become our newest Postgres committers.\n\nPlease join me in wishing them much success and few reverts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 Apr 2023 13:40:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Thu, 20 Apr 2023 at 21:40, Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n>\n> regards, tom lane\nGreat news!\nIt's much deserved! Congratulations, Nathan, Amit and Masahiko!\n\nPavel Borisov\n\n\n",
"msg_date": "Thu, 20 Apr 2023 21:56:11 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 10:56 AM Pavel Borisov <[email protected]> wrote:\n> It's much deserved! Congratulations, Nathan, Amit and Masahiko!\n\nCongratulations to all three!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 20 Apr 2023 11:12:12 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On 2023-04-20 Th 14:12, Peter Geoghegan wrote:\n> On Thu, Apr 20, 2023 at 10:56 AM Pavel Borisov<[email protected]> wrote:\n>> It's much deserved! Congratulations, Nathan, Amit and Masahiko!\n> Congratulations to all three!\n\n\n+3\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-04-20 Th 14:12, Peter Geoghegan\n wrote:\n\n\nOn Thu, Apr 20, 2023 at 10:56 AM Pavel Borisov <[email protected]> wrote:\n\n\nIt's much deserved! Congratulations, Nathan, Amit and Masahiko!\n\n\n\nCongratulations to all three!\n\n\n\n+3\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 20 Apr 2023 15:32:14 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 01:40:49PM -0400, Tom Lane wrote:\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n> \n> Please join me in wishing them much success and few reverts.\n\nCongratulations to all of you!\n\nMay the buildfarm show kindness to your commits.\n--\nMichael",
"msg_date": "Fri, 21 Apr 2023 07:04:34 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Fri, 21 Apr 2023 at 05:40, Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n\nGreat news! Congratulations to all three!\n\nDavid\n\n\n",
"msg_date": "Fri, 21 Apr 2023 10:10:14 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 12:40 AM Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n\nCongratulations to all!\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Fri, Apr 21, 2023 at 12:40 AM Tom Lane <[email protected]> wrote:>> The Core Team would like to extend our congratulations to> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have> accepted invitations to become our newest Postgres committers.>> Please join me in wishing them much success and few reverts.Congratulations to all! --John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 21 Apr 2023 08:40:11 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Thu, 20 Apr 2023 at 23:10, Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n\nVery deserved. Congratulations Nathan, Amit and Masahiko.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 21 Apr 2023 07:37:18 +0530",
"msg_from": "vignesh C <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "At Thu, 20 Apr 2023 13:40:49 -0400, Tom Lane <[email protected]> wrote in \n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n> \n> Please join me in wishing them much success and few reverts.\n\nCongratulations to all, from me, too, wishing much success!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 21 Apr 2023 11:30:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 11:10 PM Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n>\n\nCongratulations to the new committers!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:01:47 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 5:40 AM Tom Lane <[email protected]> wrote:\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n\n+100. Great news.\n\n\n",
"msg_date": "Fri, 21 Apr 2023 15:34:04 +1200",
"msg_from": "Thomas Munro <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "Greate news! Hearty congratulations to Nathan, Amit and Masahiko.\n\nOn Thu, Apr 20, 2023 at 11:10 PM Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n>\n> regards, tom lane\n>\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:57:26 +0530",
"msg_from": "Ashutosh Bapat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 2:40 AM Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n>\n\nThank you and everyone for the wishes! Congratulations to the other\nnew committers.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 17:52:05 +0900",
"msg_from": "Masahiko Sawada <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "Many congratulations to all.\n\nOn Thursday, April 20, 2023, Tom Lane <[email protected]> wrote:\n\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com\n\nMany congratulations to all.On Thursday, April 20, 2023, Tom Lane <[email protected]> wrote:The Core Team would like to extend our congratulations to\nNathan Bossart, Amit Langote, and Masahiko Sawada, who have\naccepted invitations to become our newest Postgres committers.\n\nPlease join me in wishing them much success and few reverts.\n\n regards, tom lane\n\n\n-- Regards,Amul SulEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 21 Apr 2023 14:28:06 +0530",
"msg_from": "Amul Sul <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 17:52 Masahiko Sawada <[email protected]> wrote:\n\n> On Fri, Apr 21, 2023 at 2:40 AM Tom Lane <[email protected]> wrote:\n> >\n> > The Core Team would like to extend our congratulations to\n> > Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> > accepted invitations to become our newest Postgres committers.\n> >\n> > Please join me in wishing them much success and few reverts.\n> >\n>\n> Thank you and everyone for the wishes! Congratulations to the other\n> new committers.\n\n\n+1, thank you core team for the opportunity.\n\nThank you all for the wishes.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\nOn Fri, Apr 21, 2023 at 17:52 Masahiko Sawada <[email protected]> wrote:On Fri, Apr 21, 2023 at 2:40 AM Tom Lane <[email protected]> wrote:\n>\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n>\n\nThank you and everyone for the wishes! Congratulations to the other\nnew committers.+1, thank you core team for the opportunity.Thank you all for the wishes.-- Thanks, Amit LangoteEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 21 Apr 2023 20:10:56 +0900",
"msg_from": "Amit Langote <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 08:10:56PM +0900, Amit Langote wrote:\n> On Fri, Apr 21, 2023 at 17:52 Masahiko Sawada <[email protected]> wrote:\n>> Thank you and everyone for the wishes! Congratulations to the other\n>> new committers.\n> \n> +1, thank you core team for the opportunity.\n> \n> Thank you all for the wishes.\n\nThanks everyone! And congratulations to Masahiko and Amit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:33:17 -0700",
"msg_from": "Nathan Bossart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On 4/20/23 1:40 PM, Tom Lane wrote:\r\n> The Core Team would like to extend our congratulations to\r\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\r\n> accepted invitations to become our newest Postgres committers.\r\n> \r\n> Please join me in wishing them much success and few reverts.\r\n\r\nCongratulations Nathan, Amit, and Masahiko!\r\n\r\nJonathan",
"msg_date": "Sat, 22 Apr 2023 15:29:03 -0400",
"msg_from": "\"Jonathan S. Katz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "\n\n> On 20 Apr 2023, at 22:40, Tom Lane <[email protected]> wrote:\n> \n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n> \n> Please join me in wishing them much success and few reverts.\n\n\nCool! Congratulations Nathan, Amit, and Masahiko!\n\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 24 Apr 2023 12:50:49 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
},
{
"msg_contents": "On 4/20/23 13:40, Tom Lane wrote:\n> The Core Team would like to extend our congratulations to\n> Nathan Bossart, Amit Langote, and Masahiko Sawada, who have\n> accepted invitations to become our newest Postgres committers.\n>\n> Please join me in wishing them much success and few reverts.\n>\n\nHuge congrats !\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Mon, 24 Apr 2023 04:32:16 -0400",
"msg_from": "Jesper Pedersen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New committers: Nathan Bossart, Amit Langote, Masahiko Sawada"
}
] |
[
{
"msg_contents": ">I searched the codes and found some other places where the manipulation\n>of lists can be improved in a similar way.\n\n>* lappend(list_copy(list), datum) as in get_required_extension().\n>This is not very efficient as after list_copy it would need to enlarge\n>the list immediately. It can be improved by inventing a new function,\n>maybe called list_append_copy, that do the copy and append all together.\n\n>* lcons(datum, list_copy(list)) as in get_query_def().\n>This is also not efficient. Immediately after list_copy, we'd need to\n>enlarge the list and move all the entries. It can also be improved by\n>doing all these things all together in one function.\n\n>* lcons(datum, list_delete_nth_cell(list_copy(list), n)) as in\n>sort_inner_and_outer.\n>It'd need to copy all the elements, and then delete the n'th entry which\n>would cause all following entries be moved, and then move all the\n>remaining entries for lcons. Maybe we can invent a new function for it?\n\n>So is it worthwhile to improve these places?\n\nI think yes. It's very inefficient coping and moving, unnecessarily.\n\nPerhaps, like the attached patch?\n\nlcons_copy_delete needs a careful review.\n\n\n>I wonder if we can invent function list_nth_xid to do it, to keep\n>consistent with list_nth/list_nth_int/list_nth_oid.\n\nPerhaps list_nth_xid(const List *list, int n)?\n\nregards,\n\nRanier Vilela",
"msg_date": "Thu, 20 Apr 2023 18:33:20 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incremental sort for access method with ordered scan support\n (amcanorderbyop)"
}
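For illustration, a minimal sketch of the list_nth_xid() helper floated in the message above, written to mirror the existing list_nth_oid() inline function in pg_list.h. It assumes an lfirst_xid() accessor analogous to lfirst_oid() is available for XID lists; this is a sketch of the idea, not the actual attached patch.

```c
/*
 * Hypothetical helper for pg_list.h, mirroring list_nth_oid():
 * return the n'th TransactionId in an XID list (elements are
 * numbered from 0).  Assumes the lfirst_xid() accessor macro.
 */
static inline TransactionId
list_nth_xid(const List *list, int n)
{
	return lfirst_xid(list_nth_cell(list, n));
}
```

Callers could then write list_nth_xid(xids, n) instead of fetching the cell and dereferencing it by hand, matching the existing list_nth/list_nth_int/list_nth_oid family.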
] |
[
{
"msg_contents": "When reviewing a recently committed patch [1] I noticed the odd usage\nof a format specifier:\n\n+ libpq_append_conn_error(conn, \"invalid %s value: \\\"%s\\\"\",\n+ \"load_balance_hosts\",\n+ conn->load_balance_hosts);\n\nThe oddity is that the first %s is unnecessary, since the value we\nwant there is a constant. Typically a format specifier used to get the\nvalue stored in a variable.\n\nUpon closer look, it looks like this is a common pattern in\nfe-connect.c; there are many places where a %s format specifier is\nbeing used in the format sting, where the name of the parameter would\nhave sufficed.\n\nUpon some research, the only explanation I could come up with was that\nthis pattern of specifying error messages helps with message\ntranslations. This way there's just one message to be translated. For\nexample:\n\n.../libpq/po/es.po-#: fe-connect.c:1268 fe-connect.c:1294\nfe-connect.c:1336 fe-connect.c:1345\n.../libpq/po/es.po-#: fe-connect.c:1378 fe-connect.c:1422\n.../libpq/po/es.po-#, c-format\n.../libpq/po/es.po:msgid \"invalid %s value: \\\"%s\\\"\\n\"\n\nThere's just one exception to this pattern, though.\n\n> libpq_append_conn_error(conn, \"invalid require_auth method: \\\"%s\\\"\",\n> method);\n\nSo, to make it consistent throughout the file, we should either\nreplace all such %s format specifiers with the respective strings, or\nuse the same pattern for the message string used for require_method,\nas well. Attached patch [2] does the former, and patch [3] does the\nlatter.\n\nPick your favorite one.\n\n[1]: Support connection load balancing in libpq\n7f5b19817eaf38e70ad1153db4e644ee9456853e\n\n[2]: Replace placeholders with known strings\nv1-0001-Replace-placeholders-with-known-strings.patch\n\n[3]: Make require_auth error message similar to surrounding messages\nv1-0001-Make-require_auth-error-message-similar-to-surrou.patch\n\nBest regards,\nGurjeet http://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com",
"msg_date": "Thu, 20 Apr 2023 21:28:01 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Make message strings in fe-connect.c consistent"
},
{
"msg_contents": "Gurjeet Singh <[email protected]> writes:\n> When reviewing a recently committed patch [1] I noticed the odd usage\n> of a format specifier:\n\n> + libpq_append_conn_error(conn, \"invalid %s value: \\\"%s\\\"\",\n> + \"load_balance_hosts\",\n> + conn->load_balance_hosts);\n\n> The oddity is that the first %s is unnecessary, since the value we\n> want there is a constant. Typically a format specifier used to get the\n> value stored in a variable.\n\nThis is actually intentional, on the grounds that it reduces the\nnumber of format strings that require translation.\n\n> There's just one exception to this pattern, though.\n\n>> libpq_append_conn_error(conn, \"invalid require_auth method: \\\"%s\\\"\",\n>> method);\n\nYup, this one did not get the memo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Apr 2023 00:31:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make message strings in fe-connect.c consistent"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 9:31 PM Tom Lane <[email protected]> wrote:\n>\n> Gurjeet Singh <[email protected]> writes:\n> > When reviewing a recently committed patch [1] I noticed the odd usage\n> > of a format specifier:\n>\n> > + libpq_append_conn_error(conn, \"invalid %s value: \\\"%s\\\"\",\n> > + \"load_balance_hosts\",\n> > + conn->load_balance_hosts);\n>\n> > The oddity is that the first %s is unnecessary, since the value we\n> > want there is a constant. Typically a format specifier used to get the\n> > value stored in a variable.\n>\n> This is actually intentional, on the grounds that it reduces the\n> number of format strings that require translation.\n\nThat's the only reason I too could come up with.\n\n> > There's just one exception to this pattern, though.\n>\n> >> libpq_append_conn_error(conn, \"invalid require_auth method: \\\"%s\\\"\",\n> >> method);\n>\n> Yup, this one did not get the memo.\n\nThat explains why I could not find any translation for this error message.\n\nBest regards,\nGurjeet http://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com\n\n\n",
"msg_date": "Thu, 20 Apr 2023 22:02:35 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Make message strings in fe-connect.c consistent"
},
{
"msg_contents": "> On 21 Apr 2023, at 07:02, Gurjeet Singh <[email protected]> wrote:\n> On Thu, Apr 20, 2023 at 9:31 PM Tom Lane <[email protected]> wrote:\n\n>>>> libpq_append_conn_error(conn, \"invalid require_auth method: \\\"%s\\\"\",\n>>>> method);\n>> \n>> Yup, this one did not get the memo.\n\nI've pushed this, with the change to use the common \"invalid %s value\" format\nthat we use for all other libpq options. This makes this string make use of\nalready existing translations and makes error reporting consistent.\n\n> That explains why I could not find any translation for this error message.\n\nThe feature is new in master so any translations for it are yet to be merged\nfrom the translation repo.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 10:34:41 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Make message strings in fe-connect.c consistent"
}
] |
[
{
"msg_contents": "Commit [1] implements Fisher-Yates shuffling algorithm to shuffle\nconnection addresses, in two places.\n\nThe attached patch moves the duplicated code to a function, and calls\nit in those 2 places.\n\n[1]: Support connection load balancing in libpq\n7f5b19817eaf38e70ad1153db4e644ee9456853e\n\nBest regards,\nGurjeet https://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com",
"msg_date": "Thu, 20 Apr 2023 22:29:35 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Minor code de-duplication in fe-connect.c"
},
{
"msg_contents": "> On 21 Apr 2023, at 07:29, Gurjeet Singh <[email protected]> wrote:\n> \n> Commit [1] implements Fisher-Yates shuffling algorithm to shuffle\n> connection addresses, in two places.\n> \n> The attached patch moves the duplicated code to a function, and calls\n> it in those 2 places.\n\nThe reason I left it like this when reviewing and committing is that I think it\nmakes for more readable code. The amount of lines saved is pretty small, and\n\"shuffle\" isn't an exact term so by reading the code it isn't immediate clear\nwhat such a function would do. By having the shuffle algorithm where it's used\nit's clear what the code does and what the outcome is. If others disagree I\ncan go ahead and refactor of course, but I personally would not deem it a net\nwin in code quality.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 14:24:53 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Minor code de-duplication in fe-connect.c"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 8:25 AM Daniel Gustafsson <[email protected]> wrote:\n> The reason I left it like this when reviewing and committing is that I think it\n> makes for more readable code. The amount of lines saved is pretty small, and\n> \"shuffle\" isn't an exact term so by reading the code it isn't immediate clear\n> what such a function would do. By having the shuffle algorithm where it's used\n> it's clear what the code does and what the outcome is. If others disagree I\n> can go ahead and refactor of course, but I personally would not deem it a net\n> win in code quality.\n\nI think we should avoid nitpicking stuff like this. I likely would\nhave used a subroutine if I'd done it myself, but I definitely\nwouldn't have submitted a patch to change whatever the last person did\nwithout some tangible reason for so doing. It's not a good use of\nreviewer and committer time to litigate things like this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 10:47:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Minor code de-duplication in fe-connect.c"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 7:47 AM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Apr 21, 2023 at 8:25 AM Daniel Gustafsson <[email protected]> wrote:\n> > The reason I left it like this when reviewing and committing is that I think it\n> > makes for more readable code. The amount of lines saved is pretty small, and\n> > \"shuffle\" isn't an exact term so by reading the code it isn't immediate clear\n> > what such a function would do. By having the shuffle algorithm where it's used\n> > it's clear what the code does and what the outcome is. If others disagree I\n> > can go ahead and refactor of course, but I personally would not deem it a net\n> > win in code quality.\n>\n> I think we should avoid nitpicking stuff like this. I likely would\n> have used a subroutine if I'd done it myself, but I definitely\n> wouldn't have submitted a patch to change whatever the last person did\n> without some tangible reason for so doing.\n\nCode duplication, and the possibility that a change in one location\n(bugfix or otherwise) does not get applied to the other locations, is\na good enough reason to submit a patch, IMHO.\n\n> It's not a good use of\n> reviewer and committer time to litigate things like this.\n\nPostgres has a very high bar for code quality, and for documentation.\nIt is a major reason that attracts people to, and keeps them in the\nPostgres ecosystem, as users and contributors. If anything, we should\nencourage folks to point out such inconsistencies in code and docs\nthat keep the quality high.\n\nThis is not a attack on any one commit, or author/committer of the\ncommit; sorry if it appeared that way. I was merely reviewing the\ncommit that introduced a nice libpq feature. This patch is merely a\nminor improvement to the code that I think deserves a consideration.\nIt's not a litigation, by any means.\n\nBest regards,\nGurjeet https://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:38:53 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Minor code de-duplication in fe-connect.c"
},
{
"msg_contents": "> On 21 Apr 2023, at 18:38, Gurjeet Singh <[email protected]> wrote:\n> \n> On Fri, Apr 21, 2023 at 7:47 AM Robert Haas <[email protected]> wrote:\n>> \n>> On Fri, Apr 21, 2023 at 8:25 AM Daniel Gustafsson <[email protected]> wrote:\n>>> The reason I left it like this when reviewing and committing is that I think it\n>>> makes for more readable code. The amount of lines saved is pretty small, and\n>>> \"shuffle\" isn't an exact term so by reading the code it isn't immediate clear\n>>> what such a function would do. By having the shuffle algorithm where it's used\n>>> it's clear what the code does and what the outcome is. If others disagree I\n>>> can go ahead and refactor of course, but I personally would not deem it a net\n>>> win in code quality.\n>> \n>> I think we should avoid nitpicking stuff like this. I likely would\n>> have used a subroutine if I'd done it myself, but I definitely\n>> wouldn't have submitted a patch to change whatever the last person did\n>> without some tangible reason for so doing.\n> \n> Code duplication, and the possibility that a change in one location\n> (bugfix or otherwise) does not get applied to the other locations, is\n> a good enough reason to submit a patch, IMHO.\n> \n>> It's not a good use of\n>> reviewer and committer time to litigate things like this.\n> \n> Postgres has a very high bar for code quality, and for documentation.\n> It is a major reason that attracts people to, and keeps them in the\n> Postgres ecosystem, as users and contributors. If anything, we should\n> encourage folks to point out such inconsistencies in code and docs\n> that keep the quality high.\n> \n> This is not a attack on any one commit, or author/committer of the\n> commit; sorry if it appeared that way. I was merely reviewing the\n> commit that introduced a nice libpq feature. This patch is merely a\n> minor improvement to the code that I think deserves a consideration.\n> It's not a litigation, by any means.\n\nI didn't actually read the patch earlier, but since Robert gave a +1 to the\nidea of refactoring I had a look. I have a few comments:\n\n+static void\n+shuffleAddresses(PGConn *conn)\nThis fails to compile since the struct is typedefed PGconn.\n\n\n-\t/*\n-\t * This is the \"inside-out\" variant of the Fisher-Yates shuffle\n-\t * algorithm. Notionally, we append each new value to the array\n-\t * and then swap it with a randomly-chosen array element (possibly\n-\t * including itself, else we fail to generate permutations with\n-\t * the last integer last). The swap step can be optimized by\n-\t * combining it with the insertion.\n-\t *\n \t * We don't need to initialize conn->prng_state here, because that\n \t * already happened in connectOptions2.\n \t */\nThis also fails to compile since it removes the starting /* marker of the\ncomment resulting in a comment starting on *.\n\n\n-\tfor (i = 1; i < conn->nconnhost; i++)\n-\tfor (int i = 1; i < conn->naddr; i++)\nYou are replacing these loops for shuffling an array with a function that does\nthis:\n+\tfor (int i = 1; i < conn->naddr; i++)\nThis is not the same thing, one is shuffling the hosts and the other is\nshuffling the addresses resolved for the hosts (which may be a n:m\nrelationship). 
If you had compiled and run the tests you would have seen that\nthe libpq tests now fail with this applied, as would be expected.\n\n\nShuffling the arrays can of course be made into a subroutine, but what to\nshuffle and where needs to be passed in to it and at that point I doubt the\ncode readability is much improved over the current coding.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 24 Apr 2023 14:14:09 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Minor code de-duplication in fe-connect.c"
},
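For illustration only, a rough sketch of what a shared shuffle helper could look like if the two loops were factored out anyway: the array base, element count, and element size have to be passed in, and the swap becomes byte-wise, which arguably supports the point above that the extra generality costs about as much code as it saves. The function name and the caller shapes in the trailing comment are hypothetical, not taken from any posted patch.

```c
#include <string.h>
#include "common/pg_prng.h"

/*
 * Hypothetical helper: permute the first nelems entries of an array of
 * elemsize-byte elements, using the same "swap element i with a random
 * element j in [0, i]" scheme as the existing in-place loops.
 */
static void
pqShuffleArray(pg_prng_state *prng, void *array, int nelems, size_t elemsize)
{
	char	   *base = (char *) array;

	for (int i = 1; i < nelems; i++)
	{
		int			j = (int) pg_prng_uint64_range(prng, 0, i);

		if (i != j)
		{
			/* byte-wise swap of elements i and j */
			for (size_t k = 0; k < elemsize; k++)
			{
				char		tmp = base[i * elemsize + k];

				base[i * elemsize + k] = base[j * elemsize + k];
				base[j * elemsize + k] = tmp;
			}
		}
	}
}

/*
 * Callers would then look roughly like (field and type names as used in
 * the discussion above):
 *
 *   pqShuffleArray(&conn->prng_state, conn->connhost, conn->nconnhost,
 *                  sizeof(pg_conn_host));
 *   pqShuffleArray(&conn->prng_state, conn->addr, conn->naddr,
 *                  sizeof(AddrInfo));
 */
```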
{
"msg_contents": "On Mon, Apr 24, 2023 at 5:14 AM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 21 Apr 2023, at 18:38, Gurjeet Singh <[email protected]> wrote:\n> >\n> > On Fri, Apr 21, 2023 at 7:47 AM Robert Haas <[email protected]>\n> wrote:\n> >>\n> >> On Fri, Apr 21, 2023 at 8:25 AM Daniel Gustafsson <[email protected]>\n> wrote:\n> >>> The reason I left it like this when reviewing and committing is that I\n> think it\n> >>> makes for more readable code. The amount of lines saved is pretty\n> small, and\n> >>> \"shuffle\" isn't an exact term so by reading the code it isn't\n> immediate clear\n> >>> what such a function would do. By having the shuffle algorithm where\n> it's used\n> >>> it's clear what the code does and what the outcome is. If others\n> disagree I\n> >>> can go ahead and refactor of course, but I personally would not deem\n> it a net\n> >>> win in code quality.\n> >>\n> >> I think we should avoid nitpicking stuff like this. I likely would\n> >> have used a subroutine if I'd done it myself, but I definitely\n> >> wouldn't have submitted a patch to change whatever the last person did\n> >> without some tangible reason for so doing.\n> >\n> > Code duplication, and the possibility that a change in one location\n> > (bugfix or otherwise) does not get applied to the other locations, is\n> > a good enough reason to submit a patch, IMHO.\n> >\n> >> It's not a good use of\n> >> reviewer and committer time to litigate things like this.\n> >\n> > Postgres has a very high bar for code quality, and for documentation.\n> > It is a major reason that attracts people to, and keeps them in the\n> > Postgres ecosystem, as users and contributors. If anything, we should\n> > encourage folks to point out such inconsistencies in code and docs\n> > that keep the quality high.\n> >\n> > This is not a attack on any one commit, or author/committer of the\n> > commit; sorry if it appeared that way. I was merely reviewing the\n> > commit that introduced a nice libpq feature. This patch is merely a\n> > minor improvement to the code that I think deserves a consideration.\n> > It's not a litigation, by any means.\n>\n> I didn't actually read the patch earlier, but since Robert gave a +1 to the\n> idea of refactoring I had a look. I have a few comments:\n>\n> +static void\n> +shuffleAddresses(PGConn *conn)\n> This fails to compile since the struct is typedefed PGconn.\n>\n>\n> - /*\n> - * This is the \"inside-out\" variant of the Fisher-Yates shuffle\n> - * algorithm. Notionally, we append each new value to the array\n> - * and then swap it with a randomly-chosen array element (possibly\n> - * including itself, else we fail to generate permutations with\n> - * the last integer last). The swap step can be optimized by\n> - * combining it with the insertion.\n> - *\n> * We don't need to initialize conn->prng_state here, because that\n> * already happened in connectOptions2.\n> */\n> This also fails to compile since it removes the starting /* marker of the\n> comment resulting in a comment starting on *.\n>\n>\n> - for (i = 1; i < conn->nconnhost; i++)\n> - for (int i = 1; i < conn->naddr; i++)\n> You are replacing these loops for shuffling an array with a function that\n> does\n> this:\n> + for (int i = 1; i < conn->naddr; i++)\n> This is not the same thing, one is shuffling the hosts and the other is\n> shuffling the addresses resolved for the hosts (which may be a n:m\n> relationship). 
If you had compiled and run the tests you would have seen\n> that\n> the libpq tests now fail with this applied, as would be expected.\n>\n>\n> Shuffling the arrays can of course be made into a subroutine, but what to\n> shuffle and where needs to be passed in to it and at that point I doubt the\n> code readability is much improved over the current coding.\n>\n>\nSorry about the errors. This seems to be the older version of the patch\nthat I had generated before fixing these mistakes. I do remember\nencountering the compiler errors and revising the code to fix these,\nespecially the upper vs. lower case and the partial removal of the coment.\nAway from my keyboard, so please expect the newer patch some time later.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\nOn Mon, Apr 24, 2023 at 5:14 AM Daniel Gustafsson <[email protected]> wrote:> On 21 Apr 2023, at 18:38, Gurjeet Singh <[email protected]> wrote:\n> \n> On Fri, Apr 21, 2023 at 7:47 AM Robert Haas <[email protected]> wrote:\n>> \n>> On Fri, Apr 21, 2023 at 8:25 AM Daniel Gustafsson <[email protected]> wrote:\n>>> The reason I left it like this when reviewing and committing is that I think it\n>>> makes for more readable code. The amount of lines saved is pretty small, and\n>>> \"shuffle\" isn't an exact term so by reading the code it isn't immediate clear\n>>> what such a function would do. By having the shuffle algorithm where it's used\n>>> it's clear what the code does and what the outcome is. If others disagree I\n>>> can go ahead and refactor of course, but I personally would not deem it a net\n>>> win in code quality.\n>> \n>> I think we should avoid nitpicking stuff like this. I likely would\n>> have used a subroutine if I'd done it myself, but I definitely\n>> wouldn't have submitted a patch to change whatever the last person did\n>> without some tangible reason for so doing.\n> \n> Code duplication, and the possibility that a change in one location\n> (bugfix or otherwise) does not get applied to the other locations, is\n> a good enough reason to submit a patch, IMHO.\n> \n>> It's not a good use of\n>> reviewer and committer time to litigate things like this.\n> \n> Postgres has a very high bar for code quality, and for documentation.\n> It is a major reason that attracts people to, and keeps them in the\n> Postgres ecosystem, as users and contributors. If anything, we should\n> encourage folks to point out such inconsistencies in code and docs\n> that keep the quality high.\n> \n> This is not a attack on any one commit, or author/committer of the\n> commit; sorry if it appeared that way. I was merely reviewing the\n> commit that introduced a nice libpq feature. This patch is merely a\n> minor improvement to the code that I think deserves a consideration.\n> It's not a litigation, by any means.\n\nI didn't actually read the patch earlier, but since Robert gave a +1 to the\nidea of refactoring I had a look. I have a few comments:\n\n+static void\n+shuffleAddresses(PGConn *conn)\nThis fails to compile since the struct is typedefed PGconn.\n\n\n- /*\n- * This is the \"inside-out\" variant of the Fisher-Yates shuffle\n- * algorithm. Notionally, we append each new value to the array\n- * and then swap it with a randomly-chosen array element (possibly\n- * including itself, else we fail to generate permutations with\n- * the last integer last). 
The swap step can be optimized by\n- * combining it with the insertion.\n- *\n * We don't need to initialize conn->prng_state here, because that\n * already happened in connectOptions2.\n */\nThis also fails to compile since it removes the starting /* marker of the\ncomment resulting in a comment starting on *.\n\n\n- for (i = 1; i < conn->nconnhost; i++)\n- for (int i = 1; i < conn->naddr; i++)\nYou are replacing these loops for shuffling an array with a function that does\nthis:\n+ for (int i = 1; i < conn->naddr; i++)\nThis is not the same thing, one is shuffling the hosts and the other is\nshuffling the addresses resolved for the hosts (which may be a n:m\nrelationship). If you had compiled and run the tests you would have seen that\nthe libpq tests now fail with this applied, as would be expected.\n\n\nShuffling the arrays can of course be made into a subroutine, but what to\nshuffle and where needs to be passed in to it and at that point I doubt the\ncode readability is much improved over the current coding.\nSorry about the errors. This seems to be the older version of the patch that I had generated before fixing these mistakes. I do remember encountering the compiler errors and revising the code to fix these, especially the upper vs. lower case and the partial removal of the coment. Away from my keyboard, so please expect the newer patch some time later.Best regards,Gurjeethttp://Gurje.et",
"msg_date": "Mon, 24 Apr 2023 07:41:05 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Minor code de-duplication in fe-connect.c"
}
] |
[
{
"msg_contents": "Commit 7f5b198 introduced TAP tests that use string literals to mark\nthe presence of a query in server logs. For no explicable reason, the\ntests with the marker 'connect2' occur before the tests that use\n'connect1' marker.\n\nThe attached patch swaps the connection marker strings so that a\nreader doesn't have to spend extra deciphering why 'connect2' tests\nappear before 'connect1' tests.\n\nBest regards,\nGurjeet https://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com",
"msg_date": "Thu, 20 Apr 2023 23:00:00 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reorder connection markers in libpq TAP tests"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 11:00 PM Gurjeet Singh <[email protected]> wrote:\n>\n> Commit 7f5b198 introduced TAP tests that use string literals to mark\n> the presence of a query in server logs. For no explicable reason, the\n> tests with the marker 'connect2' occur before the tests that use\n> 'connect1' marker.\n>\n> The attached patch swaps the connection marker strings so that a\n> reader doesn't have to spend extra deciphering why 'connect2' tests\n> appear before 'connect1' tests.\n\nPlease see attached v2 of the patch. It now includes same fix in\nanother TAP tests file.\n\n\nBest regards,\nGurjeet https://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com",
"msg_date": "Thu, 20 Apr 2023 23:38:25 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reorder connection markers in libpq TAP tests"
},
{
"msg_contents": "LGTM I guess this was an unintended leftover from me reordering the tests a\nbit in the final stages of getting these patches in.\n\nOn Fri, 21 Apr 2023 at 08:38, Gurjeet Singh <[email protected]> wrote:\n\n> On Thu, Apr 20, 2023 at 11:00 PM Gurjeet Singh <[email protected]> wrote:\n> >\n> > Commit 7f5b198 introduced TAP tests that use string literals to mark\n> > the presence of a query in server logs. For no explicable reason, the\n> > tests with the marker 'connect2' occur before the tests that use\n> > 'connect1' marker.\n> >\n> > The attached patch swaps the connection marker strings so that a\n> > reader doesn't have to spend extra deciphering why 'connect2' tests\n> > appear before 'connect1' tests.\n>\n> Please see attached v2 of the patch. It now includes same fix in\n> another TAP tests file.\n>\n>\n> Best regards,\n> Gurjeet https://Gurje.et\n> Postgres Contributors Team, http://aws.amazon.com\n>\n\nLGTM I guess this was an unintended leftover from me reordering the tests a bit in the final stages of getting these patches in.On Fri, 21 Apr 2023 at 08:38, Gurjeet Singh <[email protected]> wrote:On Thu, Apr 20, 2023 at 11:00 PM Gurjeet Singh <[email protected]> wrote:\n>\n> Commit 7f5b198 introduced TAP tests that use string literals to mark\n> the presence of a query in server logs. For no explicable reason, the\n> tests with the marker 'connect2' occur before the tests that use\n> 'connect1' marker.\n>\n> The attached patch swaps the connection marker strings so that a\n> reader doesn't have to spend extra deciphering why 'connect2' tests\n> appear before 'connect1' tests.\n\nPlease see attached v2 of the patch. It now includes same fix in\nanother TAP tests file.\n\n\nBest regards,\nGurjeet https://Gurje.et\nPostgres Contributors Team, http://aws.amazon.com",
"msg_date": "Fri, 21 Apr 2023 11:41:50 +0200",
"msg_from": "Jelte Fennema <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reorder connection markers in libpq TAP tests"
},
{
"msg_contents": "> On 21 Apr 2023, at 08:38, Gurjeet Singh <[email protected]> wrote:\n> \n> On Thu, Apr 20, 2023 at 11:00 PM Gurjeet Singh <[email protected]> wrote:\n>> \n>> Commit 7f5b198 introduced TAP tests that use string literals to mark\n>> the presence of a query in server logs. For no explicable reason, the\n>> tests with the marker 'connect2' occur before the tests that use\n>> 'connect1' marker.\n>> \n>> The attached patch swaps the connection marker strings so that a\n>> reader doesn't have to spend extra deciphering why 'connect2' tests\n>> appear before 'connect1' tests.\n> \n> Please see attached v2 of the patch. It now includes same fix in\n> another TAP tests file.\n\n-\t 'Potentially unsafe test load_balance not enabled in PG_TEST_EXTRA';\n+\t 'Potentially unsafe test; load_balance not enabled in PG_TEST_EXTRA';\n\nWe have this spelling without a ';' in multiple places so I left this alone.\n\nI have applied this version of the patch apart from the above hunk. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 12:55:44 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reorder connection markers in libpq TAP tests"
}
] |
[
{
"msg_contents": "Hi -hackers,\n\nI would like to ask if it wouldn't be good idea to copy the\nhttps://wiki.postgresql.org/wiki/TOAST#Total_table_size_limit\ndiscussion (out-of-line OID usage per TOAST-ed columns / potential\nlimitation) to the official \"Appendix K. PostgreSQL Limits\" with also\nlittle bonus mentioning the \"still searching for an unused OID in\nrelation\" notice. Although it is pretty obvious information for some\nand from commit 7fbcee1b2d5f1012c67942126881bd492e95077e and the\ndiscussion in [1], I wonder if the information shouldn't be a little\nmore well known via the limitation (especially to steer people away\nfrom designing very wide non-partitioned tables).\n\nRegards,\n-J.\n\n[1] - https://www.postgresql.org/message-id/flat/16722-93043fb459a41073%40postgresql.org\n\n\n",
"msg_date": "Fri, 21 Apr 2023 08:39:10 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "Hi!\n\nThis limitation applies not only to wide tables - it also applies to tables\nwhere TOASTed values\nare updated very often. You would soon be out of available TOAST value ID\nbecause in case of\nhigh frequency updates autovacuum cleanup rate won't keep up with the\nupdates. It is discussed\nin [1].\n\n\nOn Fri, Apr 21, 2023 at 9:39 AM Jakub Wartak <[email protected]>\nwrote:\n\n> Hi -hackers,\n>\n> I would like to ask if it wouldn't be good idea to copy the\n> https://wiki.postgresql.org/wiki/TOAST#Total_table_size_limit\n> discussion (out-of-line OID usage per TOAST-ed columns / potential\n> limitation) to the official \"Appendix K. PostgreSQL Limits\" with also\n> little bonus mentioning the \"still searching for an unused OID in\n> relation\" notice. Although it is pretty obvious information for some\n> and from commit 7fbcee1b2d5f1012c67942126881bd492e95077e and the\n> discussion in [1], I wonder if the information shouldn't be a little\n> more well known via the limitation (especially to steer people away\n> from designing very wide non-partitioned tables).\n>\n> Regards,\n> -J.\n>\n> [1] -\n> https://www.postgresql.org/message-id/flat/16722-93043fb459a41073%40postgresql.org\n>\n>\n>\n[1]\nhttps://www.postgresql.org/message-id/CAN-LCVPRvRzxeUdYdDCZ7UwZQs1NmZpqBUCd%3D%2BRdMPFTyt-bRQ%40mail.gmail.com\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!This limitation applies not only to wide tables - it also applies to tables where TOASTed valuesare updated very often. You would soon be out of available TOAST value ID because in case ofhigh frequency updates autovacuum cleanup rate won't keep up with the updates. It is discussedin [1].On Fri, Apr 21, 2023 at 9:39 AM Jakub Wartak <[email protected]> wrote:Hi -hackers,\n\nI would like to ask if it wouldn't be good idea to copy the\nhttps://wiki.postgresql.org/wiki/TOAST#Total_table_size_limit\ndiscussion (out-of-line OID usage per TOAST-ed columns / potential\nlimitation) to the official \"Appendix K. PostgreSQL Limits\" with also\nlittle bonus mentioning the \"still searching for an unused OID in\nrelation\" notice. Although it is pretty obvious information for some\nand from commit 7fbcee1b2d5f1012c67942126881bd492e95077e and the\ndiscussion in [1], I wonder if the information shouldn't be a little\nmore well known via the limitation (especially to steer people away\nfrom designing very wide non-partitioned tables).\n\nRegards,\n-J.\n\n[1] - https://www.postgresql.org/message-id/flat/16722-93043fb459a41073%40postgresql.org\n\n\n[1] https://www.postgresql.org/message-id/CAN-LCVPRvRzxeUdYdDCZ7UwZQs1NmZpqBUCd%3D%2BRdMPFTyt-bRQ%40mail.gmail.com--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Fri, 21 Apr 2023 10:14:01 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 12:14 AM Nikita Malakhov <[email protected]> wrote:\n> This limitation applies not only to wide tables - it also applies to tables where TOASTed values\n> are updated very often. You would soon be out of available TOAST value ID because in case of\n> high frequency updates autovacuum cleanup rate won't keep up with the updates. It is discussed\n> in [1].\n>\n> On Fri, Apr 21, 2023 at 9:39 AM Jakub Wartak <[email protected]> wrote:\n>> I would like to ask if it wouldn't be good idea to copy the\n>> https://wiki.postgresql.org/wiki/TOAST#Total_table_size_limit\n>> discussion (out-of-line OID usage per TOAST-ed columns / potential\n>> limitation) to the official \"Appendix K. PostgreSQL Limits\" with also\n>> little bonus mentioning the \"still searching for an unused OID in\n>> relation\" notice. Although it is pretty obvious information for some\n>> and from commit 7fbcee1b2d5f1012c67942126881bd492e95077e and the\n>> discussion in [1], I wonder if the information shouldn't be a little\n>> more well known via the limitation (especially to steer people away\n>> from designing very wide non-partitioned tables).\n>>\n>> [1] - https://www.postgresql.org/message-id/flat/16722-93043fb459a41073%40postgresql.org\n>\n> [1] https://www.postgresql.org/message-id/CAN-LCVPRvRzxeUdYdDCZ7UwZQs1NmZpqBUCd%3D%2BRdMPFTyt-bRQ%40mail.gmail.com\n\nThese 2 discussions show that it's a painful experience to run into\nthis problem, and that the hackers have ideas on how to fix it, but\nthose fixes haven't materialized for years. So I would say that, yes,\nthis info belongs in the hard-limits section, because who knows how\nlong it'll take this to be fixed.\n\nPlease submit a patch.\n\nI anticipate that edits to Appendix K Postgres Limits will prompt\nimproving the note in there about the maximum column limit, That note\nis too wordy, and sometimes confusing, especially for the audience\nthat it's written for: newcomers to Postgres ecosystem.\n\nBest regards,\nGurjeet https://Gurje.et\nhttp://aws.amazon.com\n\n\n",
"msg_date": "Sat, 22 Apr 2023 08:42:29 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "Hi,\n\nThis is a production case for large databases with high update rates, but\nis mistaken\nwith reaching table size limit, although size limit is processed correctly.\n\nThe note on TOAST limitation does not mention that TOAST values are not\nactually\nupdated on UPDATE operation - old value is marked as dead and new one is\ninserted,\nand dead values should be vacuumed before value OID could be reused. The\nworst\nis that the INSERT/UPDATE clause does not fail if there is no OID available\n- it is\nlooped in an infinite loop of sorting out OIDs.\n\nOn Sat, Apr 22, 2023 at 6:42 PM Gurjeet Singh <[email protected]> wrote:\n\n> On Fri, Apr 21, 2023 at 12:14 AM Nikita Malakhov <[email protected]>\n> wrote:\n> > This limitation applies not only to wide tables - it also applies to\n> tables where TOASTed values\n> > are updated very often. You would soon be out of available TOAST value\n> ID because in case of\n> > high frequency updates autovacuum cleanup rate won't keep up with the\n> updates. It is discussed\n> > in [1].\n> >\n> > On Fri, Apr 21, 2023 at 9:39 AM Jakub Wartak <\n> [email protected]> wrote:\n> >> I would like to ask if it wouldn't be good idea to copy the\n> >> https://wiki.postgresql.org/wiki/TOAST#Total_table_size_limit\n> >> discussion (out-of-line OID usage per TOAST-ed columns / potential\n> >> limitation) to the official \"Appendix K. PostgreSQL Limits\" with also\n> >> little bonus mentioning the \"still searching for an unused OID in\n> >> relation\" notice. Although it is pretty obvious information for some\n> >> and from commit 7fbcee1b2d5f1012c67942126881bd492e95077e and the\n> >> discussion in [1], I wonder if the information shouldn't be a little\n> >> more well known via the limitation (especially to steer people away\n> >> from designing very wide non-partitioned tables).\n> >>\n> >> [1] -\n> https://www.postgresql.org/message-id/flat/16722-93043fb459a41073%40postgresql.org\n> >\n> > [1]\n> https://www.postgresql.org/message-id/CAN-LCVPRvRzxeUdYdDCZ7UwZQs1NmZpqBUCd%3D%2BRdMPFTyt-bRQ%40mail.gmail.com\n>\n> These 2 discussions show that it's a painful experience to run into\n> this problem, and that the hackers have ideas on how to fix it, but\n> those fixes haven't materialized for years. So I would say that, yes,\n> this info belongs in the hard-limits section, because who knows how\n> long it'll take this to be fixed.\n>\n> Please submit a patch.\n>\n> I anticipate that edits to Appendix K Postgres Limits will prompt\n> improving the note in there about the maximum column limit, That note\n> is too wordy, and sometimes confusing, especially for the audience\n> that it's written for: newcomers to Postgres ecosystem.\n>\n> Best regards,\n> Gurjeet https://Gurje.et\n> http://aws.amazon.com\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,This is a production case for large databases with high update rates, but is mistakenwith reaching table size limit, although size limit is processed correctly.The note on TOAST limitation does not mention that TOAST values are not actuallyupdated on UPDATE operation - old value is marked as dead and new one is inserted,and dead values should be vacuumed before value OID could be reused. 
The worstis that the INSERT/UPDATE clause does not fail if there is no OID available - it islooped in an infinite loop of sorting out OIDs.On Sat, Apr 22, 2023 at 6:42 PM Gurjeet Singh <[email protected]> wrote:On Fri, Apr 21, 2023 at 12:14 AM Nikita Malakhov <[email protected]> wrote:\n> This limitation applies not only to wide tables - it also applies to tables where TOASTed values\n> are updated very often. You would soon be out of available TOAST value ID because in case of\n> high frequency updates autovacuum cleanup rate won't keep up with the updates. It is discussed\n> in [1].\n>\n> On Fri, Apr 21, 2023 at 9:39 AM Jakub Wartak <[email protected]> wrote:\n>> I would like to ask if it wouldn't be good idea to copy the\n>> https://wiki.postgresql.org/wiki/TOAST#Total_table_size_limit\n>> discussion (out-of-line OID usage per TOAST-ed columns / potential\n>> limitation) to the official \"Appendix K. PostgreSQL Limits\" with also\n>> little bonus mentioning the \"still searching for an unused OID in\n>> relation\" notice. Although it is pretty obvious information for some\n>> and from commit 7fbcee1b2d5f1012c67942126881bd492e95077e and the\n>> discussion in [1], I wonder if the information shouldn't be a little\n>> more well known via the limitation (especially to steer people away\n>> from designing very wide non-partitioned tables).\n>>\n>> [1] - https://www.postgresql.org/message-id/flat/16722-93043fb459a41073%40postgresql.org\n>\n> [1] https://www.postgresql.org/message-id/CAN-LCVPRvRzxeUdYdDCZ7UwZQs1NmZpqBUCd%3D%2BRdMPFTyt-bRQ%40mail.gmail.com\n\nThese 2 discussions show that it's a painful experience to run into\nthis problem, and that the hackers have ideas on how to fix it, but\nthose fixes haven't materialized for years. So I would say that, yes,\nthis info belongs in the hard-limits section, because who knows how\nlong it'll take this to be fixed.\n\nPlease submit a patch.\n\nI anticipate that edits to Appendix K Postgres Limits will prompt\nimproving the note in there about the maximum column limit, That note\nis too wordy, and sometimes confusing, especially for the audience\nthat it's written for: newcomers to Postgres ecosystem.\n\nBest regards,\nGurjeet https://Gurje.et\nhttp://aws.amazon.com\n-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Mon, 24 Apr 2023 10:12:00 +0300",
"msg_from": "Nikita Malakhov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
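Side note on the mechanism described in the message above: the following is a small, self-contained C sketch (not PostgreSQL source; every identifier in it is invented for illustration) of why allocating a new out-of-line value gets slower as the TOAST value OID space fills up. It models a single wrapping 32-bit counter plus an "is this ID already used?" check and reports the average number of probes needed at different fill fractions; the 1,000,000-attempt notice mirrors the "still searching for an unused OID in relation" log message mentioned elsewhere in this thread.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the single shared OID counter; wraps around at 2^32. */
static uint32_t shared_counter = 0;

static uint32_t
next_candidate(void)
{
    return shared_counter++;
}

/*
 * Stand-in for "is this value ID already present in the TOAST table?".
 * We simply answer "yes" with probability used_fraction.
 */
static bool
id_in_use(uint32_t id, double used_fraction)
{
    (void) id;
    return ((double) rand() / (double) RAND_MAX) < used_fraction;
}

/* Returns how many candidates had to be probed to find a free one. */
static uint64_t
probes_for_free_id(double used_fraction)
{
    uint64_t attempts = 0;

    for (;;)
    {
        uint32_t candidate = next_candidate();

        attempts++;
        if (!id_in_use(candidate, used_fraction))
            return attempts;
        if (attempts == 1000000)
            fprintf(stderr, "still searching for an unused ID...\n");
    }
}

int
main(void)
{
    static const double fractions[] = {0.5, 0.9, 0.99, 0.999};

    for (int i = 0; i < 4; i++)
    {
        uint64_t total = 0;

        for (int j = 0; j < 1000; j++)
            total += probes_for_free_id(fractions[i]);
        printf("%5.1f%% of ID space used -> about %.1f probes per new value\n",
               fractions[i] * 100.0, (double) total / 1000.0);
    }
    return 0;
}

With 99.9% of the ID space occupied the sketch needs on the order of a thousand probes per new value (expected probes are roughly 1/(1-f)), which is the shape of the slowdown being described here.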
{
"msg_contents": "Hi,\n\n>> These 2 discussions show that it's a painful experience to run into\n>> this problem, and that the hackers have ideas on how to fix it, but\n>> those fixes haven't materialized for years. So I would say that, yes,\n>> this info belongs in the hard-limits section, because who knows how\n>> long it'll take this to be fixed.\n>>\n>> Please submit a patch.\n>>\n> This is a production case for large databases with high update rates, but is mistaken\n> with reaching table size limit, although size limit is processed correctly.\n>\n> The note on TOAST limitation does not mention that TOAST values are not actually\n> updated on UPDATE operation - old value is marked as dead and new one is inserted,\n> and dead values should be vacuumed before value OID could be reused. The worst\n> is that the INSERT/UPDATE clause does not fail if there is no OID available - it is\n> looped in an infinite loop of sorting out OIDs.\n\nOK, so here is the documentation patch proposal. I've also added two\nrows touching the subject of pg_largeobjects, as it is also related to\nthe OIDs topic. Please feel free to send adjusted patches.\n\nRegards,\n-J.",
"msg_date": "Wed, 26 Apr 2023 12:18:39 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Sun, 23 Apr 2023, 3:42 am Gurjeet Singh, <[email protected]> wrote:\n\n> I anticipate that edits to Appendix K Postgres Limits will prompt\n> improving the note in there about the maximum column limit, That note\n> is too wordy, and sometimes confusing, especially for the audience\n> that it's written for: newcomers to Postgres ecosystem.\n>\n\nI doubt it, but feel free to submit a patch yourself which improves it\nwithout losing the information which the paragraph is trying to convey.\n\nDavid\n\n>\n\nOn Sun, 23 Apr 2023, 3:42 am Gurjeet Singh, <[email protected]> wrote:\nI anticipate that edits to Appendix K Postgres Limits will prompt\nimproving the note in there about the maximum column limit, That note\nis too wordy, and sometimes confusing, especially for the audience\nthat it's written for: newcomers to Postgres ecosystem.I doubt it, but feel free to submit a patch yourself which improves it without losing the information which the paragraph is trying to convey.David",
"msg_date": "Wed, 26 Apr 2023 23:48:30 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Wed, Apr 26, 2023 at 5:18 PM Jakub Wartak <[email protected]>\nwrote:\n\n> OK, so here is the documentation patch proposal. I've also added two\n> rows touching the subject of pg_largeobjects, as it is also related to\n> the OIDs topic.\n\n- <entry>partition keys</entry>\n- <entry>32</entry>\n- <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n+ <entry>partition keys</entry>\n+ <entry>32</entry>\n+ <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n\nSpurious whitespace.\n\n- <entry>limited by the number of tuples that can fit onto\n4,294,967,295 pages</entry>\n- <entry></entry>\n+ <entry>limited by the number of tuples that can fit onto\n4,294,967,295 pages or using up to 2^32 OIDs for TOASTed values</entry>\n+ <entry>please see discussion below about OIDs</entry>\n\nI would keep the first as is, and change the second for consistency to \"see\nnote below on TOAST\".\n\nAlso, now that we have more than one note, we should make them more\nseparate. That's something to discuss, no need to do anything just yet.\n\nThe new note needs a lot of editing to fit its new home. For starters:\n\n+ <para>\n+ For every TOAST-ed columns\n\ncolumn\n\n+ (that is for field values wider than TOAST_TUPLE_TARGET\n+ [2040 bytes by default]), due to internal PostgreSQL implementation of\nusing one\n+ shared global OID counter - today you cannot have more than\n\n+ 2^32\n\nPerhaps it should match full numbers elsewhere in the page.\n\n+(unsigned integer;\n\nTrue but irrelevant.\n\n+ 4 billion)\n\nImprecise and redundant.\n\n+ out-of-line values in a single table, because there would have to be\n+ duplicated OIDs in its TOAST table.\n\nThe part after \"because\" should be left off.\n\n+ Please note that that the limit of 2^32\n+ out-of-line TOAST values applies to the sum of both visible and\ninvisible tuples.\n\nWe didn't feel the need to mention this for normal tuples...\n\n+ It is therefore crucial that the autovacuum manages to keep up with\ncleaning the\n+ bloat and free the unused OIDs.\n+ </para>\n\nOut of place.\n\n+ <para>\n+ In practice, you want to have considerably less than that many TOASTed\nvalues\n+ per table, because as the OID space fills up the system might spend large\n+ amounts of time searching for the next free OID when it needs to\ngenerate a new\n+ out-of-line value.\n\ns/might spend large/will spend larger/ ?\n\n+ After 1000000 failed attempts to get a free OID, a first log\n+ message is emitted \"still searching for an unused OID in relation\", but\noperation\n+ won't stop and will try to continue until it finds the free OID.\n\nToo much detail?\n\n+ Therefore,\n+ the OID shortages may (in very extreme cases) cause slowdowns to the\n+ INSERTs/UPDATE/COPY statements.\n\ns/may (in very extreme cases)/will eventually/\n\n+ It's also worth emphasizing that\n\nUnnecessary.\n\n+ only field\n+ values wider than 2KB\n\nTOAST_TUPLE_TARGET\n\n+ will consume TOAST OIDs in this way. 
So, in practice,\n+ reaching this limit would require many terabytes of data in a single\ntable,\n\nIt may be worth mentioning what Nikita said above about updates.\n\n+ especially if you have a wide range of value widths.\n\nI never understood this part.\n\n+ <row>\n+ <entry>large objects size</entry>\n+ <entry>subject to the same limitations as single <symbol>relation\nsize</symbol></entry>\n+ <entry>LOs are stored in single pg_largeobjects relation</entry>\n+ </row>\n\nAre you under the impression that we can store a single large object up to\ntable size? The limit is 4TB, as documented elsewhere.\n\n+ <row>\n+ <entry>large objects number</entry>\n\n\"large objects per database\"\n\n+ <entry>subject to the same limitations as <symbol>rows per\ntable</symbol></entry>\n\nThat implies table size is the only factor. Max OID is also a factor, which\nwas your stated reason to include LOs here in the first place.\n\n+ <entry>LOs are stored in single pg_largeobjects relation</entry>\n\nI would just say \"also limited by relation size\".\n\n(note: Our catalogs are named in the singular.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Apr 26, 2023 at 5:18 PM Jakub Wartak <[email protected]> wrote:> OK, so here is the documentation patch proposal. I've also added two> rows touching the subject of pg_largeobjects, as it is also related to> the OIDs topic. - <entry>partition keys</entry>- <entry>32</entry>- <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>+ <entry>partition keys</entry>+ <entry>32</entry>+ <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>Spurious whitespace.- <entry>limited by the number of tuples that can fit onto 4,294,967,295 pages</entry>- <entry></entry>+ <entry>limited by the number of tuples that can fit onto 4,294,967,295 pages or using up to 2^32 OIDs for TOASTed values</entry>+ <entry>please see discussion below about OIDs</entry>I would keep the first as is, and change the second for consistency to \"see note below on TOAST\".Also, now that we have more than one note, we should make them more separate. That's something to discuss, no need to do anything just yet.The new note needs a lot of editing to fit its new home. For starters:+ <para>+ For every TOAST-ed columns column+ (that is for field values wider than TOAST_TUPLE_TARGET+ [2040 bytes by default]), due to internal PostgreSQL implementation of using one+ shared global OID counter - today you cannot have more than + 2^32 Perhaps it should match full numbers elsewhere in the page.+(unsigned integer;True but irrelevant.+ 4 billion) Imprecise and redundant.+ out-of-line values in a single table, because there would have to be+ duplicated OIDs in its TOAST table. 
The part after \"because\" should be left off.+ Please note that that the limit of 2^32+ out-of-line TOAST values applies to the sum of both visible and invisible tuples.We didn't feel the need to mention this for normal tuples...+ It is therefore crucial that the autovacuum manages to keep up with cleaning the+ bloat and free the unused OIDs.+ </para>Out of place.+ <para>+ In practice, you want to have considerably less than that many TOASTed values+ per table, because as the OID space fills up the system might spend large+ amounts of time searching for the next free OID when it needs to generate a new+ out-of-line value.s/might spend large/will spend larger/ ?+ After 1000000 failed attempts to get a free OID, a first log+ message is emitted \"still searching for an unused OID in relation\", but operation+ won't stop and will try to continue until it finds the free OID. Too much detail?+ Therefore,+ the OID shortages may (in very extreme cases) cause slowdowns to the+ INSERTs/UPDATE/COPY statements.s/may (in very extreme cases)/will eventually/+ It's also worth emphasizing that Unnecessary.+ only field+ values wider than 2KB TOAST_TUPLE_TARGET+ will consume TOAST OIDs in this way. So, in practice,+ reaching this limit would require many terabytes of data in a single table,It may be worth mentioning what Nikita said above about updates.+ especially if you have a wide range of value widths. I never understood this part.+ <row>+ <entry>large objects size</entry>+ <entry>subject to the same limitations as single <symbol>relation size</symbol></entry>+ <entry>LOs are stored in single pg_largeobjects relation</entry>+ </row>Are you under the impression that we can store a single large object up to table size? The limit is 4TB, as documented elsewhere.+ <row>+ <entry>large objects number</entry>\"large objects per database\"+ <entry>subject to the same limitations as <symbol>rows per table</symbol></entry>That implies table size is the only factor. Max OID is also a factor, which was your stated reason to include LOs here in the first place.+ <entry>LOs are stored in single pg_largeobjects relation</entry>I would just say \"also limited by relation size\".(note: Our catalogs are named in the singular.)--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 Apr 2023 10:55:10 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "Hi John,\n\nThanks for your review. Here's v2 attached.\n\n> - <entry>partition keys</entry>\n> - <entry>32</entry>\n> - <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>\n> + <entry>partition keys</entry>\n> + <entry>32</entry>\n> + <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>\n>\n> Spurious whitespace.\n\nHopefully fixed, I've tried to align with the other entries tags.\n\n> - <entry>limited by the number of tuples that can fit onto 4,294,967,295 pages</entry>\n> - <entry></entry>\n> + <entry>limited by the number of tuples that can fit onto 4,294,967,295 pages or using up to 2^32 OIDs for TOASTed values</entry>\n> + <entry>please see discussion below about OIDs</entry>\n>\n> I would keep the first as is, and change the second for consistency to \"see note below on TOAST\".\n\nFixed.\n\n> Also, now that we have more than one note, we should make them more separate. That's something to discuss, no need to do anything just yet.\n\nOK.\n\n> The new note needs a lot of editing to fit its new home. For starters:\n>\n> + <para>\n> + For every TOAST-ed columns\n>\n> column\n\nFixed.\n\n> + (that is for field values wider than TOAST_TUPLE_TARGET\n> + [2040 bytes by default]), due to internal PostgreSQL implementation of using one\n> + shared global OID counter - today you cannot have more than\n>\n> + 2^32\n>\n> Perhaps it should match full numbers elsewhere in the page.\n\nFixed.\n\n>\n> +(unsigned integer;\n>\n> True but irrelevant.\n>\n> + 4 billion)\n>\n> Imprecise and redundant.\n\nRemoved both.\n\n> + out-of-line values in a single table, because there would have to be\n> + duplicated OIDs in its TOAST table.\n>\n> The part after \"because\" should be left off.\n\nRemoved.\n\n> + Please note that that the limit of 2^32\n> + out-of-line TOAST values applies to the sum of both visible and invisible tuples.\n>\n> We didn't feel the need to mention this for normal tuples...\n\nRight, but this somewhat points reader to the queue-like scenario\nmentioned by Nikita.\n\n> + It is therefore crucial that the autovacuum manages to keep up with cleaning the\n> + bloat and free the unused OIDs.\n> + </para>\n>\n> Out of place.\n\nI have somewhat reworded it, again just to reference to the above.\n\n> + <para>\n> + In practice, you want to have considerably less than that many TOASTed values\n> + per table, because as the OID space fills up the system might spend large\n> + amounts of time searching for the next free OID when it needs to generate a new\n> + out-of-line value.\n>\n> s/might spend large/will spend larger/ ?\n\nFixed.\n\n> + After 1000000 failed attempts to get a free OID, a first log\n> + message is emitted \"still searching for an unused OID in relation\", but operation\n> + won't stop and will try to continue until it finds the free OID.\n>\n> Too much detail?\n\nOK - partially removed.\n\n> + Therefore,\n> + the OID shortages may (in very extreme cases) cause slowdowns to the\n> + INSERTs/UPDATE/COPY statements.\n>\n> s/may (in very extreme cases)/will eventually/\n\nFixed.\n\n> + It's also worth emphasizing that\n>\n> Unnecessary.\n\nRemoved.\n\n> + only field\n> + values wider than 2KB\n>\n> TOAST_TUPLE_TARGET\n\nGood catch, fixed.\n\n> + will consume TOAST OIDs in this way. 
So, in practice,\n> + reaching this limit would require many terabytes of data in a single table,\n>\n> It may be worth mentioning what Nikita said above about updates.\n\nI've tried (with the above statement with visible and invisible tuples).\n\n> + especially if you have a wide range of value widths.\n>\n> I never understood this part.\n\nI've changed it, but I wonder if the new \"large number of wide\ncolumns\" isn't too ambiguous due to \"large\" (?)\n\n> + <row>\n> + <entry>large objects size</entry>\n> + <entry>subject to the same limitations as single <symbol>relation size</symbol></entry>\n> + <entry>LOs are stored in single pg_largeobjects relation</entry>\n> + </row>\n>\n> Are you under the impression that we can store a single large object up to table size? The limit is 4TB, as documented elsewhere.\n\nI've wrongly put it, I've meant that pg_largeobject also consume OID\nand as such are subject to 32TB limit.\n\n>\n> + <row>\n> + <entry>large objects number</entry>\n>\n> \"large objects per database\"\n\nFixed.\n\n> + <entry>subject to the same limitations as <symbol>rows per table</symbol></entry>\n>\n> That implies table size is the only factor. Max OID is also a factor, which was your stated reason to include LOs here in the first place.\n\nExactly..\n\nRegards,\n-Jakub Wartak.",
"msg_date": "Thu, 27 Apr 2023 14:35:47 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 7:36 PM Jakub Wartak <[email protected]>\nwrote:\n\n> > Spurious whitespace.\n>\n> Hopefully fixed, I've tried to align with the other entries tags.\n\nHope springs eternal. ;-)\n\n--- a/doc/src/sgml/limits.sgml\n+++ b/doc/src/sgml/limits.sgml\n@@ -10,6 +10,7 @@\n hard limits are reached.\n </para>\n\n+\n <table id=\"limits-table\">\n\n@@ -92,11 +93,24 @@\n <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n </row>\n\n- <row>\n- <entry>partition keys</entry>\n- <entry>32</entry>\n- <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n- </row>\n+ <row>\n+ <entry>partition keys</entry>\n+ <entry>32</entry>\n+ <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n+ </row>\n\n\n- <entry></entry>\n+ <entry>see note below on TOAST</entry>\n\nMaybe:\n\n\"further limited by the number of TOAST-ed values; see note below\"\n\n> > + <row>\n> > + <entry>large objects size</entry>\n> > + <entry>subject to the same limitations as single <symbol>relation\nsize</symbol></entry>\n> > + <entry>LOs are stored in single pg_largeobjects relation</entry>\n> > + </row>\n> >\n> > Are you under the impression that we can store a single large object up\nto table size? The limit is 4TB, as documented elsewhere.\n>\n> I've wrongly put it, I've meant that pg_largeobject also consume OID\n> and as such are subject to 32TB limit.\n\nNo, OID has nothing to do with the table size limit, they have to do with\nthe max number of LOs in a DB.\n\nAlso, perhaps the LO entries should be split into a separate patch. Since\nthey are a special case and documented elsewhere, it's not clear their\nlimits fit well here. Maybe they could.\n\n+ <para>\n+ For every TOAST-ed column (that is for field values wider than\nTOAST_TUPLE_TARGET\n+ [2040 bytes by default]), due to internal PostgreSQL implementation of\nusing one\n+ shared global OID counter - today you cannot have more than\n4,294,967,296 out-of-line\n+ values in a single table.\n+ </para>\n+\n+ <para>\n\n\"column\" != \"field value\". Also the shared counter is the cause of the\nslowdown, but not the reason for the numeric limit. \"Today\" is irrelevant.\nNeeds polish.\n\n> > + After 1000000 failed attempts to get a free OID, a first log\n> > + message is emitted \"still searching for an unused OID in relation\",\nbut operation\n> > + won't stop and will try to continue until it finds the free OID.\n> >\n> > Too much detail?\n>\n> OK - partially removed.\n\n+ out-of-line value (The search for free OIDs won't stop until it finds\nthe free OID).\n\nStill too much detail, and not very illuminating. If it *did* stop, how\ndoes that make it any less of a problem?\n\n+ Therefore, the OID shortages will eventually cause slowdowns to the\n+ INSERTs/UPDATE/COPY statements.\n\nMaybe this whole sentence is better as\n\n\"This will eventually cause slowdowns for INSERT, UPDATE, and COPY\nstatements.\"\n\n> > + Please note that that the limit of 2^32\n> > + out-of-line TOAST values applies to the sum of both visible and\ninvisible tuples.\n> >\n> > We didn't feel the need to mention this for normal tuples...\n>\n> Right, but this somewhat points reader to the queue-like scenario\n> mentioned by Nikita.\n\nThat seems to be in response to you mentioning \"especially to steer people\naway from designing very wide non-partitioned tables\". In any case, I'm now\nthinking that everything in this sentence and after doesn't belong here. 
We\ndon't need to tell people to vacuum, and we don't need to tell them about\npartitioning as a workaround -- it's a workaround for the table size limit,\ntoo, but we are just documenting the limits here.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Apr 27, 2023 at 7:36 PM Jakub Wartak <[email protected]> wrote:> > Spurious whitespace.>> Hopefully fixed, I've tried to align with the other entries tags.Hope springs eternal. ;-)--- a/doc/src/sgml/limits.sgml+++ b/doc/src/sgml/limits.sgml@@ -10,6 +10,7 @@ hard limits are reached. </para> + <table id=\"limits-table\">@@ -92,11 +93,24 @@ <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry> </row> - <row>- <entry>partition keys</entry>- <entry>32</entry>- <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>- </row>+ <row>+ <entry>partition keys</entry>+ <entry>32</entry>+ <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>+ </row>- <entry></entry>+ <entry>see note below on TOAST</entry>Maybe:\"further limited by the number of TOAST-ed values; see note below\"> > + <row>> > + <entry>large objects size</entry>> > + <entry>subject to the same limitations as single <symbol>relation size</symbol></entry>> > + <entry>LOs are stored in single pg_largeobjects relation</entry>> > + </row>> >> > Are you under the impression that we can store a single large object up to table size? The limit is 4TB, as documented elsewhere.>> I've wrongly put it, I've meant that pg_largeobject also consume OID> and as such are subject to 32TB limit.No, OID has nothing to do with the table size limit, they have to do with the max number of LOs in a DB. Also, perhaps the LO entries should be split into a separate patch. Since they are a special case and documented elsewhere, it's not clear their limits fit well here. Maybe they could.+ <para>+ For every TOAST-ed column (that is for field values wider than TOAST_TUPLE_TARGET+ [2040 bytes by default]), due to internal PostgreSQL implementation of using one+ shared global OID counter - today you cannot have more than 4,294,967,296 out-of-line+ values in a single table.+ </para>++ <para>\"column\" != \"field value\". Also the shared counter is the cause of the slowdown, but not the reason for the numeric limit. \"Today\" is irrelevant. Needs polish.> > + After 1000000 failed attempts to get a free OID, a first log> > + message is emitted \"still searching for an unused OID in relation\", but operation> > + won't stop and will try to continue until it finds the free OID.> >> > Too much detail?>> OK - partially removed.+ out-of-line value (The search for free OIDs won't stop until it finds the free OID).Still too much detail, and not very illuminating. If it *did* stop, how does that make it any less of a problem?+ Therefore, the OID shortages will eventually cause slowdowns to the+ INSERTs/UPDATE/COPY statements.Maybe this whole sentence is better as \"This will eventually cause slowdowns for INSERT, UPDATE, and COPY statements.\"> > + Please note that that the limit of 2^32> > + out-of-line TOAST values applies to the sum of both visible and invisible tuples.> >> > We didn't feel the need to mention this for normal tuples...>> Right, but this somewhat points reader to the queue-like scenario> mentioned by Nikita.That seems to be in response to you mentioning \"especially to steer people away from designing very wide non-partitioned tables\". 
In any case, I'm now thinking that everything in this sentence and after doesn't belong here. We don't need to tell people to vacuum, and we don't need to tell them about partitioning as a workaround -- it's a workaround for the table size limit, too, but we are just documenting the limits here.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Jun 2023 15:19:47 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 10:20 AM John Naylor <[email protected]>\nwrote:\n\nHi John,\n\nv3 is attached for review.\n\n> >\n> > - <entry></entry>\n> > + <entry>see note below on TOAST</entry>\n>\n> Maybe:\n> \"further limited by the number of TOAST-ed values; see note below\"\n\nFixed.\n\n> > I've wrongly put it, I've meant that pg_largeobject also consume OID\n> > and as such are subject to 32TB limit.\n> No, OID has nothing to do with the table size limit, they have to do with\nthe max number of LOs in a DB.\n\nClearly I needed more coffee back then...\n\n> Also, perhaps the LO entries should be split into a separate patch. Since\nthey are a special case and documented elsewhere, it's not clear their\nlimits fit well here. Maybe they could.\n\nWell, but those are *limits* of the engine and they seem to be pretty\nwidely chosen especially in migration scenarios (because they are the only\nones allowed to store over 1GB). I think we should warn the dangers of\nusing pg_largeobjects.\n\n> > + <para>\n> > + For every TOAST-ed column (that is for field values wider than\nTOAST_TUPLE_TARGET\n> > + [2040 bytes by default]), due to internal PostgreSQL implementation\nof using one\n> > + shared global OID counter - today you cannot have more than\n4,294,967,296 out-of-line\n> > + values in a single table.\n> > + </para>\n> > +\n> > + <para>\n\n> \"column\" != \"field value\". (..)\"Today\" is irrelevant. Needs polish.\n\nFixed.\n\n> Also the shared counter is the cause of the slowdown, but not the reason\nfor the numeric limit.\n\nIsn't it both? typedef Oid is unsigned int = 2^32, and according to\nGetNewOidWithIndex() logic if we exhaust the whole OID space it will hang\nindefinitely which has the same semantics as \"being impossible\"/permanent\nhang (?)\n\n> + out-of-line value (The search for free OIDs won't stop until it finds\nthe free OID).\n\n> Still too much detail, and not very illuminating. If it *did* stop, how\ndoes that make it any less of a problem?\n\nOK I see your point - so it's removed. As for the question: well, maybe we\ncould document that one day in known-performance-cliffs.sgml (or via Wiki)\ninstead of limits.sgml.\n\n> + Therefore, the OID shortages will eventually cause slowdowns to the\n> + INSERTs/UPDATE/COPY statements.\n\n> Maybe this whole sentence is better as \"This will eventually cause\nslowdowns for INSERT, UPDATE, and COPY statements.\"\n\nYes, it flows much better that way.\n\n> > > + Please note that that the limit of 2^32\n> > > + out-of-line TOAST values applies to the sum of both visible and\ninvisible tuples.\n> > >\n> > > We didn't feel the need to mention this for normal tuples...\n> >\n> > Right, but this somewhat points reader to the queue-like scenario\n> > mentioned by Nikita.\n\n> That seems to be in response to you mentioning \"especially to steer\npeople away from designing very wide non-partitioned tables\". In any case,\nI'm now thinking that everything in this sentence and after doesn't belong\nhere. We don't need to tell people to vacuum, and we don't need to tell\nthem about partitioning as a workaround -- it's a workaround for the table\nsize limit, too, but we are just documenting the limits here.\n\nOK, I've removed the visible/invisible fragments and workaround techniques.\n\n-J.",
"msg_date": "Wed, 5 Jul 2023 16:45:07 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 9:45 PM Jakub Wartak <[email protected]>\nwrote:\n\n> [v3]\n\n--- a/doc/src/sgml/limits.sgml\n+++ b/doc/src/sgml/limits.sgml\n@@ -10,6 +10,7 @@\n hard limits are reached.\n </para>\n\n+\n <table id=\"limits-table\">\n\n@@ -92,11 +93,25 @@\n <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n </row>\n\n- <row>\n- <entry>partition keys</entry>\n- <entry>32</entry>\n- <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n- </row>\n+ <row>\n+ <entry>partition keys</entry>\n+ <entry>32</entry>\n+ <entry>can be increased by recompiling\n<productname>PostgreSQL</productname></entry>\n+ </row>\n\nAhem.\n\n> > Also, perhaps the LO entries should be split into a separate patch.\nSince they are a special case and documented elsewhere, it's not clear\ntheir limits fit well here. Maybe they could.\n>\n> Well, but those are *limits* of the engine and they seem to be pretty\nwidely chosen especially in migration scenarios (because they are the only\nones allowed to store over 1GB). I think we should warn the dangers of\nusing pg_largeobjects.\n\nI see no argument here against splitting into a separate patch for later.\n\n> > Also the shared counter is the cause of the slowdown, but not the\nreason for the numeric limit.\n>\n> Isn't it both? typedef Oid is unsigned int = 2^32, and according to\nGetNewOidWithIndex() logic if we exhaust the whole OID space it will hang\nindefinitely which has the same semantics as \"being impossible\"/permanent\nhang (?)\n\nLooking again, I'm thinking the OID type size is more relevant for the\nfirst paragraph, and the shared/global aspect is more relevant for the\nsecond.\n\nThe last issue is how to separate the notes at the bottom, since there are\nnow two topics.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jul 5, 2023 at 9:45 PM Jakub Wartak <[email protected]> wrote:> [v3]--- a/doc/src/sgml/limits.sgml+++ b/doc/src/sgml/limits.sgml@@ -10,6 +10,7 @@ hard limits are reached. </para> + <table id=\"limits-table\">@@ -92,11 +93,25 @@ <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry> </row> - <row>- <entry>partition keys</entry>- <entry>32</entry>- <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>- </row>+ <row>+ <entry>partition keys</entry>+ <entry>32</entry>+ <entry>can be increased by recompiling <productname>PostgreSQL</productname></entry>+ </row>Ahem.> > Also, perhaps the LO entries should be split into a separate patch. Since they are a special case and documented elsewhere, it's not clear their limits fit well here. Maybe they could.>> Well, but those are *limits* of the engine and they seem to be pretty widely chosen especially in migration scenarios (because they are the only ones allowed to store over 1GB). I think we should warn the dangers of using pg_largeobjects. I see no argument here against splitting into a separate patch for later.> > Also the shared counter is the cause of the slowdown, but not the reason for the numeric limit.>> Isn't it both? 
typedef Oid is unsigned int = 2^32, and according to GetNewOidWithIndex() logic if we exhaust the whole OID space it will hang indefinitely which has the same semantics as \"being impossible\"/permanent hang (?)Looking again, I'm thinking the OID type size is more relevant for the first paragraph, and the shared/global aspect is more relevant for the second.The last issue is how to separate the notes at the bottom, since there are now two topics.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 8 Aug 2023 14:31:42 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Wed, Apr 26, 2023 at 4:48 AM David Rowley <[email protected]> wrote:\n>\n> On Sun, 23 Apr 2023, 3:42 am Gurjeet Singh, <[email protected]> wrote:\n>>\n>> I anticipate that edits to Appendix K Postgres Limits will prompt\n>> improving the note in there about the maximum column limit, That note\n>> is too wordy, and sometimes confusing, especially for the audience\n>> that it's written for: newcomers to Postgres ecosystem.\n>\n>\n> I doubt it, but feel free to submit a patch yourself which improves it without losing the information which the paragraph is trying to convey.\n\nI could not think of a way to reduce the wordiness without losing\ninformation. But since this page is usually consulted by those who are\nnew to Postgres, usually sent here by a search engine, I believe the\npage can be improved for that audience, without losing much in terms\nof accuracy.\n\nI agree the information provided in the paragraph about max-columns is\npertinent. But since the limits section is most often consulted by\npeople migrating from other database systems (hence the claim that\nthey're new to the Postgres ecosystem), I imagine the terminology used\nthere may cause confusion for the reader. So my suggestion is to make\nthat paragraph, and perhaps even that page, use fewer hacker/internals\nterms.\n\nTechnically, there may be a difference between table vs. relation, row\nvs. tuple, and column vs. field. But using those terms, seemingly\ninterchangeably on that page does not help the reader. The page\nneither describes the terms, nor links to their definitions, so a\nreader is left with more questions than before. For example,\n\n> rows per table:: limited by the number of tuples that can fit onto 4,294,967,295 pages\n\nA newcomer> what's a tuple in this context, and how is it similar\nto/different from a row?\n\nPlease see attached the proposed patch, which attempts to make that\nlanguage newcomer-friendly. The patch adds one link for TOAST, and\nreplaces Postgres-specific terms with generic ones.\n\nPS: I've retained line boundaries, so that `git diff --color-words\ndoc/src/sgml/limits.sgml` would make it easy to see the changes.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Sun, 20 Aug 2023 23:32:30 -0700",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 1:33 PM Gurjeet Singh <[email protected]> wrote:\n>\n> Please see attached the proposed patch, which attempts to make that\n> language newcomer-friendly. The patch adds one link for TOAST, and\n> replaces Postgres-specific terms with generic ones.\n\nThis is off-topic for this thread (which has a CF entry), and overall I\ndon't find the changes to be an improvement. (It wouldn't hurt to link to\nthe TOAST section, though.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Aug 21, 2023 at 1:33 PM Gurjeet Singh <[email protected]> wrote:>> Please see attached the proposed patch, which attempts to make that> language newcomer-friendly. The patch adds one link for TOAST, and> replaces Postgres-specific terms with generic ones.This is off-topic for this thread (which has a CF entry), and overall I don't find the changes to be an improvement. (It wouldn't hurt to link to the TOAST section, though.)--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 23 Aug 2023 11:02:44 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "\n\n> On 8 Aug 2023, at 12:31, John Naylor <[email protected]> wrote:\n> \n> > > Also the shared counter is the cause of the slowdown, but not the reason for the numeric limit.\n> >\n> > Isn't it both? typedef Oid is unsigned int = 2^32, and according to GetNewOidWithIndex() logic if we exhaust the whole OID space it will hang indefinitely which has the same semantics as \"being impossible\"/permanent hang (?)\n> \n> Looking again, I'm thinking the OID type size is more relevant for the first paragraph, and the shared/global aspect is more relevant for the second.\n> \n> The last issue is how to separate the notes at the bottom, since there are now two topics.\n\nJakub, do you have plans to address this feedback? Is the CF entry still relevant?\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 28 Mar 2024 17:09:30 +0500",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "Hi Andrey,\n\nOn Thu, Mar 28, 2024 at 1:09 PM Andrey M. Borodin <[email protected]> wrote:\n>\n>\n>\n> > On 8 Aug 2023, at 12:31, John Naylor <[email protected]> wrote:\n> >\n> > > > Also the shared counter is the cause of the slowdown, but not the reason for the numeric limit.\n> > >\n> > > Isn't it both? typedef Oid is unsigned int = 2^32, and according to GetNewOidWithIndex() logic if we exhaust the whole OID space it will hang indefinitely which has the same semantics as \"being impossible\"/permanent hang (?)\n> >\n> > Looking again, I'm thinking the OID type size is more relevant for the first paragraph, and the shared/global aspect is more relevant for the second.\n> >\n> > The last issue is how to separate the notes at the bottom, since there are now two topics.\n>\n> Jakub, do you have plans to address this feedback? Is the CF entry still relevant?\n\nYes; I've forgotten about this one and clearly I had problems\nformulating it in proper shape to be accepted. I've moved it to the\nnext CF now as this is not critical and I would prefer to help current\ntimesenistive CF. Anyone is welcome to help amend the patch...\n\n-J.\n\n\n",
"msg_date": "Wed, 3 Apr 2024 10:58:16 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 4:59 AM Jakub Wartak\n<[email protected]> wrote:\n> Yes; I've forgotten about this one and clearly I had problems\n> formulating it in proper shape to be accepted. I've moved it to the\n> next CF now as this is not critical and I would prefer to help current\n> timesenistive CF. Anyone is welcome to help amend the patch...\n\nI looked at your version and wrote something that is shorter and\ndoesn't touch any existing text. Here it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 14 May 2024 14:19:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Tue, May 14, 2024 at 8:19 PM Robert Haas <[email protected]> wrote:\n>\n> I looked at your version and wrote something that is shorter and\n> doesn't touch any existing text. Here it is.\n\nHi Robert, you are a real tactician here - thanks for whatever\nreferences the original problem! :) Maybe just slight hint nearby\nexpensive (to me indicating a just a CPU problem?):\n\nfinding an OID that is still free can become expensive ->\nfinding an OID that is still free can become expensive, thus\nsignificantly increasing INSERT/UPDATE response time.\n\n? (this potentially makes it easier in future to pinpoint the user's\nproblem to the this exact limitation; but feel free to ignore that too\nas I'm not attached to any of those versions)\n\n-J.\n\n\n",
"msg_date": "Mon, 20 May 2024 12:43:15 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Mon, May 20, 2024 at 5:43 PM Jakub Wartak\n<[email protected]> wrote:\n>\n> On Tue, May 14, 2024 at 8:19 PM Robert Haas <[email protected]> wrote:\n> >\n> > I looked at your version and wrote something that is shorter and\n> > doesn't touch any existing text. Here it is.\n>\n> Hi Robert, you are a real tactician here - thanks for whatever\n> references the original problem! :)\n\nI like this text as well.\n\n> Maybe just slight hint nearby\n> expensive (to me indicating a just a CPU problem?):\n>\n> finding an OID that is still free can become expensive ->\n> finding an OID that is still free can become expensive, thus\n> significantly increasing INSERT/UPDATE response time.\n>\n> ? (this potentially makes it easier in future to pinpoint the user's\n> problem to the this exact limitation; but feel free to ignore that too\n> as I'm not attached to any of those versions)\n\nExtra explicitness might be good. \"Response time\" seems like a\nnetworking concept, so possibly \", in turn slowing down INSERT/UPDATE\nstatements.\" I'm inclined to commit that way in a couple days, barring\nfurther comments.\n\nPS: Sorry for the delay in looking at the latest messages\n\n\n",
"msg_date": "Mon, 12 Aug 2024 11:01:57 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Mon, Aug 12, 2024 at 11:01 AM John Naylor <[email protected]> wrote:\n>\n> Extra explicitness might be good. \"Response time\" seems like a\n> networking concept, so possibly \", in turn slowing down INSERT/UPDATE\n> statements.\" I'm inclined to commit that way in a couple days, barring\n> further comments.\n\nThis is done.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 14:03:13 +0700",
"msg_from": "John Naylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
},
{
"msg_contents": "On Tue, Aug 20, 2024 at 9:03 AM John Naylor <[email protected]> wrote:\n\n> This is done.\n\nCool! Thanks John and Robert! :)\n\n-J.\n\n\n",
"msg_date": "Tue, 20 Aug 2024 09:34:26 +0200",
"msg_from": "Jakub Wartak <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Doc limitation update proposal: include out-of-line OID usage per\n TOAST-ed columns"
}
] |
[
{
"msg_contents": "There was discussion in [1] about improvements to list manipulation in\nseveral places. But since the discussion is not related to the topic in\nthat thread, fork a new thread here and attach a patch to show my\nthoughts.\n\nSome are just cosmetic changes by using macros. The others should have\nperformance gain from the avoidance of moving list entries. But I doubt\nthe performance gain can be noticed or measured, as currently there are\nonly a few places affected by the change. I still think the changes are\nworthwhile though, because it is very likely that future usage of the\nsame scenario can benefit from these changes.\n\n(Copying in David and Ranier. Ranier provided a patch about the changes\nin list.c, but I'm not using that one.)\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs49aakL%3DPP7NcTajCtDyaVUE-NMVMGpaLEKreYbQknkQWA%40mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Fri, 21 Apr 2023 15:34:42 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve list manipulation in several places"
},
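For readers skimming the thread, here is a rough illustration of the "avoid moving list entries" idea mentioned above. It is a sketch written against the public List API in src/include/nodes/pg_list.h and is meant to compile inside the backend; it is not the patch's implementation, and the function name prepend_to_copy() is made up for the example. Since PostgreSQL 13 a List is a resizable array, so lcons() has to shift every existing element one slot to the right; a pattern like lcons(datum, list_copy(src)) therefore walks the cells once to copy them and again to open a hole at the front, whereas building the prepended copy directly needs only one pass.

#include "postgres.h"

#include "nodes/pg_list.h"

/*
 * Sketch only: build "datum followed by the elements of src" in one pass,
 * instead of lcons(datum, list_copy(src)), which copies all cells and then
 * shifts them again to make room at the front.
 */
static List *
prepend_to_copy(void *datum, List *src)
{
    List       *result = list_make1(datum);
    ListCell   *lc;

    foreach(lc, src)
        result = lappend(result, lfirst(lc));

    return result;
}

The proposed lcons_copy()/lappend_copy() helpers presumably achieve the same effect from inside list.c, where the result array can also be sized exactly once instead of growing incrementally.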
{
"msg_contents": "Em sex., 21 de abr. de 2023 às 04:34, Richard Guo <[email protected]>\nescreveu:\n\n> There was discussion in [1] about improvements to list manipulation in\n> several places. But since the discussion is not related to the topic in\n> that thread, fork a new thread here and attach a patch to show my\n> thoughts.\n>\n> Some are just cosmetic changes by using macros. The others should have\n> performance gain from the avoidance of moving list entries. But I doubt\n> the performance gain can be noticed or measured, as currently there are\n> only a few places affected by the change. I still think the changes are\n> worthwhile though, because it is very likely that future usage of the\n> same scenario can benefit from these changes.\n>\n+1\n\nPerhaps list_delete_nth_cell needs to check NIL too?\n+ if (list == NIL)\n+ return NIL;\n\n+lcons_copy(void *datum, const List *list)\n+lappend_copy(const List *list, void *datum)\nlist param pointer can be const here not?\n\nregards,\nRanier Vilela\n\nEm sex., 21 de abr. de 2023 às 04:34, Richard Guo <[email protected]> escreveu:There was discussion in [1] about improvements to list manipulation inseveral places. But since the discussion is not related to the topic inthat thread, fork a new thread here and attach a patch to show mythoughts.Some are just cosmetic changes by using macros. The others should haveperformance gain from the avoidance of moving list entries. But I doubtthe performance gain can be noticed or measured, as currently there areonly a few places affected by the change. I still think the changes areworthwhile though, because it is very likely that future usage of thesame scenario can benefit from these changes.+1 Perhaps list_delete_nth_cell needs to check NIL too?+\tif (list == NIL)+\t\treturn NIL;+lcons_copy(void *datum, const List *list)+lappend_copy(const List *list, void *datum)list param pointer can be const here not?regards,Ranier Vilela",
"msg_date": "Fri, 21 Apr 2023 08:16:01 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On Fri, 21 Apr 2023 at 23:16, Ranier Vilela <[email protected]> wrote:\n> Perhaps list_delete_nth_cell needs to check NIL too?\n> + if (list == NIL)\n> + return NIL;\n\nWhich cell would you be deleting from an empty list?\n\nDavid\n\n\n",
"msg_date": "Sat, 22 Apr 2023 00:09:54 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "Em sex, 21 de abr de 2023 9:10 AM, David Rowley <[email protected]>\nescreveu:\n\n> On Fri, 21 Apr 2023 at 23:16, Ranier Vilela <[email protected]> wrote:\n> > Perhaps list_delete_nth_cell needs to check NIL too?\n> > + if (list == NIL)\n> > + return NIL;\n>\n> Which cell would you be deleting from an empty list?\n>\nNone.\nBut list_delete_nth_cel can checks a length of NIL list.\n\nPerhaps a assert?\n\nregards,\nRanier Vilela\n\nEm sex, 21 de abr de 2023 9:10 AM, David Rowley <[email protected]> escreveu:On Fri, 21 Apr 2023 at 23:16, Ranier Vilela <[email protected]> wrote:\n> Perhaps list_delete_nth_cell needs to check NIL too?\n> + if (list == NIL)\n> + return NIL;\n\nWhich cell would you be deleting from an empty list?None.But list_delete_nth_cel can checks a length of NIL list.Perhaps a assert?regards,Ranier Vilela",
"msg_date": "Fri, 21 Apr 2023 13:11:56 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On 21.04.23 09:34, Richard Guo wrote:\n> There was discussion in [1] about improvements to list manipulation in\n> several places. But since the discussion is not related to the topic in\n> that thread, fork a new thread here and attach a patch to show my\n> thoughts.\n> \n> Some are just cosmetic changes by using macros. The others should have\n> performance gain from the avoidance of moving list entries. But I doubt\n> the performance gain can be noticed or measured, as currently there are\n> only a few places affected by the change. I still think the changes are\n> worthwhile though, because it is very likely that future usage of the\n> same scenario can benefit from these changes.\n\nCan you explain the changes?\n\nMaybe this patch should be split up. It seems some of the changes are \ntrivial simplifications using existing APIs, while others introduce new \nfunctions.\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 18:55:14 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 12:55 AM Peter Eisentraut <\[email protected]> wrote:\n\n> On 21.04.23 09:34, Richard Guo wrote:\n> > There was discussion in [1] about improvements to list manipulation in\n> > several places. But since the discussion is not related to the topic in\n> > that thread, fork a new thread here and attach a patch to show my\n> > thoughts.\n> >\n> > Some are just cosmetic changes by using macros. The others should have\n> > performance gain from the avoidance of moving list entries. But I doubt\n> > the performance gain can be noticed or measured, as currently there are\n> > only a few places affected by the change. I still think the changes are\n> > worthwhile though, because it is very likely that future usage of the\n> > same scenario can benefit from these changes.\n>\n> Can you explain the changes?\n>\n> Maybe this patch should be split up. It seems some of the changes are\n> trivial simplifications using existing APIs, while others introduce new\n> functions.\n\n\nThanks for the suggestion. I've split the patch into two as attached.\n0001 is just a minor simplification by replacing lfirst(list_head(list))\nwith linitial(list). 0002 introduces new functions to reduce the\nmovement of list elements in several places so as to gain performance\nimprovement and benefit future callers.\n\nThanks\nRichard",
"msg_date": "Sun, 23 Apr 2023 14:42:53 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve list manipulation in several places"
},
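To make the 0001 change concrete for readers who don't open the patch: it is purely a spelling change using an existing macro. The sketch below is an invented example (the surrounding function is not from the patch); for a non-empty List the two expressions are equivalent, linitial() being the idiomatic way to read the first element.

#include "postgres.h"

#include "nodes/pg_list.h"

static void *
first_element(List *items)
{
    Assert(items != NIL);

    /* Before: fetch the head cell explicitly, then take its value. */
    /* return lfirst(list_head(items)); */

    /* After (0001-style): the same thing, spelled directly. */
    return linitial(items);
}

The 0002 part is the behavioural one, adding helpers intended to avoid re-shuffling the element array, as sketched earlier in the thread.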
{
"msg_contents": "On Fri, Apr 21, 2023 at 7:16 PM Ranier Vilela <[email protected]> wrote:\n\n> +lcons_copy(void *datum, const List *list)\n> +lappend_copy(const List *list, void *datum)\n> list param pointer can be const here not?\n>\n\nCorrect. Good point. V2 patch does that.\n\nThanks\nRichard\n\nOn Fri, Apr 21, 2023 at 7:16 PM Ranier Vilela <[email protected]> wrote:+lcons_copy(void *datum, const List *list)+lappend_copy(const List *list, void *datum)list param pointer can be const here not?Correct. Good point. V2 patch does that.ThanksRichard",
"msg_date": "Sun, 23 Apr 2023 14:57:35 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "Richard Guo <[email protected]> 于2023年4月21日周五 15:35写道:\n\n> There was discussion in [1] about improvements to list manipulation in\n> several places. But since the discussion is not related to the topic in\n> that thread, fork a new thread here and attach a patch to show my\n> thoughts.\n>\n> Some are just cosmetic changes by using macros. The others should have\n> performance gain from the avoidance of moving list entries. But I doubt\n> the performance gain can be noticed or measured, as currently there are\n> only a few places affected by the change. I still think the changes are\n> worthwhile though, because it is very likely that future usage of the\n> same scenario can benefit from these changes.\n>\n\n I doubt the performance gain from lappend_copy func. new_tail_cell in\nlappend may not enter enlarge_list in most cases, because we\nmay allocate extra cells in new_list(see the comment in new_list).\n\n\n\n>\n> (Copying in David and Ranier. Ranier provided a patch about the changes\n> in list.c, but I'm not using that one.)\n>\n> [1]\n> https://www.postgresql.org/message-id/CAMbWs49aakL%3DPP7NcTajCtDyaVUE-NMVMGpaLEKreYbQknkQWA%40mail.gmail.com\n>\n> Thanks\n> Richard\n>\n\nRichard Guo <[email protected]> 于2023年4月21日周五 15:35写道:There was discussion in [1] about improvements to list manipulation inseveral places. But since the discussion is not related to the topic inthat thread, fork a new thread here and attach a patch to show mythoughts.Some are just cosmetic changes by using macros. The others should haveperformance gain from the avoidance of moving list entries. But I doubtthe performance gain can be noticed or measured, as currently there areonly a few places affected by the change. I still think the changes areworthwhile though, because it is very likely that future usage of thesame scenario can benefit from these changes. I doubt the performance gain from lappend_copy func. new_tail_cell in lappend may not enter enlarge_list in most cases, because wemay allocate extra cells in new_list(see the comment in new_list). (Copying in David and Ranier. Ranier provided a patch about the changesin list.c, but I'm not using that one.)[1] https://www.postgresql.org/message-id/CAMbWs49aakL%3DPP7NcTajCtDyaVUE-NMVMGpaLEKreYbQknkQWA%40mail.gmail.comThanksRichard",
"msg_date": "Sun, 23 Apr 2023 18:13:41 +0800",
"msg_from": "tender wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On 23.04.23 08:42, Richard Guo wrote:\n> Thanks for the suggestion. I've split the patch into two as attached.\n> 0001 is just a minor simplification by replacing lfirst(list_head(list))\n> with linitial(list). 0002 introduces new functions to reduce the\n> movement of list elements in several places so as to gain performance\n> improvement and benefit future callers.\n\nThese look sensible to me. If you could show some numbers that support \nthe claim that there is a performance advantage, it would be even more \nconvincing.\n\n\n\n",
"msg_date": "Mon, 8 May 2023 17:22:28 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On 2023-May-08, Peter Eisentraut wrote:\n\n> On 23.04.23 08:42, Richard Guo wrote:\n> > Thanks for the suggestion. I've split the patch into two as attached.\n> > 0001 is just a minor simplification by replacing lfirst(list_head(list))\n> > with linitial(list). 0002 introduces new functions to reduce the\n> > movement of list elements in several places so as to gain performance\n> > improvement and benefit future callers.\n> \n> These look sensible to me. If you could show some numbers that support the\n> claim that there is a performance advantage, it would be even more\n> convincing.\n\n0001 looks fine.\n\nThe problem I see is that each of these new functions has a single\ncaller, and the only one that looks like it could have a performance\nadvantage is list_copy_move_nth_to_head() (which is the weirdest of the\nlot). I'm inclined not to have any of these single-use functions unless\na performance case can be made for them.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 8 May 2023 19:25:58 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "Em seg., 8 de mai. de 2023 às 14:26, Alvaro Herrera <[email protected]>\nescreveu:\n\n> On 2023-May-08, Peter Eisentraut wrote:\n>\n> > On 23.04.23 08:42, Richard Guo wrote:\n> > > Thanks for the suggestion. I've split the patch into two as attached.\n> > > 0001 is just a minor simplification by replacing\n> lfirst(list_head(list))\n> > > with linitial(list). 0002 introduces new functions to reduce the\n> > > movement of list elements in several places so as to gain performance\n> > > improvement and benefit future callers.\n> >\n> > These look sensible to me. If you could show some numbers that support\n> the\n> > claim that there is a performance advantage, it would be even more\n> > convincing.\n>\n> 0001 looks fine.\n>\n> The problem I see is that each of these new functions has a single\n> caller, and the only one that looks like it could have a performance\n> advantage is list_copy_move_nth_to_head() (which is the weirdest of the\n> lot). I'm inclined not to have any of these single-use functions unless\n> a performance case can be made for them.\n>\nI think you missed list_nth_xid, It makes perfect sense to exist.\n\nregards,\nRanier Vilela\n\nEm seg., 8 de mai. de 2023 às 14:26, Alvaro Herrera <[email protected]> escreveu:On 2023-May-08, Peter Eisentraut wrote:\n\n> On 23.04.23 08:42, Richard Guo wrote:\n> > Thanks for the suggestion. I've split the patch into two as attached.\n> > 0001 is just a minor simplification by replacing lfirst(list_head(list))\n> > with linitial(list). 0002 introduces new functions to reduce the\n> > movement of list elements in several places so as to gain performance\n> > improvement and benefit future callers.\n> \n> These look sensible to me. If you could show some numbers that support the\n> claim that there is a performance advantage, it would be even more\n> convincing.\n\n0001 looks fine.\n\nThe problem I see is that each of these new functions has a single\ncaller, and the only one that looks like it could have a performance\nadvantage is list_copy_move_nth_to_head() (which is the weirdest of the\nlot). I'm inclined not to have any of these single-use functions unless\na performance case can be made for them.I think you missed list_nth_xid, It makes perfect sense to exist.regards,Ranier Vilela",
"msg_date": "Mon, 8 May 2023 14:48:28 -0300",
"msg_from": "Ranier Vilela <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On Mon, May 8, 2023 at 11:22 PM Peter Eisentraut <\[email protected]> wrote:\n\n> On 23.04.23 08:42, Richard Guo wrote:\n> > Thanks for the suggestion. I've split the patch into two as attached.\n> > 0001 is just a minor simplification by replacing lfirst(list_head(list))\n> > with linitial(list). 0002 introduces new functions to reduce the\n> > movement of list elements in several places so as to gain performance\n> > improvement and benefit future callers.\n>\n> These look sensible to me. If you could show some numbers that support\n> the claim that there is a performance advantage, it would be even more\n> convincing.\n\n\nThanks Peter for looking at those patches. I tried to devise a query to\nshow performance gain but did not succeed :-(. So I begin to wonder if\n0002 is worthwhile to do, as it seems that it does not solve any real\nproblem.\n\nThanks\nRichard\n\nOn Mon, May 8, 2023 at 11:22 PM Peter Eisentraut <[email protected]> wrote:On 23.04.23 08:42, Richard Guo wrote:\n> Thanks for the suggestion. I've split the patch into two as attached.\n> 0001 is just a minor simplification by replacing lfirst(list_head(list))\n> with linitial(list). 0002 introduces new functions to reduce the\n> movement of list elements in several places so as to gain performance\n> improvement and benefit future callers.\n\nThese look sensible to me. If you could show some numbers that support \nthe claim that there is a performance advantage, it would be even more \nconvincing.Thanks Peter for looking at those patches. I tried to devise a query toshow performance gain but did not succeed :-(. So I begin to wonder if0002 is worthwhile to do, as it seems that it does not solve any realproblem.ThanksRichard",
"msg_date": "Tue, 9 May 2023 11:07:47 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On Tue, May 9, 2023 at 1:26 AM Alvaro Herrera <[email protected]>\nwrote:\n\n> The problem I see is that each of these new functions has a single\n> caller, and the only one that looks like it could have a performance\n> advantage is list_copy_move_nth_to_head() (which is the weirdest of the\n> lot). I'm inclined not to have any of these single-use functions unless\n> a performance case can be made for them.\n\n\nYeah, maybe this is the reason I failed to devise a query that shows any\nperformance gain. I tried with a query which makes the 'all_pathkeys'\nin sort_inner_and_outer being length of 500 and still cannot see any\nnotable performance improvements gained by list_copy_move_nth_to_head.\nMaybe the cost of other parts of planning swamps the performance gain\nhere? Now I agree that maybe 0002 is not worthwhile to do.\n\nThanks\nRichard\n\nOn Tue, May 9, 2023 at 1:26 AM Alvaro Herrera <[email protected]> wrote:\nThe problem I see is that each of these new functions has a single\ncaller, and the only one that looks like it could have a performance\nadvantage is list_copy_move_nth_to_head() (which is the weirdest of the\nlot). I'm inclined not to have any of these single-use functions unless\na performance case can be made for them.Yeah, maybe this is the reason I failed to devise a query that shows anyperformance gain. I tried with a query which makes the 'all_pathkeys'in sort_inner_and_outer being length of 500 and still cannot see anynotable performance improvements gained by list_copy_move_nth_to_head.Maybe the cost of other parts of planning swamps the performance gainhere? Now I agree that maybe 0002 is not worthwhile to do.ThanksRichard",
"msg_date": "Tue, 9 May 2023 11:13:44 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On Tue, May 9, 2023 at 1:48 AM Ranier Vilela <[email protected]> wrote:\n\n> I think you missed list_nth_xid, It makes perfect sense to exist.\n>\n\nIt seems that list_nth_xid is more about simplification. So maybe we\nshould put it in 0001?\n\nThanks\nRichard\n\nOn Tue, May 9, 2023 at 1:48 AM Ranier Vilela <[email protected]> wrote:I think you missed list_nth_xid, It makes perfect sense to exist.It seems that list_nth_xid is more about simplification. So maybe weshould put it in 0001?ThanksRichard",
"msg_date": "Tue, 9 May 2023 11:15:42 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On 2023-May-08, Ranier Vilela wrote:\n\n> Em seg., 8 de mai. de 2023 às 14:26, Alvaro Herrera <[email protected]>\n> escreveu:\n> \n> > The problem I see is that each of these new functions has a single\n> > caller, and the only one that looks like it could have a performance\n> > advantage is list_copy_move_nth_to_head() (which is the weirdest of the\n> > lot). I'm inclined not to have any of these single-use functions unless\n> > a performance case can be made for them.\n> >\n> I think you missed list_nth_xid, It makes perfect sense to exist.\n\nI saw that one; it's just syntactic sugar, just like list_nth_int and\nlist_nth_oid, except it has only one possible callsite instead of a\ndozen like those others. I see no harm in that function, but no\npractical advantage to it either. Xid lists are a very fringe feature,\nthere being exactly one place in the whole server that uses them. \n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 9 May 2023 10:01:38 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On 09.05.23 05:13, Richard Guo wrote:\n> \n> On Tue, May 9, 2023 at 1:26 AM Alvaro Herrera <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> The problem I see is that each of these new functions has a single\n> caller, and the only one that looks like it could have a performance\n> advantage is list_copy_move_nth_to_head() (which is the weirdest of the\n> lot). I'm inclined not to have any of these single-use functions unless\n> a performance case can be made for them.\n> \n> \n> Yeah, maybe this is the reason I failed to devise a query that shows any\n> performance gain. I tried with a query which makes the 'all_pathkeys'\n> in sort_inner_and_outer being length of 500 and still cannot see any\n> notable performance improvements gained by list_copy_move_nth_to_head.\n> Maybe the cost of other parts of planning swamps the performance gain\n> here? Now I agree that maybe 0002 is not worthwhile to do.\n\nI have committed patch 0001. Since you have withdrawn 0002, this closes \nthe commit fest item.\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:41:07 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve list manipulation in several places"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 5:41 PM Peter Eisentraut <\[email protected]> wrote:\n\n> On 09.05.23 05:13, Richard Guo wrote:\n> > Yeah, maybe this is the reason I failed to devise a query that shows any\n> > performance gain. I tried with a query which makes the 'all_pathkeys'\n> > in sort_inner_and_outer being length of 500 and still cannot see any\n> > notable performance improvements gained by list_copy_move_nth_to_head.\n> > Maybe the cost of other parts of planning swamps the performance gain\n> > here? Now I agree that maybe 0002 is not worthwhile to do.\n>\n> I have committed patch 0001. Since you have withdrawn 0002, this closes\n> the commit fest item.\n\n\nThanks for pushing it and closing the item!\n\nThanks\nRichard\n\nOn Mon, Jul 3, 2023 at 5:41 PM Peter Eisentraut <[email protected]> wrote:On 09.05.23 05:13, Richard Guo wrote:\n> Yeah, maybe this is the reason I failed to devise a query that shows any\n> performance gain. I tried with a query which makes the 'all_pathkeys'\n> in sort_inner_and_outer being length of 500 and still cannot see any\n> notable performance improvements gained by list_copy_move_nth_to_head.\n> Maybe the cost of other parts of planning swamps the performance gain\n> here? Now I agree that maybe 0002 is not worthwhile to do.\n\nI have committed patch 0001. Since you have withdrawn 0002, this closes \nthe commit fest item.Thanks for pushing it and closing the item!ThanksRichard",
"msg_date": "Tue, 4 Jul 2023 08:48:00 +0800",
"msg_from": "Richard Guo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve list manipulation in several places"
}
] |
[
{
"msg_contents": "Hi Michael\r\nthank you for your explanation.\r\nactually, some location can be tricky to add.\r\nit looks like CREATE, but it’s actually ALTER, should call InvokeObjectPostAlterHook instead of InvokeObjectPostCreateHook? eg.,CREATE OR REPLACE, CREATE TYPE(perfecting shell type)\r\n\r\n\r\nThank you\r\n\r\n------------------ Original ------------------\r\nFrom: Michael Paquier <[email protected]>\r\nDate: Tue,Apr 18,2023 0:34 PM\r\nTo: Legs Mansion <[email protected]>\r\nCc: pgsql-hackers <[email protected]>\r\nSubject: Re: A Question about InvokeObjectPostAlterHook\r\n\r\n\r\n\r\nOn Tue, Apr 18, 2023 at 09:51:30AM +0800, Legs Mansion wrote:\r\n> Recently, I ran into a problem, InvokeObjectPostAlterHook was\r\n> implemented for sepgsql, sepgsql use it to determine whether to\r\n> check permissions during certain operations. But\r\n> InvokeObjectPostAlterHook doesn't handle all of the alter's\r\n> behavior, at least the table is not controlled. e.g., ALTER \r\n> TABLE... ENABLE/DISABLE ROW LEVEL SECURITY,ALTER TABLE ... DISABLE\r\n> TRIGGER, GRANT and REVOKE and so on. \r\n> Whether InvokeObjectPostAlterHook is not fully controlled? it's\r\n> a bug? \r\n\r\nYes, tablecmds.c has some holes and these are added when there is a\r\nask for it, as far as I recall. In some cases, these locations can be\r\ntricky to add, so usually they require an independent analysis. For\r\nexample, EnableDisableTrigger() has one AOT for the trigger itself,\r\nbut not for the relation changed in tablecmds.c, as you say, anyway we\r\nshould be careful with cross-dependencies.\r\n\r\nNote that 90efa2f has made the tests for OATs much easier, and there\r\nis no need to rely only on sepgsql for that. (Even if test_oat_hooks\r\nhas been having some stability issues with namespace lookups because\r\nof the position on the namespace search hook.)\r\n\r\nAlso, the additions of InvokeObjectPostAlterHook() are historically\r\nconservative because they create behavior changes in stable branches,\r\nmeaning no backpatch. See a995b37 or 7b56584 as past examples, for\r\nexample.\r\n\r\nNote that the development of PostgreSQL 16 has just finished, so now\r\nmay not be the best moment to add these extra AOT calls, but these\r\ncould be added in 17~ for sure at the beginning of July once the next\r\ndevelopment cycle begins.\r\n\r\nAttached would be what I think would be required to add OATs for RLS,\r\ntriggers and rules, for example. There are much more of these at\r\nquick glance, still that's one step in providing more checks. Perhaps\r\nyou'd like to expand this patch with more ALTER TABLE subcommands\r\ncovered?\r\n--\r\nMichael\nHi Michaelthank you for your explanation.actually, some location can be tricky to add.it looks like CREATE, but it’s actually ALTER, should call InvokeObjectPostAlterHook instead of InvokeObjectPostCreateHook? eg.,CREATE OR REPLACE, CREATE TYPE(perfecting shell type)Thank you------------------ Original ------------------From: Michael Paquier <[email protected]>Date: Tue,Apr 18,2023 0:34 PMTo: Legs Mansion <[email protected]>Cc: pgsql-hackers <[email protected]>Subject: Re: A Question about InvokeObjectPostAlterHookOn Tue, Apr 18, 2023 at 09:51:30AM +0800, Legs Mansion wrote:> Recently, I ran into a problem, InvokeObjectPostAlterHook was> implemented for sepgsql, sepgsql use it to determine whether to> check permissions during certain operations. But> InvokeObjectPostAlterHook doesn't handle all of the alter's> behavior, at least the table is not controlled. e.g., ALTER > TABLE... 
ENABLE/DISABLE ROW LEVEL SECURITY,ALTER TABLE ... DISABLE> TRIGGER, GRANT and REVOKE and so on. > Whether InvokeObjectPostAlterHook is not fully controlled? it's> a bug? Yes, tablecmds.c has some holes and these are added when there is aask for it, as far as I recall. In some cases, these locations can betricky to add, so usually they require an independent analysis. Forexample, EnableDisableTrigger() has one AOT for the trigger itself,but not for the relation changed in tablecmds.c, as you say, anyway weshould be careful with cross-dependencies.Note that 90efa2f has made the tests for OATs much easier, and thereis no need to rely only on sepgsql for that. (Even if test_oat_hookshas been having some stability issues with namespace lookups becauseof the position on the namespace search hook.)Also, the additions of InvokeObjectPostAlterHook() are historicallyconservative because they create behavior changes in stable branches,meaning no backpatch. See a995b37 or 7b56584 as past examples, forexample.Note that the development of PostgreSQL 16 has just finished, so nowmay not be the best moment to add these extra AOT calls, but thesecould be added in 17~ for sure at the beginning of July once the nextdevelopment cycle begins.Attached would be what I think would be required to add OATs for RLS,triggers and rules, for example. There are much more of these atquick glance, still that's one step in providing more checks. Perhapsyou'd like to expand this patch with more ALTER TABLE subcommandscovered?--Michael",
"msg_date": "Fri, 21 Apr 2023 16:16:10 +0800",
"msg_from": "\"=?utf-8?B?IExlZ3MgTWFuc2lvbg==?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A Question about InvokeObjectPostAlterHook"
},
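For context, the kind of addition under discussion is a post-alter hook call on the relation after an ALTER TABLE subcommand has updated the catalog. The sketch below is an illustrative assumption modeled on existing call sites in tablecmds.c; it is not taken from the patch attached upthread, and the function body is elided.

```c
/* Hypothetical placement for ALTER TABLE ... ENABLE/DISABLE ROW LEVEL SECURITY. */
static void
ATExecSetRowSecurity(Relation rel, bool rls)
{
	/* ... the catalog update of pg_class.relrowsecurity happens here ... */

	/* Let object access hooks (e.g. sepgsql) know the relation was altered. */
	InvokeObjectPostAlterHook(RelationRelationId,
							  RelationGetRelid(rel),
							  0);
}
```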
{
"msg_contents": "On Fri, Apr 21, 2023 at 04:16:10PM +0800, Legs Mansion wrote:\n> actually, some location can be tricky to add.\n> it looks like CREATE, but it’s actually ALTER, should call\n> InvokeObjectPostAlterHook instead\n> of InvokeObjectPostCreateHook? eg.,CREATE OR REPLACE, CREATE\n> TYPE(perfecting shell type) \n\nSure, it could be possible to plaster more of these depending on the\ncontrol one may want with OATs. Coming back to the main you point of\nthe thread you were making, what are the use cases with ALTER TABLE\nyou were interested in for sepgsql on top of what the patch I sent\nupthread is doing?\n\nNote that it is perfectly fine to do the changes incrementally, though\nI'd rather add some proper coverage for each one of them using the\nmodule I've patched (sepgsql's tests are annoying to setup and run).\n--\nMichael",
"msg_date": "Sat, 22 Apr 2023 17:44:13 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A Question about InvokeObjectPostAlterHook"
}
] |
[
{
"msg_contents": "Hi.\n\nWe've found that in cases like the one attached, when we insert into \nforeign partition with batch_size set, buffer refcount leak is detected.\n\nThe above example we see a dozen of similar messages:\n\nrepro_small.sql:31: WARNING: buffer refcount leak: [14621] \n(rel=base/16718/16732, blockNum=54, flags=0x93800000\n\nThe issue was introduced in the following commit\n\ncommit b676ac443b6a83558d4701b2dd9491c0b37e17c4\nAuthor: Tomas Vondra <[email protected]>\nDate: Fri Jun 11 20:19:48 2021 +0200\n\n Optimize creation of slots for FDW bulk inserts\n\nIn this commit we avoid recreating slots for each batch. But it seems \nthat created slots should still be cleared in the end of \nExecBatchInsert().\n\nAt least the attached patch seems to fix the issue.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Fri, 21 Apr 2023 13:07:03 +0300",
"msg_from": "Alexander Pyhalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "buffer refcount leak in foreign batch insert code"
},
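The fix sketched in this report is to drop the tuples held in the reused batch slots once the batch has been flushed, so the buffer pins they carry are released right away. The fragment below is an illustration of that idea, not the attached patch; the names numSlots, slots and planSlots are assumptions.

```c
/*
 * At the end of ExecBatchInsert(), after the FDW has processed the batch:
 * clear the reused slots so any buffer pin held by a stored tuple is
 * released instead of lingering until the next batch overwrites the slot.
 */
for (int i = 0; i < numSlots; i++)
{
	ExecClearTuple(slots[i]);
	ExecClearTuple(planSlots[i]);
}
```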
{
"msg_contents": "On Fri, Apr 21, 2023 at 01:07:03PM +0300, Alexander Pyhalov wrote:\n> We've found that in cases like the one attached, when we insert into foreign\n> partition with batch_size set, buffer refcount leak is detected.\n> \n> The above example we see a dozen of similar messages:\n> \n> repro_small.sql:31: WARNING: buffer refcount leak: [14621]\n> (rel=base/16718/16732, blockNum=54, flags=0x93800000\n\nIndeed, nice repro! That's obviously wrong, I'll look into that.\n--\nMichael",
"msg_date": "Fri, 21 Apr 2023 19:16:03 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: buffer refcount leak in foreign batch insert code"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 07:16:03PM +0900, Michael Paquier wrote:\n> On Fri, Apr 21, 2023 at 01:07:03PM +0300, Alexander Pyhalov wrote:\n>> We've found that in cases like the one attached, when we insert into foreign\n>> partition with batch_size set, buffer refcount leak is detected.\n>> \n>> The above example we see a dozen of similar messages:\n>> \n>> repro_small.sql:31: WARNING: buffer refcount leak: [14621]\n>> (rel=base/16718/16732, blockNum=54, flags=0x93800000\n\nThis reminds me a lot of the recent multi-insert logic added to\nvarious DDL code paths for catalogs, bref..\n\nThe number of slots ready for a batch is tracked by ri_NumSlots, and\nit is reset to 0 each time a batch has been processed. How about\nresetting the counter at the same place as the slots are cleared, at\nthe end of ExecBatchInsert() as the same time as when the slots are\ncleared?\n\nI was wondering as well about resetting the slot just before copying \nsomething into ri_Slots in ExecInsert() (actually close to the slot\ncopy), which is something that would make the operation cheaper for\nlarge batches because the last batch could be cleaned up with\nExecEndModifyTable(), but this makes the code much messier when a\ntuple is added into one of the slots as we would need to switch \nback-and-forth with es_query_cxt from what I can see, because th\nbatches are inserted before any slot initialization is done. In\nshort, I'm okay with what's proposed here and clean up things at the\nsame time as ri_NumSlots.\n\nAnother thing was the interaction of this change with triggers\n(delete, insert with returning and batches to flush pending inserts,\netc.), and that looked correct to me (I have plugged in some of these\ntriggers noisy on notices on the relations of the partitions tree).\n\nSelf-reminder: the tests of postgres_fdw are rather long now, perhaps\nthese should be split into more files in the future..\n\nThe attached is what I am finishing with, with a minimal regression\ntest added to postgres_fdw. Two partitions are enough. Alexander,\nwhat do you think?\n--\nMichael",
"msg_date": "Mon, 24 Apr 2023 09:57:10 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: buffer refcount leak in foreign batch insert code"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 09:57:10AM +0900, Michael Paquier wrote:\n> The attached is what I am finishing with, with a minimal regression\n> test added to postgres_fdw. Two partitions are enough.\n\nWell, I have gone through that again this morning, and applied the fix\ndown to 14. The buildfarm is digesting it fine.\n--\nMichael",
"msg_date": "Tue, 25 Apr 2023 10:30:42 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: buffer refcount leak in foreign batch insert code"
},
{
"msg_contents": "Michael Paquier писал 2023-04-25 04:30:\n> On Mon, Apr 24, 2023 at 09:57:10AM +0900, Michael Paquier wrote:\n>> The attached is what I am finishing with, with a minimal regression\n>> test added to postgres_fdw. Two partitions are enough.\n> \n> Well, I have gone through that again this morning, and applied the fix\n> down to 14. The buildfarm is digesting it fine.\n> --\n> Michael\n\nThank you. Sorry for the late response, was on vacation.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Tue, 02 May 2023 10:03:15 +0300",
"msg_from": "Alexander Pyhalov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: buffer refcount leak in foreign batch insert code"
}
] |
[
{
"msg_contents": "Hello, hackers,\n\nwe have a duplicate line, declaration of default_multirange_selectivity() in\nsrc/backend/utils/adt/multirangetypes_selfuncs.c:\n\nstatic double default_multirange_selectivity(Oid operator);\nstatic double default_multirange_selectivity(Oid operator);\n\nAffected branches: REL_14_STABLE and above.\n\nBoth lines come from the same commit:\n > commit 6df7a9698bb036610c1e8c6d375e1be38cb26d5f\n > Author: Alexander Korotkov <[email protected]>\n > Date: Sun Dec 20 07:20:33 2020\n >\n > Multirange datatypes\n\nNo harm from this duplication, still, I suggest to clean it up for \ntidiness' sake.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:32:01 +0300",
"msg_from": "Anton Voloshin <[email protected]>",
"msg_from_op": true,
"msg_subject": "duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "> On 21 Apr 2023, at 12:32, Anton Voloshin <[email protected]> wrote:\n\n> we have a duplicate line, declaration of default_multirange_selectivity() in\n> src/backend/utils/adt/multirangetypes_selfuncs.c:\n> \n> static double default_multirange_selectivity(Oid operator);\n> static double default_multirange_selectivity(Oid operator);\n\nNice catch.\n\n> No harm from this duplication, still, I suggest to clean it up for tidiness' sake.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 12:34:11 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "Hi!\n\nOn Fri, 21 Apr 2023 at 14:34, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 21 Apr 2023, at 12:32, Anton Voloshin <[email protected]> wrote:\n>\n> > we have a duplicate line, declaration of default_multirange_selectivity() in\n> > src/backend/utils/adt/multirangetypes_selfuncs.c:\n> >\n> > static double default_multirange_selectivity(Oid operator);\n> > static double default_multirange_selectivity(Oid operator);\n>\n> Nice catch.\n>\n> > No harm from this duplication, still, I suggest to clean it up for tidiness' sake.\n>\n> +1\n>\nThe patch is attached. Anyone to commit?\n\nPavel Borisov\nSupabase",
"msg_date": "Fri, 21 Apr 2023 14:45:16 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "On 21/04/2023 13:45, Pavel Borisov wrote:\n> The patch is attached. Anyone to commit?\n\nSpeaking of duplicates, I just found another one:\n > break;\n > break;\nin src/interfaces/ecpg/preproc/variable.c\n(in all stable branches).\n\nSorry for missing it in the previous letter.\n\nAdditional patch attached. Or both could go in the same commit, it's up \nto committer.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru",
"msg_date": "Fri, 21 Apr 2023 13:58:34 +0300",
"msg_from": "Anton Voloshin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "> On 21 Apr 2023, at 12:58, Anton Voloshin <[email protected]> wrote:\n> \n> On 21/04/2023 13:45, Pavel Borisov wrote:\n>> The patch is attached. Anyone to commit?\n> \n> Speaking of duplicates, I just found another one:\n> > break;\n> > break;\n> in src/interfaces/ecpg/preproc/variable.c\n> (in all stable branches).\n\nIndeed, coming in via 086cf1458 it's over a decade old.\n\n> Additional patch attached. Or both could go in the same commit, it's up to committer.\n\nI'll take care of these in a bit (unless someone finds more, or objects)\nbackpatching them to their respective origins branches.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:14:45 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "Hi!\n\nOn Fri, 21 Apr 2023 at 15:14, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 21 Apr 2023, at 12:58, Anton Voloshin <[email protected]> wrote:\n> >\n> > On 21/04/2023 13:45, Pavel Borisov wrote:\n> >> The patch is attached. Anyone to commit?\n> >\n> > Speaking of duplicates, I just found another one:\n> > > break;\n> > > break;\n> > in src/interfaces/ecpg/preproc/variable.c\n> > (in all stable branches).\n>\n> Indeed, coming in via 086cf1458 it's over a decade old.\n>\n> > Additional patch attached. Or both could go in the same commit, it's up to committer.\n>\n> I'll take care of these in a bit (unless someone finds more, or objects)\n> backpatching them to their respective origins branches.\n>\n> --\n> Daniel Gustafsson\nTechnically patches 0001 and 0002 in the thread above don't form\npatchset i.e. 0002 will not apply over 0001. Fixed this in v2.\n(They could be merged into one but as they fix completely unrelated\nthings, I think a better way to commit them separately.)\n\nPavel.",
"msg_date": "Fri, 21 Apr 2023 15:21:24 +0400",
"msg_from": "Pavel Borisov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "On 21/04/2023 14:14, Daniel Gustafsson wrote:\n> I'll take care of these in a bit (unless someone finds more, or objects)\n> backpatching them to their respective origins branches\n\nThanks!\n\nI went through master with\nfind . -name \"*.[ch]\" -exec bash -c 'echo {}; uniq -d {}' \\;|sed -E \n'/^[[:space:]*]*$/d;'\n\nand could not find any other obvious unintentional duplicates, except \nthe two mentioned already. There seems to be some strange duplicates in \nsnowball, but that's external and generated code and I could not figure \nout quickly whether those are intentional or not. Hopefully, they are \nharmless or intentional.\n\nAll other duplicated lines I've analyzed seem to be intentional.\n\nGranted, I've mostly ignored lines without ';', also I could have missed \nsomething, but currently I'm not aware of any other unintentionally \nduplicated lines.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n",
"msg_date": "Fri, 21 Apr 2023 15:20:54 +0300",
"msg_from": "Anton Voloshin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 2:21 PM Pavel Borisov <[email protected]> wrote:\n> On Fri, 21 Apr 2023 at 15:14, Daniel Gustafsson <[email protected]> wrote:\n> >\n> > > On 21 Apr 2023, at 12:58, Anton Voloshin <[email protected]> wrote:\n> > >\n> > > On 21/04/2023 13:45, Pavel Borisov wrote:\n> > >> The patch is attached. Anyone to commit?\n> > >\n> > > Speaking of duplicates, I just found another one:\n> > > > break;\n> > > > break;\n> > > in src/interfaces/ecpg/preproc/variable.c\n> > > (in all stable branches).\n> >\n> > Indeed, coming in via 086cf1458 it's over a decade old.\n> >\n> > > Additional patch attached. Or both could go in the same commit, it's up to committer.\n> >\n> > I'll take care of these in a bit (unless someone finds more, or objects)\n> > backpatching them to their respective origins branches.\n> >\n> > --\n> > Daniel Gustafsson\n> Technically patches 0001 and 0002 in the thread above don't form\n> patchset i.e. 0002 will not apply over 0001. Fixed this in v2.\n> (They could be merged into one but as they fix completely unrelated\n> things, I think a better way to commit them separately.)\n\nI wonder if we should backpatch this. On the one hand, this is not\ncritical, and we may skip backpatching. On the other hand,\nbackpatching will evade unnecessary code differences between major\nversions and potentially simplify further backpatching.\n\nI would prefer backpathing. Other opinions?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 23 Apr 2023 14:58:42 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "> On 23 Apr 2023, at 13:59, Alexander Korotkov <[email protected]> wrote:\n> \n> On Fri, Apr 21, 2023 at 2:21 PM Pavel Borisov <[email protected]> wrote:\n>>> On Fri, 21 Apr 2023 at 15:14, Daniel Gustafsson <[email protected]> wrote:\n>>> \n>>>> On 21 Apr 2023, at 12:58, Anton Voloshin <[email protected]> wrote:\n>>>> \n>>>> On 21/04/2023 13:45, Pavel Borisov wrote:\n>>>>> The patch is attached. Anyone to commit?\n>>>> \n>>>> Speaking of duplicates, I just found another one:\n>>>>> break;\n>>>>> break;\n>>>> in src/interfaces/ecpg/preproc/variable.c\n>>>> (in all stable branches).\n>>> \n>>> Indeed, coming in via 086cf1458 it's over a decade old.\n>>> \n>>>> Additional patch attached. Or both could go in the same commit, it's up to committer.\n>>> \n>>> I'll take care of these in a bit (unless someone finds more, or objects)\n>>> backpatching them to their respective origins branches.\n>>> \n>>> --\n>>> Daniel Gustafsson\n>> Technically patches 0001 and 0002 in the thread above don't form\n>> patchset i.e. 0002 will not apply over 0001. Fixed this in v2.\n>> (They could be merged into one but as they fix completely unrelated\n>> things, I think a better way to commit them separately.)\n> \n> I wonder if we should backpatch this. On the one hand, this is not\n> critical, and we may skip backpatching. On the other hand,\n> backpatching will evade unnecessary code differences between major\n> versions and potentially simplify further backpatching.\n> \n> I would prefer backpathing. Other opinions?\n\nI had planned to backpatch these two fixes for just that reason, to avoid the risk for other backpatches not applying. \n\n./daniel\n\n",
"msg_date": "Sun, 23 Apr 2023 14:35:53 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "On Sun, Apr 23, 2023 at 3:36 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 23 Apr 2023, at 13:59, Alexander Korotkov <[email protected]> wrote:\n> >\n> > On Fri, Apr 21, 2023 at 2:21 PM Pavel Borisov <[email protected]> wrote:\n> >>> On Fri, 21 Apr 2023 at 15:14, Daniel Gustafsson <[email protected]> wrote:\n> >>>\n> >>>> On 21 Apr 2023, at 12:58, Anton Voloshin <[email protected]> wrote:\n> >>>>\n> >>>> On 21/04/2023 13:45, Pavel Borisov wrote:\n> >>>>> The patch is attached. Anyone to commit?\n> >>>>\n> >>>> Speaking of duplicates, I just found another one:\n> >>>>> break;\n> >>>>> break;\n> >>>> in src/interfaces/ecpg/preproc/variable.c\n> >>>> (in all stable branches).\n> >>>\n> >>> Indeed, coming in via 086cf1458 it's over a decade old.\n> >>>\n> >>>> Additional patch attached. Or both could go in the same commit, it's up to committer.\n> >>>\n> >>> I'll take care of these in a bit (unless someone finds more, or objects)\n> >>> backpatching them to their respective origins branches.\n> >>>\n> >>> --\n> >>> Daniel Gustafsson\n> >> Technically patches 0001 and 0002 in the thread above don't form\n> >> patchset i.e. 0002 will not apply over 0001. Fixed this in v2.\n> >> (They could be merged into one but as they fix completely unrelated\n> >> things, I think a better way to commit them separately.)\n> >\n> > I wonder if we should backpatch this. On the one hand, this is not\n> > critical, and we may skip backpatching. On the other hand,\n> > backpatching will evade unnecessary code differences between major\n> > versions and potentially simplify further backpatching.\n> >\n> > I would prefer backpathing. Other opinions?\n>\n> I had planned to backpatch these two fixes for just that reason, to avoid the risk for other backpatches not applying.\n\nOK. I'm good with this plan.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 24 Apr 2023 04:35:21 +0300",
"msg_from": "Alexander Korotkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
},
{
"msg_contents": "> On 24 Apr 2023, at 03:35, Alexander Korotkov <[email protected]> wrote:\n> On Sun, Apr 23, 2023 at 3:36 PM Daniel Gustafsson <[email protected]> wrote:\n\n>> I had planned to backpatch these two fixes for just that reason, to avoid the risk for other backpatches not applying.\n> \n> OK. I'm good with this plan.\n\nDone.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 24 Apr 2023 11:43:18 +0200",
"msg_from": "Daniel Gustafsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: duplicate function declaration in multirangetypes_selfuncs.c"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nFollowing my previous mail about adding stats on parallelism[1], this \npatch introduces the log_parallel_worker_draught parameter, which \ncontrols whether a log message is produced when a backend attempts to \nspawn a parallel worker but fails due to insufficient worker slots. The \nshortage can stem from insufficent settings for max_worker_processes, \nmax_parallel_worker or max_parallel_maintenance_workers. It could also \nbe caused by another pool (max_logical_replication_workers) or an \nextention using bg worker slots. This new parameter can help database \nadministrators and developers diagnose performance issues related to \nparallelism and optimize the configuration of the system accordingly.\n\nHere is a test script:\n\n```\npsql << _EOF_\n\nSET log_parallel_worker_draught TO on;\n\n-- Index creation\nDROP TABLE IF EXISTS test_pql;\nCREATE TABLE test_pql(i int, j int);\nINSERT INTO test_pql SELECT x,x FROM generate_series(1,1000000) as F(x);\n\nSET max_parallel_workers TO 0;\n\nCREATE INDEX ON test_pql(i);\nREINDEX TABLE test_pql;\n\nRESET max_parallel_workers;\n\n-- VACUUM\nCREATE INDEX ON test_pql(j);\nCREATE INDEX ON test_pql(i,j);\nALTER TABLE test_pql SET (autovacuum_enabled = off);\nDELETE FROM test_pql WHERE i BETWEEN 1000 AND 2000;\n\nSET max_parallel_workers TO 1;\n\nVACUUM (PARALLEL 2, VERBOSE) test_pql;\n\nRESET max_parallel_workers;\n\n-- SELECT\nSET min_parallel_table_scan_size TO 0;\nSET parallel_setup_cost TO 0;\nSET max_parallel_workers TO 1;\n\nEXPLAIN (ANALYZE) SELECT i, avg(j) FROM test_pql GROUP BY i;\n\n_EOF_\n```\n\nWhich produces the following logs:\n\n```\nLOG: Parallel Worker draught during statement execution: workers \nspawned 0, requested 1\nSTATEMENT: CREATE INDEX ON test_pql(i);\n\nLOG: Parallel Worker draught during statement execution: workers \nspawned 0, requested 1\nSTATEMENT: REINDEX TABLE test_pql;\n\nLOG: Parallel Worker draught during statement execution: workers \nspawned 1, requested 2\nCONTEXT: while scanning relation \"public.test_pql\"\nSTATEMENT: VACUUM (PARALLEL 2, VERBOSE) test_pql;\n\nLOG: Parallel Worker draught during statement execution: workers \nspawned 1, requested 2\nSTATEMENT: EXPLAIN (ANALYZE) SELECT i, avg(j) FROM test_pql GROUP BY i;\n```\n\n[1] \nhttps://www.postgresql.org/message-id/[email protected]\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Fri, 21 Apr 2023 15:04:01 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logging parallel worker draught"
},
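For reference, the log lines above come from a check of the form sketched below, emitted when fewer workers could be launched than were requested. The placement and the ParallelContext field names (nworkers_to_launch, nworkers_launched) are assumptions for illustration, not a quote from the patch.

```c
/* After attempting to launch the background workers for a parallel operation. */
if (log_parallel_worker_draught &&
	pcxt->nworkers_launched < pcxt->nworkers_to_launch)
	ereport(LOG,
			(errmsg("Parallel Worker draught during statement execution: "
					"workers spawned %d, requested %d",
					pcxt->nworkers_launched,
					pcxt->nworkers_to_launch)));
```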
{
"msg_contents": "On Fri, Apr 21, 2023 at 6:34 PM Benoit Lobréau\n<[email protected]> wrote:\n>\n> Following my previous mail about adding stats on parallelism[1], this\n> patch introduces the log_parallel_worker_draught parameter, which\n> controls whether a log message is produced when a backend attempts to\n> spawn a parallel worker but fails due to insufficient worker slots.\n>\n\nI don't think introducing a GUC for this is a good idea. We can\ndirectly output this message in the server log either at LOG or DEBUG1\nlevel.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Apr 2023 16:36:02 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 4/22/23 13:06, Amit Kapila wrote:\n> I don't think introducing a GUC for this is a good idea. We can\n> directly output this message in the server log either at LOG or DEBUG1\n> level.\n\nHi,\n\nSorry for the delayed answer, I was away from my computer for a few \ndays. I don't mind removing the guc, but I would like to keep it at the \nLOG level. When I do audits most client are at that level and setting it \nto DEBUG1 whould add too much log for them on the long run.\n\nI'll post the corresponding patch asap.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Fri, 28 Apr 2023 10:11:20 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On Sat, Apr 22, 2023 at 7:06 AM Amit Kapila <[email protected]> wrote:\n> I don't think introducing a GUC for this is a good idea. We can\n> directly output this message in the server log either at LOG or DEBUG1\n> level.\n\nWhy not? It seems like something some people might want to log and\nothers not. Running the whole server at DEBUG1 to get this information\ndoesn't seem like a suitable answer.\n\nWhat I was wondering was whether we would be better off putting this\ninto the statistics collector, vs. doing it via logging. Both\napproaches seem to have pros and cons.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 May 2023 12:33:25 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 5/1/23 18:33, Robert Haas wrote:\n > Why not? It seems like something some people might want to log and\n > others not. Running the whole server at DEBUG1 to get this information\n > doesn't seem like a suitable answer.\n\nSince the statement is also logged, it could spam the log with huge \nqueries, which might also be a reason to stop logging this kind of \nproblems until the issue is fixed.\n\nI attached the patch without the guc anyway just in case.\n\n> What I was wondering was whether we would be better off putting this\n> into the statistics collector, vs. doing it via logging. Both\n> approaches seem to have pros and cons.\n\nWe tried to explore different options with my collegues in another \nthread [1]. I feel like both solutions are worthwhile, and would be \nhelpful. I plan to take a look at the pg_stat_statement patch [2] next.\n\nSince it's my first patch, I elected to choose the easiest solution to \nimplement first. I also proposed this because I think logging can help \npinpoint a lot of problems at a minimal cost, since it is easy to setup \nand exploit for everyone, everywhere\n\n[1] \nhttps://www.postgresql.org/message-id/[email protected]\n\n[2] \nhttps://www.postgresql.org/message-id/flat/6acbe570-068e-bd8e-95d5-00c737b865e8%40gmail.com\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Tue, 2 May 2023 12:44:58 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On Mon, May 1, 2023 at 10:03 PM Robert Haas <[email protected]> wrote:\n>\n> On Sat, Apr 22, 2023 at 7:06 AM Amit Kapila <[email protected]> wrote:\n> > I don't think introducing a GUC for this is a good idea. We can\n> > directly output this message in the server log either at LOG or DEBUG1\n> > level.\n>\n> Why not? It seems like something some people might want to log and\n> others not. Running the whole server at DEBUG1 to get this information\n> doesn't seem like a suitable answer.\n>\n\nWe can output this at the LOG level to avoid running the server at\nDEBUG1 level. There are a few other cases where we are not able to\nspawn the worker or process and those are logged at the LOG level. For\nexample, \"could not fork autovacuum launcher process ..\" or \"too many\nbackground workers\". So, not sure, if this should get a separate\ntreatment. If we fear this can happen frequently enough that it can\nspam the LOG then a GUC may be worthwhile.\n\n> What I was wondering was whether we would be better off putting this\n> into the statistics collector, vs. doing it via logging. Both\n> approaches seem to have pros and cons.\n>\n\nI think it could be easier for users to process the information if it\nis available via some view, so there is a benefit in putting this into\nthe stats subsystem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 May 2023 16:27:19 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On Tue, May 2, 2023 at 6:57 AM Amit Kapila <[email protected]> wrote:\n> We can output this at the LOG level to avoid running the server at\n> DEBUG1 level. There are a few other cases where we are not able to\n> spawn the worker or process and those are logged at the LOG level. For\n> example, \"could not fork autovacuum launcher process ..\" or \"too many\n> background workers\". So, not sure, if this should get a separate\n> treatment. If we fear this can happen frequently enough that it can\n> spam the LOG then a GUC may be worthwhile.\n\nI think we should definitely be afraid of that. I am in favor of a separate GUC.\n\n> > What I was wondering was whether we would be better off putting this\n> > into the statistics collector, vs. doing it via logging. Both\n> > approaches seem to have pros and cons.\n>\n> I think it could be easier for users to process the information if it\n> is available via some view, so there is a benefit in putting this into\n> the stats subsystem.\n\nUnless we do this instead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 May 2023 09:17:45 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "Hi,\r\n\r\nThis thread has been quiet for a while, but I'd like to share some\r\nthoughts.\r\n\r\n+1 to the idea of improving visibility into parallel worker saturation.\r\nBut overall, we should improve parallel processing visibility, so DBAs can\r\ndetect trends in parallel usage ( is the workload doing more parallel, or doing less )\r\nand have enough data to either tune the workload or change parallel GUCs.\r\n\r\n>> We can output this at the LOG level to avoid running the server at\r\n>> DEBUG1 level. There are a few other cases where we are not able to\r\n>> spawn the worker or process and those are logged at the LOG level. For\r\n>> example, \"could not fork autovacuum launcher process ..\" or \"too many\r\n>> background workers\". So, not sure, if this should get a separate\r\n>> treatment. If we fear this can happen frequently enough that it can\r\n>> spam the LOG then a GUC may be worthwhile.\r\n\r\n\r\n> I think we should definitely be afraid of that. I am in favor of a separate GUC.\r\n\r\nCurrently explain ( analyze ) will give you the \"Workers Planned\"\r\nand \"Workers launched\". Logging this via auto_explain is possible, so I am\r\nnot sure we need additional GUCs or debug levels for this info.\r\n\r\n -> Gather (cost=10430.00..10430.01 rows=2 width=8) (actual tim\r\ne=131.826..134.325 rows=3 loops=1)\r\n Workers Planned: 2\r\n Workers Launched: 2\r\n\r\n\r\n>> What I was wondering was whether we would be better off putting this\r\n>> into the statistics collector, vs. doing it via logging. Both\r\n>> approaches seem to have pros and cons.\r\n>>\r\n>> I think it could be easier for users to process the information if it\r\n>> is available via some view, so there is a benefit in putting this into\r\n>> the stats subsystem.\r\n\r\n\r\n> Unless we do this instead.\r\n\r\nAdding cumulative stats is a much better idea.\r\n\r\n3 new columns can be added to pg_stat_database:\r\nworkers_planned, \r\nworkers_launched,\r\nparallel_operations - There could be more than 1 operation\r\nper query, if for example there are multiple Parallel Gather\r\noperations in a plan.\r\n\r\nWith these columns, monitoring tools can trend if there is more\r\nor less parallel work happening over time ( by looking at parallel\r\noperations ) or if the workload is suffering from parallel saturation.\r\nworkers_planned/workers_launched < 1 means there is a lack\r\nof available worker processes.\r\n\r\nAlso, We could add this information on a per query level as well \r\nin pg_stat_statements, but this can be taken up in a seperate\r\ndiscussion.\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Mon, 9 Oct 2023 15:51:34 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 2023-Oct-09, Imseih (AWS), Sami wrote:\n\n> > I think we should definitely be afraid of that. I am in favor of a\n> > separate GUC.\n\nI agree.\n\n> Currently explain ( analyze ) will give you the \"Workers Planned\"\n> and \"Workers launched\". Logging this via auto_explain is possible, so I am\n> not sure we need additional GUCs or debug levels for this info.\n> \n> -> Gather (cost=10430.00..10430.01 rows=2 width=8) (actual tim\n> e=131.826..134.325 rows=3 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n\nI don't think autoexplain is a good substitute for the originally\nproposed log line. The possibility for log bloat is enormous. Some\nexplain plans are gigantic, and I doubt people can afford that kind of\nlog traffic just in case these numbers don't match.\n\n> Adding cumulative stats is a much better idea.\n\nWell, if you read Benoit's earlier proposal at [1] you'll see that he\ndoes propose to have some cumulative stats; this LOG line he proposes\nhere is not a substitute for stats, but rather a complement. I don't\nsee any reason to reject this patch even if we do get stats.\n\nAlso, we do have a patch on stats, by Sotolongo and Bonne here [2]. I\nthink there was some drift on the scope, so eventually they gave up (I\ndon't blame them). If you have concrete ideas on what direction that\neffort should take, I think that thread would welcome that. I have not\nreviewed it myself, and I'm not sure when I'll have time for that.\n\n[1] https://postgr.es/m/[email protected]\n[2] https://postgr.es/m/[email protected] \n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I'm impressed how quickly you are fixing this obscure issue. I came from \nMS SQL and it would be hard for me to put into words how much of a better job\nyou all are doing on [PostgreSQL].\"\n Steve Midgley, http://archives.postgresql.org/pgsql-sql/2008-08/msg00000.php\n\n\n",
"msg_date": "Wed, 11 Oct 2023 12:11:30 +0200",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": ">> Currently explain ( analyze ) will give you the \"Workers Planned\"\r\n>> and \"Workers launched\". Logging this via auto_explain is possible, so I am\r\n>> not sure we need additional GUCs or debug levels for this info.\r\n>>\r\n>> -> Gather (cost=10430.00..10430.01 rows=2 width=8) (actual tim\r\n>> e=131.826..134.325 rows=3 loops=1)\r\n>> Workers Planned: 2\r\n>> Workers Launched: 2\r\n\r\n> I don't think autoexplain is a good substitute for the originally\r\n> proposed log line. The possibility for log bloat is enormous. Some\r\n> explain plans are gigantic, and I doubt people can afford that kind of\r\n> log traffic just in case these numbers don't match.\r\n\r\nCorrect, that is a downside of auto_explain in general. \r\n\r\nThe logging traffic can be controlled by \r\nauto_explain.log_min_duration/auto_explain.sample_rate/etc.\r\nof course. \r\n\r\n> Well, if you read Benoit's earlier proposal at [1] you'll see that he\r\n> does propose to have some cumulative stats; this LOG line he proposes\r\n> here is not a substitute for stats, but rather a complement. I don't\r\n> see any reason to reject this patch even if we do get stats.\r\n\r\n> Also, we do have a patch on stats, by Sotolongo and Bonne here [2]. I\r\n\r\nThanks. I will review the threads in depth and see if the ideas can be combined\r\nin a comprehensive proposal.\r\n\r\nRegarding the current patch, the latest version removes the separate GUC,\r\nbut the user should be able to control this behavior. \r\n\r\nQuery text is logged when log_min_error_statement > default level of \"error\".\r\n\r\nThis could be especially problematic when there is a query running more than 1 Parallel\r\nGather node that is in draught. In those cases each node will end up \r\ngenerating a log with the statement text. So, a single query execution could end up \r\nhaving multiple log lines with the statement text.\r\n\r\ni.e.\r\nLOG: Parallel Worker draught during statement execution: workers spawned 0, requested 2\r\nSTATEMENT: select (select count(*) from large) as a, (select count(*) from large) as b, (select count(*) from large) as c ;\r\nLOG: Parallel Worker draught during statement execution: workers spawned 0, requested 2\r\nSTATEMENT: select (select count(*) from large) as a, (select count(*) from large) as b, (select count(*) from large) as c ;\r\nLOG: Parallel Worker draught during statement execution: workers spawned 0, requested 2\r\nSTATEMENT: select (select count(*) from large) as a, (select count(*) from large) as b, (select count(*) from large) as c ;\r\n\r\nI wonder if it will be better to accumulate the total # of workers planned and # of workers launched and\r\nlogging this information at the end of execution?\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n",
"msg_date": "Wed, 11 Oct 2023 15:26:49 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 10/11/23 17:26, Imseih (AWS), Sami wrote:\n\nThank you for resurrecting this thread.\n\n>> Well, if you read Benoit's earlier proposal at [1] you'll see that he\n>> does propose to have some cumulative stats; this LOG line he proposes\n>> here is not a substitute for stats, but rather a complement. I don't\n>> see any reason to reject this patch even if we do get stats.\n\nI believe both cumulative statistics and logs are needed. Logs excel in \npinpointing specific queries at precise times, while statistics provide \na broader overview of the situation. Additionally, I often encounter \nsituations where clients lack pg_stat_statements and can't restart their \nproduction promptly.\n\n> Regarding the current patch, the latest version removes the separate GUC,\n> but the user should be able to control this behavior.\n\nI created this patch in response to Amit Kapila's proposal to keep the \ndiscussion ongoing. However, I still favor the initial version with the \nGUCs.\n\n> Query text is logged when log_min_error_statement > default level of \"error\".\n> \n> This could be especially problematic when there is a query running more than 1 Parallel\n> Gather node that is in draught. In those cases each node will end up\n> generating a log with the statement text. So, a single query execution could end up\n> having multiple log lines with the statement text.\n> ...\n> I wonder if it will be better to accumulate the total # of workers planned and # of workers launched and\n> logging this information at the end of execution?\n\nlog_temp_files exhibits similar behavior when a query involves multiple \non-disk sorts. I'm uncertain whether this is something we should or need \nto address. I'll explore whether the error message can be made more \ninformative.\n\n[local]:5437 postgres@postgres=# SET work_mem to '125kB';\n[local]:5437 postgres@postgres=# SET log_temp_files TO 0;\n[local]:5437 postgres@postgres=# SET client_min_messages TO log;\n[local]:5437 postgres@postgres=# WITH a AS ( SELECT x FROM \ngenerate_series(1,10000) AS F(x) ORDER BY 1 ) , b AS (SELECT x FROM \ngenerate_series(1,10000) AS F(x) ORDER BY 1 ) SELECT * FROM a,b;\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.20\", size \n122880 => First sort\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.19\", size 140000\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.23\", size 140000\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.22\", size \n122880 => Second sort\nLOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.21\", size 140000\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Thu, 12 Oct 2023 12:01:46 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=c3=a9au?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "> I believe both cumulative statistics and logs are needed. Logs excel in \r\n> pinpointing specific queries at precise times, while statistics provide \r\n> a broader overview of the situation. Additionally, I often encounter \r\n> situations where clients lack pg_stat_statements and can't restart their \r\n> production promptly.\r\n\r\nI agree that logging will be very useful here. \r\nCumulative stats/pg_stat_statements can be handled in a separate discussion.\r\n\r\n> log_temp_files exhibits similar behavior when a query involves multiple\r\n> on-disk sorts. I'm uncertain whether this is something we should or need\r\n> to address. I'll explore whether the error message can be made more\r\n> informative.\r\n\r\n\r\n> [local]:5437 postgres@postgres=# SET work_mem to '125kB';\r\n> [local]:5437 postgres@postgres=# SET log_temp_files TO 0;\r\n> [local]:5437 postgres@postgres=# SET client_min_messages TO log;\r\n> [local]:5437 postgres@postgres=# WITH a AS ( SELECT x FROM\r\n> generate_series(1,10000) AS F(x) ORDER BY 1 ) , b AS (SELECT x FROM\r\n> generate_series(1,10000) AS F(x) ORDER BY 1 ) SELECT * FROM a,b;\r\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.20\", size\r\n> 122880 => First sort\r\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.19\", size 140000\r\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.23\", size 140000\r\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.22\", size\r\n> 122880 => Second sort\r\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp138850.21\", size 140000\r\n\r\nThat is true.\r\n\r\nUsers should also control if they want this logging overhead or not, \r\nThe best answer is a new GUC that is OFF by default.\r\n\r\nI am also not sure if we want to log draught only. I think it's important\r\nto not only see which operations are in parallel draught, but to also log \r\noperations are using 100% of planned workers. \r\nThis will help the DBA tune queries that are eating up the parallel workers.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n",
"msg_date": "Sun, 15 Oct 2023 17:48:51 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "Hi,\n\nI see the thread went a bit quiet, but I think it'd be very useful (and\ndesirable) to have this information in log. So let me share my thoughts\nabout the patch / how it should work.\n\nThe patch is pretty straightforward, I don't have any comments about the\ncode as is. Obviously, some of the following comments might require\nchanges to the patch, etc.\n\nI see there was discussion about logging vs. adding this to the pgstats\nsystem. I agree with Alvaro it should not be one or the other - these\nare complementary approaches, used by different tools. The logging is\nneeded for tools like pgbadger etc. for example.\n\nI do see a lot of value in adding this to the statistics collector, and\nto things like pg_stat_statements, but that's being discussed in a\nseparate thread, so I'll comment on that there.\n\nAs for whether to have a GUC for this, I certainly think we should have\none. We have GUC for logging stuff that we generally expect to happen,\nlike lock waits, temp files, etc.\n\nTrue, there are similar cases that we just log every time, like when we\ncan't fork a process (\"could not fork autovacuum launcher process\"), but\nI'd say those are unexpected to happen in general / seem more like an\nerror in the environment. While we may exhaust parallel workers pretty\neasily / often, especially on busy systems.\n\n\nThere's a couple things I'm not quite sure about:\n\n\n1) name of the GUC\n\nI find the \"log_parallel_worker_draught\" to be rather unclear :-( Maybe\nit's just me and everyone else just immediately understands what this\ndoes / what will happen after it's set to \"on\", but I find it rather\nnon-intuitive.\n\n\n2) logging just the failures provides an incomplete view\n\nAs a DBA trying to evaluate if I need to bump up the number of workers,\nI'd be interested what fraction of parallel workers fails to start. If\nit's 1%, that's probably a short spike and I don't need to do anything.\nIf it's 50%, well, that might have unpredictable impact on user queries,\nand thus something I may need to look into. But if we only log the\nfailures, that's not possible.\n\nI may be able to approximate this somehow by correlating this with the\nquery/transaction rate, or something, but ideally I'd like to know how\nmany parallel workers we planned to use, and how many actually started.\n\nObviously, logging this for every Gather [Merge] node, even when all the\nworkers start, that may be a lot of data. Perhaps the GUC should not be\non/off, but something like\n\nlog_parallel_workers = {none | failures | all}\n\nwhere \"failures\" only logs when at least one worker fails to start, and\n\"all\" logs everything.\n\nAFAIK Sami made the same observation/argument in his last message [1].\n\n\n3) node-level or query-level?\n\nThere's a brief discussion about whether this should be a node-level or\nquery-level thing in [2]:\n\n> I wonder if it will be better to accumulate the total # of workers\n> planned and # of workers launched and logging this information at the\n> end of execution?\n\nAnd there's a reference to log_temp_files, but it's not clear to me if\nthat's meant to be an argument for doing it the same way (node-level).\n\nI think we should not do node-level logging just because that's what\nlog_temp_files=on dies. I personally think log_temp_files was\nimplemented like this mostly because it was simple.\n\nThere's no value in having information about every individual temp file,\nbecause we don't even know which node produced it. 
It'd be perfectly\nsufficient to log some summary statistics (number of files, total space,\n...), either for the query as a whole, or perhaps per node (because\nthat's what matters for work_mem). So I don't think we should mimic this\njust because log_temp_files does this.\n\nI don't know how difficult would it be to track/collect this information\nfor the whole query.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/D04977E3-9F54-452C-A4C4-CDA67F392BD1%40amazon.com\n\n[2]\nhttps://www.postgresql.org/message-id/11e34b80-b0a6-e2e4-1606-1f5077379a34%40dalibo.com\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 25 Feb 2024 20:13:33 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On Mon, Feb 26, 2024 at 6:13 AM Tomas Vondra\n<[email protected]> wrote:\n>\n\n> 1) name of the GUC\n>\n> I find the \"log_parallel_worker_draught\" to be rather unclear :-( Maybe\n> it's just me and everyone else just immediately understands what this\n> does / what will happen after it's set to \"on\", but I find it rather\n> non-intuitive.\n>\n\nAlso, I don't understand how the word \"draught\" (aka \"draft\") makes\nsense here -- I assume the intended word was \"drought\" (???).\n\n==========\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 26 Feb 2024 09:32:52 +1100",
"msg_from": "Peter Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 2/25/24 20:13, Tomas Vondra wrote:\n > 1) name of the GUC\n...\n > 2) logging just the failures provides an incomplete view\n > log_parallel_workers = {none | failures | all}>\n > where \"failures\" only logs when at least one worker fails to start, and\n > \"all\" logs everything.\n >\n > AFAIK Sami made the same observation/argument in his last message [1].\n\nI like the name and different levels you propose. I was initially \nthinking that the overall picture would be better summarized (an easier \nto implement) with pg_stat_statements. But with the granularity you \npropose, the choice is in the hands of the DBA, which is great and \nprovides more options when we don't have full control of the installation.\n\n > 3) node-level or query-level?\n...\n > There's no value in having information about every individual temp file,\n > because we don't even know which node produced it. It'd be perfectly\n > sufficient to log some summary statistics (number of files, total space,\n > ...), either for the query as a whole, or perhaps per node (because\n > that's what matters for work_mem). So I don't think we should mimic this\n > just because log_temp_files does this.\n\nI must admit that, given my (poor) technical level, I went for the \neasiest way.\n\nIf we go for the three levels you proposed, \"node-level\" makes even less \nsense (and has great \"potential\" for spam).\n\nI can see one downside to the \"query-level\" approach: it might be more \ndifficult to to give information in cases where the query doesn't end \nnormally. It's sometimes useful to have hints about what was going wrong \nbefore a query was cancelled or crashed, which log_temp_files kinda does.\n\n > I don't know how difficult would it be to track/collect this information\n > for the whole query.\n\nI am a worried this will be a little too much for me, given the time and \nthe knowledge gap I have (both in C and PostgreSQL internals). I'll try \nto look anyway.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 10:55:27 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "\n\nOn 2/25/24 23:32, Peter Smith wrote:\n> Also, I don't understand how the word \"draught\" (aka \"draft\") makes\n> sense here -- I assume the intended word was \"drought\" (???).\n\nyes, that was the intent, sorry about that. English is not my native \nlangage and I was convinced the spelling was correct.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Tue, 27 Feb 2024 11:03:54 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 2/27/24 10:55, Benoit Lobréau wrote:\n> On 2/25/24 20:13, Tomas Vondra wrote:\n>> 1) name of the GUC\n> ...\n>> 2) logging just the failures provides an incomplete view\n>> log_parallel_workers = {none | failures | all}>\n>> where \"failures\" only logs when at least one worker fails to start, and\n>> \"all\" logs everything.\n>>\n>> AFAIK Sami made the same observation/argument in his last message [1].\n> \n> I like the name and different levels you propose. I was initially\n> thinking that the overall picture would be better summarized (an easier\n> to implement) with pg_stat_statements. But with the granularity you\n> propose, the choice is in the hands of the DBA, which is great and\n> provides more options when we don't have full control of the installation.\n> \n\nGood that you like the idea with multiple levels.\n\nI agree pg_stat_statements may be an easier way to get a summary, but it\nhas limitations too (e.g. no built-in ability to show how the metrics\nevolves over time, which is easier to restore from logs). So I think\nthose approaches are complementary.\n\n>> 3) node-level or query-level?\n> ...\n>> There's no value in having information about every individual temp file,\n>> because we don't even know which node produced it. It'd be perfectly\n>> sufficient to log some summary statistics (number of files, total space,\n>> ...), either for the query as a whole, or perhaps per node (because\n>> that's what matters for work_mem). So I don't think we should mimic this\n>> just because log_temp_files does this.\n> \n> I must admit that, given my (poor) technical level, I went for the\n> easiest way.\n> \n\nThat's understandable, I'd do the same thing.\n\n> If we go for the three levels you proposed, \"node-level\" makes even less\n> sense (and has great \"potential\" for spam).\n> \n\nPerhaps.\n\n> I can see one downside to the \"query-level\" approach: it might be more\n> difficult to to give information in cases where the query doesn't end\n> normally. It's sometimes useful to have hints about what was going wrong\n> before a query was cancelled or crashed, which log_temp_files kinda does.\n> \n\nThat is certainly true, but it's not a new thing, I believe. IIRC we may\nnot report statistics until the end of the transaction, so no progress\nupdates, I'm not sure what happens if the doesn't end correctly (e.g.\nbackend dies, ...). Similarly for the temporary files, we don't report\nthose until the temporary file gets closed, so I'm not sure you'd get a\nlot of info about the progress either.\n\nI admit I haven't tried and maybe I don't remember the details wrong.\nMight be useful to experiment with this first a little bit ...\n\n>> I don't know how difficult would it be to track/collect this information\n>> for the whole query.\n> \n> I am a worried this will be a little too much for me, given the time and\n> the knowledge gap I have (both in C and PostgreSQL internals). I'll try\n> to look anyway.\n> \n\nI certainly understand that, and I realize it may feel daunting and even\ndiscouraging. What I can promise is that I'm willing to help, either by\nsuggesting ways to approach the problems, doing reviews, and so on.\nWould that be sufficient for you to continue working on this patch?\n\nSome random thoughts/ideas about how to approach this:\n\n- For parallel workers I think this might be as simple as adding some\ncounters into QueryDesc, and update those during query exec (instead of\njust logging stuff directly). 
I'm not sure if it should be added to\n\"Instrumentation\" or separately.\n\n- I was thinking maybe we could just walk the execution plan and collect\nthe counts that way. But I'm not sure that'd work if the Gather node\nhappens to be executed repeatedly (in a loop). Also, not sure we'd want\nto walk all plans.\n\n- While log_temp_files is clearly out of scope for this patch, it might\nbe useful to think about it and how it should behave. We've already used\nlog_temp_files to illustrate some issues with logging the info right\naway, so maybe there's something to learn here ...\n\n- For temporary files I think it'd be more complicated, because we can\ncreate temporary files from many different places, not just in executor,\nso we can't simply update QueryDesc etc. Also, the places that log info\nabout temporary files (using ReportTemporaryFileUsage) only really know\nabout individual temporary files, so if a Sort or HashJoin creates a\n1000 files, we'll get one LOG for each of those temp files. But they're\nreally a single \"combined\" file. So maybe we should introduce some sort\nof \"context\" to group those files, and only accumulate/log the size for\nthe group as a whole? Maybe it'd even allow printing some info about\nwhat the temporary file is for (e.g. tuplestore / tuplesort / ...).\n\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 27 Feb 2024 15:09:24 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "\nOn 2024-02-27 Tu 05:03, Benoit Lobréau wrote:\n>\n>\n> On 2/25/24 23:32, Peter Smith wrote:\n>> Also, I don't understand how the word \"draught\" (aka \"draft\") makes\n>> sense here -- I assume the intended word was \"drought\" (???).\n>\n> yes, that was the intent, sorry about that. English is not my native \n> langage and I was convinced the spelling was correct.\n\n\nBoth are English words spelled correctly, but with very different \nmeanings. (Drought is definitely the one you want here.) This reminds me \nof the Errata section of Sellars and Yeatman's classic \"history\" work \n\"1066 And All That\":\n\n\"For 'pheasant' read 'peasant' throughout.\"\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 27 Feb 2024 20:45:26 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 2/27/24 15:09, Tomas Vondra wrote> That is certainly true, but it's \nnot a new thing, I believe. IIRC we may\n > not report statistics until the end of the transaction, so no progress\n > updates, I'm not sure what happens if the doesn't end correctly (e.g.\n > backend dies, ...). Similarly for the temporary files, we don't report\n > those until the temporary file gets closed, so I'm not sure you'd get a\n > lot of info about the progress either.\n >\n > I admit I haven't tried and maybe I don't remember the details wrong.\n > Might be useful to experiment with this first a little bit ...\n\nAh, yes, Tempfile usage is reported on tempfile closure or deletion\nfor shared file sets.\n\n > I certainly understand that, and I realize it may feel daunting and even\n > discouraging. What I can promise is that I'm willing to help, either by\n > suggesting ways to approach the problems, doing reviews, and so on.\n > Would that be sufficient for you to continue working on this patch?\n\nYes, thanks for the proposal, I'll work on it on report here. I am otherwise\nbooked on company projects so I cannot promise a quick progress.\n\n > Some random thoughts/ideas about how to approach this:\n >\n > - For parallel workers I think this might be as simple as adding some\n > counters into QueryDesc, and update those during query exec (instead of\n > just logging stuff directly). I'm not sure if it should be added to\n > \"Instrumentation\" or separately.\n\nI will start here to see how it works.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Thu, 29 Feb 2024 09:24:20 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "\n\n> On 29 Feb 2024, at 11:24, Benoit Lobréau <[email protected]> wrote:\n> \n> Yes, thanks for the proposal, I'll work on it on report here.\n\nHi Benoit!\n\nThis is kind reminder that this thread is waiting for your response.\nCF entry [0] is in \"Waiting on Author\", I'll move it to July CF.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4291/\n\n",
"msg_date": "Mon, 8 Apr 2024 11:05:36 +0300",
"msg_from": "\"Andrey M. Borodin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "On 4/8/24 10:05, Andrey M. Borodin wrote:\n> Hi Benoit!\n> \n> This is kind reminder that this thread is waiting for your response.\n> CF entry [0] is in \"Waiting on Author\", I'll move it to July CF.\n\nHi thanks for the reminder,\n\nThe past month as been hectic for me.\nIt should calm down by next week at wich point I'll have time to go back \nto this. sorry for the delay.\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Mon, 8 Apr 2024 10:13:05 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "Hi,\n\nHere is a new version of the patch. Sorry for the long delay, I was hit \nby a motivation drought and was quite busy otherwise.\n\nThe guc is now called `log_parallel_workers` and has three possible values:\n\n* \"none\": disables logging\n* \"all\": logs parallel worker info for all parallel queries or utilities\n* \"failure\": logs only when the number of parallel workers planned \ncouldn't be reached.\n\nFor this, I added several members to the EState struct.\n\nEach gather node / gather merge node updates the values and the \noffending queries are displayed during standard_ExecutorEnd.\n\nFor CREATE INDEX / REINDEX on btree and brin, I check the parallel \ncontext struct (pcxt) during _bt_end_parallel() or _brin_end_parallel() \nand display a log message when needed.\n\nFor vacuum, I do the same in parallel_vacuum_end().\n\nI added some information to the error message for parallel queries as an \nexperiment. I find it useful, but it can be removed, if you re not \nconvinced.\n\n2024-08-27 15:59:11.089 CEST [54585] LOG: 1 parallel nodes planned (1 \nobtained all their workers, 0 obtained none), 2 workers planned (2 \nworkers launched)\n2024-08-27 15:59:11.089 CEST [54585] STATEMENT: EXPLAIN (ANALYZE)\n\t\tSELECT i, avg(j) FROM test_pql GROUP BY i;\n\n2024-08-27 15:59:14.006 CEST [54585] LOG: 2 parallel nodes planned (0 \nobtained all their workers, 1 obtained none), 4 workers planned (1 \nworkers launched)\n2024-08-27 15:59:14.006 CEST [54585] STATEMENT: EXPLAIN (ANALYZE)\n\t\tSELECT i, avg(j) FROM test_pql GROUP BY i\n\t\tUNION\n\t\tSELECT i, avg(j) FROM test_pql GROUP BY i;\n\nFor CREATE INDEX / REDINDEX / VACUUMS:\n\n2024-08-27 15:58:59.769 CEST [54521] LOG: 1 workers planned (0 workers \nlaunched)\n2024-08-27 15:58:59.769 CEST [54521] STATEMENT: REINDEX TABLE test_pql;\n\nDo you think this is better?\n\nI am not sure if a struct is needed to store the es_nworkers* and if the \nmodification I did to parallel.h is ok.\n\nThanks to: Jehan-Guillaume de Rorthais, Guillaume Lelarge and Franck \nBoudehen for the help and motivation boost.\n\n(sorry for the spam, I had to resend the mail to the list)\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Wed, 28 Aug 2024 14:58:51 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "I found out in [1] that I am not correctly tracking the workers for \nvacuum commands. I trap workers used by \nparallel_vacuum_cleanup_all_indexes but not \nparallel_vacuum_bulkdel_all_indexes.\n\nBack to the drawing board.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/[email protected]\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com\n\n\n",
"msg_date": "Wed, 4 Sep 2024 17:31:25 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
},
{
"msg_contents": "Here is a new version that fixes the aforementioned problems.\n\nIf this patch is accepted in this form, the counters could be used for \nthe patch in pg_stat_database. [1]\n[1] \nhttps://www.postgresql.org/message-id/flat/[email protected]\n\n-- \nBenoit Lobréau\nConsultant\nhttp://dalibo.com",
"msg_date": "Wed, 18 Sep 2024 16:46:00 +0200",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Logging parallel worker draught"
}
]